* [v1 00/12] ENETC4 PMD support
@ 2024-10-18 7:26 vanshika.shukla
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
` (12 more replies)
0 siblings, 13 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This series introduces a new ENETC4 PMD driver for NXP's i.MX95
SoC, enabling basic network operations.
Vanshika Shukla (12):
net/enetc: Add initial ENETC4 PMD driver support
net/enetc: Add RX and TX queue APIs for ENETC4 PMD
net/enetc: Optimize ENETC4 data path
net/enetc: Add TX checksum offload and RX checksum validation
net/enetc: Add basic statistics
net/enetc: Add packet type parsing support
net/enetc: Add support for multiple queues with RSS
net/enetc: Add VF to PF messaging support and primary MAC setup
net/enetc: Add multicast and promiscuous mode support
net/enetc: Add link speed and status support
net/enetc: Add link status notification support
net/enetc: Add MAC and VLAN filter support
MAINTAINERS | 3 +
config/arm/arm64_imx_linux_gcc | 17 +
config/arm/meson.build | 14 +
doc/guides/nics/enetc4.rst | 99 ++
doc/guides/nics/features/enetc4.ini | 22 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/enetc/base/enetc4_hw.h | 186 ++++
drivers/net/enetc/base/enetc_hw.h | 52 +-
drivers/net/enetc/enetc.h | 246 ++++-
drivers/net/enetc/enetc4_ethdev.c | 1051 ++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 1364 ++++++++++++++++++++++++
drivers/net/enetc/enetc_cbdr.c | 311 ++++++
drivers/net/enetc/enetc_ethdev.c | 5 +-
drivers/net/enetc/enetc_rxtx.c | 171 ++-
drivers/net/enetc/kpage_ncache_api.h | 70 ++
drivers/net/enetc/meson.build | 5 +-
drivers/net/enetc/ntmp.h | 110 ++
18 files changed, 3690 insertions(+), 41 deletions(-)
create mode 100644 config/arm/arm64_imx_linux_gcc
create mode 100644 doc/guides/nics/enetc4.rst
create mode 100644 doc/guides/nics/features/enetc4.ini
create mode 100644 drivers/net/enetc/base/enetc4_hw.h
create mode 100644 drivers/net/enetc/enetc4_ethdev.c
create mode 100644 drivers/net/enetc/enetc4_vf.c
create mode 100644 drivers/net/enetc/enetc_cbdr.c
create mode 100644 drivers/net/enetc/kpage_ncache_api.h
create mode 100644 drivers/net/enetc/ntmp.h
--
2.25.1
* [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-20 23:39 ` Stephen Hemminger
` (4 more replies)
2024-10-18 7:26 ` [v1 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
` (11 subsequent siblings)
12 siblings, 5 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Vanshika Shukla, Anatoly Burakov
Cc: Apeksha Gupta
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch introduces a new ENETC4 PMD driver for NXP's i.MX95
SoC, enabling basic network operations. Key features include:
- Probe and teardown functions
- Hardware initialization for both Virtual Functions (VFs)
and Physical Function (PF)
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
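A minimal usage sketch (not part of this patch) of how an application sees the
new PMD once an enetc4 PF or VF is bound to vfio-pci (or uio_pci_generic for
the VF); only generic rte_ethdev calls are used, and the printed fields are
illustrative:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

int
main(int argc, char **argv)
{
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* each probed enetc4 PF/VF shows up as an ethdev port */
	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_dev_info info;
		struct rte_ether_addr mac;
		char buf[RTE_ETHER_ADDR_FMT_SIZE];

		if (rte_eth_dev_info_get(port_id, &info) != 0)
			continue;
		/* MAC programmed by enetc4_mac_init(), or a random
		 * locally administered one when the SI has none
		 */
		if (rte_eth_macaddr_get(port_id, &mac) != 0)
			continue;
		rte_ether_format_addr(buf, sizeof(buf), &mac);
		printf("port %u: driver %s, MAC %s, max_rx_pktlen %u\n",
		       port_id, info.driver_name, buf, info.max_rx_pktlen);
	}

	return rte_eal_cleanup();
}
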
MAINTAINERS | 3 +
config/arm/arm64_imx_linux_gcc | 17 ++
config/arm/meson.build | 14 ++
doc/guides/nics/enetc4.rst | 99 ++++++++
doc/guides/nics/features/enetc4.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/enetc/base/enetc4_hw.h | 111 +++++++++
drivers/net/enetc/base/enetc_hw.h | 3 +-
drivers/net/enetc/enetc.h | 43 ++--
drivers/net/enetc/enetc4_ethdev.c | 309 +++++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 146 ++++++++++++
drivers/net/enetc/enetc_ethdev.c | 5 +-
drivers/net/enetc/kpage_ncache_api.h | 70 ++++++
drivers/net/enetc/meson.build | 4 +-
15 files changed, 816 insertions(+), 22 deletions(-)
create mode 100644 config/arm/arm64_imx_linux_gcc
create mode 100644 doc/guides/nics/enetc4.rst
create mode 100644 doc/guides/nics/features/enetc4.ini
create mode 100644 drivers/net/enetc/base/enetc4_hw.h
create mode 100644 drivers/net/enetc/enetc4_ethdev.c
create mode 100644 drivers/net/enetc/enetc4_vf.c
create mode 100644 drivers/net/enetc/kpage_ncache_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index f09cda04c8..b45524330c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -949,9 +949,12 @@ F: doc/guides/nics/features/dpaa2.ini
NXP enetc
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+M: Vanshika Shukla <vanshika.shukla@nxp.com>
F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
+F: doc/guides/nics/enetc4.rst
F: doc/guides/nics/features/enetc.ini
+F: doc/guides/nics/features/enetc4.ini
NXP enetfec - EXPERIMENTAL
M: Apeksha Gupta <apeksha.gupta@nxp.com>
diff --git a/config/arm/arm64_imx_linux_gcc b/config/arm/arm64_imx_linux_gcc
new file mode 100644
index 0000000000..c876ae1d2b
--- /dev/null
+++ b/config/arm/arm64_imx_linux_gcc
@@ -0,0 +1,17 @@
+[binaries]
+c = ['ccache', 'aarch64-linux-gnu-gcc']
+cpp = ['ccache', 'aarch64-linux-gnu-g++']
+ar = 'aarch64-linux-gnu-ar'
+as = 'aarch64-linux-gnu-as'
+strip = 'aarch64-linux-gnu-strip'
+pkgconfig = 'aarch64-linux-gnu-pkg-config'
+pcap-config = ''
+
+[host_machine]
+system = 'linux'
+cpu_family = 'aarch64'
+cpu = 'armv8.2-a'
+endian = 'little'
+
+[properties]
+platform = 'imx'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 55be7c8711..6112244f2c 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -561,6 +561,18 @@ soc_hip10 = {
'numa': true
}
+soc_imx = {
+ 'description': 'NXP IMX',
+ 'implementer': '0x41',
+ 'part_number': '0xd05',
+ 'flags': [
+ ['RTE_MACHINE', '"armv8a"'],
+ ['RTE_MAX_LCORE', 6],
+ ['RTE_MAX_NUMA_NODES', 1],
+ ],
+ 'numa': false,
+}
+
soc_kunpeng920 = {
'description': 'HiSilicon Kunpeng 920',
'implementer': '0x48',
@@ -684,6 +696,7 @@ graviton2: AWS Graviton2
graviton3: AWS Graviton3
graviton4: AWS Graviton4
hip10: HiSilicon HIP10
+imx: NXP IMX
kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
@@ -722,6 +735,7 @@ socs = {
'graviton3': soc_graviton3,
'graviton4': soc_graviton4,
'hip10': soc_hip10,
+ 'imx': soc_imx,
'kunpeng920': soc_kunpeng920,
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
diff --git a/doc/guides/nics/enetc4.rst b/doc/guides/nics/enetc4.rst
new file mode 100644
index 0000000000..8ffdc53376
--- /dev/null
+++ b/doc/guides/nics/enetc4.rst
@@ -0,0 +1,99 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2024 NXP
+
+ENETC4 Poll Mode Driver
+=======================
+
+The ENETC4 NIC PMD (**librte_net_enetc**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX95** SoC.
+
+More information can be found at `NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors/i-mx-95-applications-processor-family-high-performance-safety-enabled-platform-with-eiq-neutron-npu:iMX95>`_.
+
+This section provides an overview of the NXP ENETC4
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- ENETC4 overview
+- Supported ENETC4 SoCs
+- PCI bus driver
+- NIC driver
+- Prerequisites
+- Driver compilation and testing
+
+ENETC4 Overview
+---------------
+
+ENETC4 is a PCI Integrated End Point (IEP). An IEP implements
+peripheral devices in an SoC such that software sees them as PCIe devices.
+ENETC4 is an evolution of BDR (Buffer Descriptor Ring) based networking
+IPs.
+
+This infrastructure simplifies adding support for an IEP and facilitates the following:
+
+- Device discovery and location
+- Resource requirement discovery and allocation (e.g. interrupt assignment,
+ device register address)
+- Event reporting
+
+Supported ENETC4 SoCs
+---------------------
+
+- i.MX95
+
+NIC Driver (PMD)
+----------------
+
+The ENETC4 PMD is a traditional DPDK PMD that bridges the RTE framework and
+ENETC4 internal drivers, supporting both Virtual Functions (VFs) and
+the Physical Function (PF). Key functionality includes:
+
+- Driver registration: The device vendor table is registered in the PCI subsystem.
+- Device discovery: The RTE framework scans the PCI bus for connected devices, triggering the ENETC4 driver's probe function.
+- Initialization: The probe function configures basic device registers and sets up Buffer Descriptor (BD) rings.
+- Receive processing: Upon packet reception, the BD Ring status bit is set, facilitating packet processing.
+- Transmission: Packet transmission precedes reception, ensuring efficient data transfer.
+
+Prerequisites
+-------------
+
+The main prerequisites for running the ENETC4 PMD on ENETC4-compatible
+boards are:
+
+#. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* ARM Toolchain <https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz>`_.
+
+#. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://github.com/nxp-imx/linux-imx>`_.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux LF**
+
+ NXP Linux Factory (LF) includes support for the family
+ of QorIQ® Arm architecture-based system-on-chip (SoC) processors
+ and the corresponding boards.
+
+ It includes the Linux board support packages (BSPs) for NXP SoCs,
+ a fully operational toolchain, and kernel and board-specific modules.
+
+ The i.MX LF release and related information can be obtained from `LF <https://www.nxp.com/design/design-center/software/embedded-software/i-mx-software/embedded-linux-for-i-mx-applications-processors:IMXLINUX>`_.
+ Refer to the section: Linux Current Release.
+
+- **kpage_ncache Kernel Module**
+
+ The i.MX95 platform is an I/O non-cache-coherent platform, and the driver depends on
+ the kpage_ncache.ko kernel module to mark the hugepage memory as non-cacheable.
+
+ The module can be obtained from: `kpage_ncache <https://github.com/nxp-qoriq/dpdk-extras/tree/main/linux/kpage_ncache>`_
+
+Driver compilation and testing
+------------------------------
+
+Follow the instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **testpmd**.
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
new file mode 100644
index 0000000000..ca3b9ae992
--- /dev/null
+++ b/doc/guides/nics/features/enetc4.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetc4' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..da2af04777 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -27,6 +27,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetc4
enetfec
enic
fail_safe
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index d2301461ce..0f11dcbd8d 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -139,6 +139,10 @@ New Features
* Added SR-IOV VF support.
* Added recent 1400/14000 and 15000 models to the supported list.
+* **Added ENETC4 PMD**
+
+ * Added ENETC4 PMD for NXP i.MX95 platform.
+
* **Updated Marvell cnxk net driver.**
* Added ethdev driver support for CN20K SoC.
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
new file mode 100644
index 0000000000..34a4ca3b02
--- /dev/null
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -0,0 +1,111 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ *
+ * This header file defines the register offsets and bit fields
+ * of ENETC4 PF and VFs.
+ */
+
+#ifndef _ENETC4_HW_H_
+#define _ENETC4_HW_H_
+#include <rte_io.h>
+
+/* ENETC4 device IDs */
+#define ENETC4_DEV_ID 0xe101
+#define ENETC4_DEV_ID_VF 0xef00
+#define PCI_VENDOR_ID_NXP 0x1131
+
+/***************************ENETC port registers**************************/
+#define ENETC4_PMR 0x10
+#define ENETC4_PMR_EN (BIT(16) | BIT(17) | BIT(18))
+
+/* Port Station interface promiscuous MAC mode register */
+#define ENETC4_PSIPMMR 0x200
+#define PSIPMMR_SI0_MAC_UP BIT(0)
+#define PSIPMMR_SI_MAC_UP (BIT(0) | BIT(1) | BIT(2))
+#define PSIPMMR_SI0_MAC_MP BIT(16)
+#define PSIPMMR_SI_MAC_MP (BIT(16) | BIT(17) | BIT(18))
+
+/* Port Station interface a primary MAC address registers */
+#define ENETC4_PSIPMAR0(a) ((a) * 0x80 + 0x2000)
+#define ENETC4_PSIPMAR1(a) ((a) * 0x80 + 0x2004)
+
+/* Port MAC address register 0/1 */
+#define ENETC4_PMAR0 0x4020
+#define ENETC4_PMAR1 0x4024
+
+/* Port operational register */
+#define ENETC4_POR 0x4100
+
+/* Port traffic class a transmit maximum SDU register */
+#define ENETC4_PTCTMSDUR(a) ((a) * 0x20 + 0x4208)
+#define SDU_TYPE_MPDU BIT(16)
+
+#define ENETC4_PM_CMD_CFG(mac) (0x5008 + (mac) * 0x400)
+#define PM_CMD_CFG_TX_EN BIT(0)
+#define PM_CMD_CFG_RX_EN BIT(1)
+
+/* i.MX95 supports jumbo frames, but it is recommended to set the maximum
+ * frame size to 2000 bytes.
+ */
+#define ENETC4_MAC_MAXFRM_SIZE 2000
+
+/* Port MAC 0/1 Maximum Frame Length Register */
+#define ENETC4_PM_MAXFRM(mac) (0x5014 + (mac) * 0x400)
+
+/* Config register to reset counters */
+#define ENETC4_PM0_STAT_CONFIG 0x50e0
+/* Stats Reset Bit */
+#define ENETC4_CLEAR_STATS BIT(2)
+
+/* Port MAC 0/1 Receive Ethernet Octets Counter */
+#define ENETC4_PM_REOCT(mac) (0x5100 + (mac) * 0x400)
+
+/* Port MAC 0/1 Receive Frame Error Counter */
+#define ENETC4_PM_RERR(mac) (0x5138 + (mac) * 0x400)
+
+/* Port MAC 0/1 Receive Dropped Packets Counter */
+#define ENETC4_PM_RDRP(mac) (0x5158 + (mac) * 0x400)
+
+/* Port MAC 0/1 Receive Packets Counter */
+#define ENETC4_PM_RPKT(mac) (0x5160 + (mac) * 0x400)
+
+/* Port MAC 0/1 Transmit Frame Error Counter */
+#define ENETC4_PM_TERR(mac) (0x5238 + (mac) * 0x400)
+
+/* Port MAC 0/1 Transmit Ethernet Octets Counter */
+#define ENETC4_PM_TEOCT(mac) (0x5200 + (mac) * 0x400)
+
+/* Port MAC 0/1 Transmit Packets Counter */
+#define ENETC4_PM_TPKT(mac) (0x5260 + (mac) * 0x400)
+
+/* Port MAC 0 Interface Mode Control Register */
+#define ENETC4_PM_IF_MODE(mac) (0x5300 + (mac) * 0x400)
+#define PM_IF_MODE_IFMODE (BIT(0) | BIT(1) | BIT(2))
+#define IFMODE_XGMII 0
+#define IFMODE_RMII 3
+#define IFMODE_RGMII 4
+#define IFMODE_SGMII 5
+#define PM_IF_MODE_ENA BIT(15)
+
+/* general register accessors */
+#define enetc4_rd_reg(reg) rte_read32((void *)(reg))
+#define enetc4_wr_reg(reg, val) rte_write32((val), (void *)(reg))
+
+#define enetc4_rd(hw, off) enetc4_rd_reg((size_t)(hw)->reg + (off))
+#define enetc4_wr(hw, off, val) enetc4_wr_reg((size_t)(hw)->reg + (off), val)
+/* port register accessors - PF only */
+#define enetc4_port_rd(hw, off) enetc4_rd_reg((size_t)(hw)->port + (off))
+#define enetc4_port_wr(hw, off, val) \
+ enetc4_wr_reg((size_t)(hw)->port + (off), val)
+/* BDR register accessors, see ENETC_BDR() */
+#define enetc4_bdr_rd(hw, t, n, off) \
+ enetc4_rd(hw, ENETC_BDR(t, n, off))
+#define enetc4_bdr_wr(hw, t, n, off, val) \
+ enetc4_wr(hw, ENETC_BDR(t, n, off), val)
+#define enetc4_txbdr_rd(hw, n, off) enetc4_bdr_rd(hw, TX, n, off)
+#define enetc4_rxbdr_rd(hw, n, off) enetc4_bdr_rd(hw, RX, n, off)
+#define enetc4_txbdr_wr(hw, n, off, val) \
+ enetc4_bdr_wr(hw, TX, n, off, val)
+#define enetc4_rxbdr_wr(hw, n, off, val) \
+ enetc4_bdr_wr(hw, RX, n, off, val)
+#endif
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 66fad58e5e..2d63c54db6 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -1,10 +1,11 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2024 NXP
*/
#ifndef _ENETC_HW_H_
#define _ENETC_HW_H_
#include <rte_io.h>
+#include <ethdev_pci.h>
#define BIT(x) ((uint64_t)1 << ((x)))
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 7163633bce..87fc51b776 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -1,18 +1,21 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2019,2024 NXP
*/
#ifndef _ENETC_H_
#define _ENETC_H_
#include <rte_time.h>
+#include <ethdev_pci.h>
+#include "compat.h"
#include "base/enetc_hw.h"
+#include "enetc_logs.h"
#define PCI_VENDOR_ID_FREESCALE 0x1957
/* Max TX rings per ENETC. */
-#define MAX_TX_RINGS 2
+#define MAX_TX_RINGS 1
/* Max RX rings per ENTEC. */
#define MAX_RX_RINGS 1
@@ -33,21 +36,11 @@
#define ENETC_ETH_MAX_LEN (RTE_ETHER_MTU + \
RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
-/*
- * upper_32_bits - return bits 32-63 of a number
- * @n: the number we're accessing
- *
- * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress
- * the "right shift count >= width of type" warning when that quantity is
- * 32-bits.
- */
-#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+/* eth name size */
+#define ENETC_ETH_NAMESIZE 20
-/*
- * lower_32_bits - return bits 0-31 of a number
- * @n: the number we're accessing
- */
-#define lower_32_bits(n) ((uint32_t)(n))
+/* size for marking hugepage non-cacheable */
+#define SIZE_2MB 0x200000
#define ENETC_TXBD(BDR, i) (&(((struct enetc_tx_bd *)((BDR).bd_base))[i]))
#define ENETC_RXBD(BDR, i) (&(((union enetc_rx_bd *)((BDR).bd_base))[i]))
@@ -74,6 +67,7 @@ struct enetc_bdr {
};
struct rte_mempool *mb_pool; /* mbuf pool to populate RX ring. */
struct rte_eth_dev *ndev;
+ const struct rte_memzone *mz;
};
/*
@@ -96,6 +90,20 @@ struct enetc_eth_adapter {
#define ENETC_DEV_PRIVATE_TO_INTR(adapter) \
(&((struct enetc_eth_adapter *)adapter)->intr)
+/*
+ * ENETC4 function prototypes
+ */
+int enetc4_pci_remove(struct rte_pci_device *pci_dev);
+int enetc4_dev_configure(struct rte_eth_dev *dev);
+int enetc4_dev_close(struct rte_eth_dev *dev);
+int enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+ struct rte_eth_dev_info *dev_info);
+
+/*
+ * enetc4_vf function prototype
+ */
+int enetc4_vf_dev_stop(struct rte_eth_dev *dev);
+
/*
* RX/TX ENETC function prototypes
*/
@@ -104,8 +112,9 @@ uint16_t enetc_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts,
uint16_t enetc_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-
int enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt);
+void enetc4_dev_hw_init(struct rte_eth_dev *eth_dev);
+void print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr);
static inline int
enetc_bd_unused(struct enetc_bdr *bdr)
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
new file mode 100644
index 0000000000..3fe14bd5a6
--- /dev/null
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#include <stdbool.h>
+#include <rte_random.h>
+#include <dpaax_iova_table.h>
+
+#include "kpage_ncache_api.h"
+#include "base/enetc4_hw.h"
+#include "enetc_logs.h"
+#include "enetc.h"
+
+static int
+enetc4_dev_start(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t val;
+
+ PMD_INIT_FUNC_TRACE();
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(0));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(0),
+ val | PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN);
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(1));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(1),
+ val | PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN);
+
+ /* Enable port */
+ val = enetc4_port_rd(enetc_hw, ENETC4_PMR);
+ enetc4_port_wr(enetc_hw, ENETC4_PMR, val | ENETC4_PMR_EN);
+
+ /* Enable port transmit/receive */
+ enetc4_port_wr(enetc_hw, ENETC4_POR, 0);
+
+ return 0;
+}
+
+static int
+enetc4_dev_stop(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t val;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Disable port */
+ val = enetc4_port_rd(enetc_hw, ENETC4_PMR);
+ enetc4_port_wr(enetc_hw, ENETC4_PMR, val & (~ENETC4_PMR_EN));
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(0));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(0),
+ val & (~(PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN)));
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(1));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(1),
+ val & (~(PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN)));
+
+ return 0;
+}
+
+static int
+enetc4_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
+{
+ uint32_t *mac = (uint32_t *)hw->mac.addr;
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t high_mac = 0;
+ uint16_t low_mac = 0;
+ char eth_name[ENETC_ETH_NAMESIZE];
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Enabling Station Interface */
+ enetc4_wr(enetc_hw, ENETC_SIMR, ENETC_SIMR_EN);
+
+ *mac = (uint32_t)enetc4_port_rd(enetc_hw, ENETC4_PSIPMAR0(0));
+ high_mac = (uint32_t)*mac;
+ mac++;
+ *mac = (uint16_t)enetc4_port_rd(enetc_hw, ENETC4_PSIPMAR1(0));
+ low_mac = (uint16_t)*mac;
+
+ if ((high_mac | low_mac) == 0) {
+ char *first_byte;
+
+ ENETC_PMD_NOTICE("MAC is not available for this SI, "
+ "set random MAC");
+ mac = (uint32_t *)hw->mac.addr;
+ *mac = (uint32_t)rte_rand();
+ first_byte = (char *)mac;
+ *first_byte &= 0xfe; /* clear multicast bit */
+ *first_byte |= 0x02; /* set local assignment bit (IEEE802) */
+
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR0, *mac);
+ mac++;
+ *mac = (uint16_t)rte_rand();
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR1, *mac);
+ print_ethaddr("New address: ",
+ (const struct rte_ether_addr *)hw->mac.addr);
+ }
+
+ /* Allocate memory for storing MAC addresses */
+ snprintf(eth_name, sizeof(eth_name), "enetc4_eth_%d", eth_dev->data->port_id);
+ eth_dev->data->mac_addrs = rte_zmalloc(eth_name,
+ RTE_ETHER_ADDR_LEN, 0);
+ if (!eth_dev->data->mac_addrs) {
+ ENETC_PMD_ERR("Failed to allocate %d bytes needed to "
+ "store MAC addresses",
+ RTE_ETHER_ADDR_LEN * 1);
+ return -ENOMEM;
+ }
+
+ /* Copy the permanent MAC address */
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
+ ð_dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+int
+enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+ struct rte_eth_dev_info *dev_info)
+{
+ PMD_INIT_FUNC_TRACE();
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = MAX_BD_COUNT,
+ .nb_min = MIN_BD_COUNT,
+ .nb_align = BD_ALIGN,
+ };
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = MAX_BD_COUNT,
+ .nb_min = MIN_BD_COUNT,
+ .nb_align = BD_ALIGN,
+ };
+ dev_info->max_rx_queues = MAX_RX_RINGS;
+ dev_info->max_tx_queues = MAX_TX_RINGS;
+ dev_info->max_rx_pktlen = ENETC4_MAC_MAXFRM_SIZE;
+
+ return 0;
+}
+
+int
+enetc4_dev_close(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ if (hw->device_id == ENETC4_DEV_ID_VF)
+ ret = enetc4_vf_dev_stop(dev);
+ else
+ ret = enetc4_dev_stop(dev);
+
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ dpaax_iova_table_depopulate();
+
+ return ret;
+}
+
+int
+enetc4_dev_configure(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t max_len;
+ uint32_t val;
+
+ PMD_INIT_FUNC_TRACE();
+
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc4_port_wr(enetc_hw, ENETC4_PM_MAXFRM(0), ENETC_SET_MAXFRM(max_len));
+
+ val = ENETC4_MAC_MAXFRM_SIZE | SDU_TYPE_MPDU;
+ enetc4_port_wr(enetc_hw, ENETC4_PTCTMSDUR(0), val | SDU_TYPE_MPDU);
+
+ return 0;
+}
+
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_enetc4_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_NXP, ENETC4_DEV_ID) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+/* Features supported by this driver */
+static const struct eth_dev_ops enetc4_ops = {
+ .dev_configure = enetc4_dev_configure,
+ .dev_start = enetc4_dev_start,
+ .dev_stop = enetc4_dev_stop,
+ .dev_close = enetc4_dev_close,
+ .dev_infos_get = enetc4_dev_infos_get,
+};
+
+/*
+ * Storing the HW base addresses
+ *
+ * @param eth_dev
+ * - Pointer to the structure rte_eth_dev
+ */
+void
+enetc4_dev_hw_init(struct rte_eth_dev *eth_dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ eth_dev->rx_pkt_burst = &enetc_recv_pkts;
+ eth_dev->tx_pkt_burst = &enetc_xmit_pkts;
+
+ /* Retrieving and storing the HW base address of device */
+ hw->hw.reg = (void *)pci_dev->mem_resource[0].addr;
+ hw->device_id = pci_dev->id.device_id;
+
+ /* Calculating and storing the base HW addresses */
+ hw->hw.port = (void *)((size_t)hw->hw.reg + ENETC_PORT_BASE);
+ hw->hw.global = (void *)((size_t)hw->hw.reg + ENETC_GLOBAL_BASE);
+}
+
+/**
+ * Initialisation of the enetc4 device
+ *
+ * @param eth_dev
+ * - Pointer to the structure rte_eth_dev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, negative value.
+ */
+
+static int
+enetc4_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int error = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ eth_dev->dev_ops = &enetc4_ops;
+ enetc4_dev_hw_init(eth_dev);
+
+ error = enetc4_mac_init(hw, eth_dev);
+ if (error != 0) {
+ ENETC_PMD_ERR("MAC initialization failed");
+ return -1;
+ }
+
+ /* Set MTU */
+ enetc_port_wr(&hw->hw, ENETC4_PM_MAXFRM(0),
+ ENETC_SET_MAXFRM(RTE_ETHER_MAX_LEN));
+ eth_dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ dpaax_iova_table_populate();
+
+ ENETC_PMD_DEBUG("port_id %d vendorID=0x%x deviceID=0x%x",
+ eth_dev->data->port_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
+ return 0;
+}
+
+static int
+enetc4_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ return enetc4_dev_close(eth_dev);
+}
+
+static int
+enetc4_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct enetc_eth_adapter),
+ enetc4_dev_init);
+}
+
+int
+enetc4_pci_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_remove(pci_dev, enetc4_dev_uninit);
+}
+
+static struct rte_pci_driver rte_enetc4_pmd = {
+ .id_table = pci_id_enetc4_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = enetc4_pci_probe,
+ .remove = enetc4_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_enetc4, rte_enetc4_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_enetc4, pci_id_enetc4_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_enetc4, "* vfio-pci");
+RTE_LOG_REGISTER_DEFAULT(enetc4_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
new file mode 100644
index 0000000000..7996d6decb
--- /dev/null
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#include <stdbool.h>
+#include <rte_random.h>
+#include <dpaax_iova_table.h>
+#include "base/enetc4_hw.h"
+#include "base/enetc_hw.h"
+#include "enetc_logs.h"
+#include "enetc.h"
+
+int
+enetc4_vf_dev_stop(struct rte_eth_dev *dev __rte_unused)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ return 0;
+}
+
+static int
+enetc4_vf_dev_start(struct rte_eth_dev *dev __rte_unused)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ return 0;
+}
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_vf_id_enetc4_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_NXP, ENETC4_DEV_ID_VF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+/* Features supported by this driver */
+static const struct eth_dev_ops enetc4_vf_ops = {
+ .dev_configure = enetc4_dev_configure,
+ .dev_start = enetc4_vf_dev_start,
+ .dev_stop = enetc4_vf_dev_stop,
+ .dev_close = enetc4_dev_close,
+ .dev_infos_get = enetc4_dev_infos_get,
+};
+
+static int
+enetc4_vf_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
+{
+ uint32_t *mac = (uint32_t *)hw->mac.addr;
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t high_mac = 0;
+ uint16_t low_mac = 0;
+ char vf_eth_name[ENETC_ETH_NAMESIZE];
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Enabling Station Interface */
+ enetc4_wr(enetc_hw, ENETC_SIMR, ENETC_SIMR_EN);
+ *mac = (uint32_t)enetc_rd(enetc_hw, ENETC_SIPMAR0);
+ high_mac = (uint32_t)*mac;
+ mac++;
+ *mac = (uint16_t)enetc_rd(enetc_hw, ENETC_SIPMAR1);
+ low_mac = (uint16_t)*mac;
+
+ if ((high_mac | low_mac) == 0) {
+ char *first_byte;
+ ENETC_PMD_NOTICE("MAC is not available for this SI, "
+ "set random MAC");
+ mac = (uint32_t *)hw->mac.addr;
+ *mac = (uint32_t)rte_rand();
+ first_byte = (char *)mac;
+ *first_byte &= 0xfe; /* clear multicast bit */
+ *first_byte |= 0x02; /* set local assignment bit (IEEE802) */
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR0, *mac);
+ mac++;
+ *mac = (uint16_t)rte_rand();
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR1, *mac);
+ print_ethaddr("New address: ",
+ (const struct rte_ether_addr *)hw->mac.addr);
+ }
+
+ /* Allocate memory for storing MAC addresses */
+ snprintf(vf_eth_name, sizeof(vf_eth_name), "enetc4_vf_eth_%d", eth_dev->data->port_id);
+ eth_dev->data->mac_addrs = rte_zmalloc(vf_eth_name,
+ RTE_ETHER_ADDR_LEN, 0);
+ if (!eth_dev->data->mac_addrs) {
+ ENETC_PMD_ERR("Failed to allocate %d bytes needed to "
+ "store MAC addresses",
+ RTE_ETHER_ADDR_LEN * 1);
+ return -ENOMEM;
+ }
+
+ /* Copy the permanent MAC address */
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
+ ð_dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int error = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ eth_dev->dev_ops = &enetc4_vf_ops;
+ enetc4_dev_hw_init(eth_dev);
+
+ error = enetc4_vf_mac_init(hw, eth_dev);
+ if (error != 0) {
+ ENETC_PMD_ERR("MAC initialization failed");
+ return -1;
+ }
+
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ dpaax_iova_table_populate();
+
+ ENETC_PMD_DEBUG("port_id %d vendorID=0x%x deviceID=0x%x",
+ eth_dev->data->port_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
+ return 0;
+}
+
+static int
+enetc4_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct enetc_eth_adapter),
+ enetc4_vf_dev_init);
+}
+
+static struct rte_pci_driver rte_enetc4_vf_pmd = {
+ .id_table = pci_vf_id_enetc4_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = enetc4_vf_pci_probe,
+ .remove = enetc4_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_enetc4_vf, rte_enetc4_vf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_enetc4_vf, pci_vf_id_enetc4_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_enetc4_vf, "* uio_pci_generic");
+RTE_LOG_REGISTER_DEFAULT(enetc4_vf_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index ffbecc407c..d7cba1ba83 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -1,9 +1,8 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2024 NXP
*/
#include <stdbool.h>
-#include <ethdev_pci.h>
#include <rte_random.h>
#include <dpaax_iova_table.h>
@@ -145,7 +144,7 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
return rte_eth_linkstatus_set(dev, &link);
}
-static void
+void
print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
{
char buf[RTE_ETHER_ADDR_FMT_SIZE];
diff --git a/drivers/net/enetc/kpage_ncache_api.h b/drivers/net/enetc/kpage_ncache_api.h
new file mode 100644
index 0000000000..01291b1d7f
--- /dev/null
+++ b/drivers/net/enetc/kpage_ncache_api.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ *
+ * Copyright 2022-2024 NXP
+ *
+ */
+
+#ifndef KPG_NC_MODULE_H
+#define KPG_NC_MODULE_H
+
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include <rte_log.h>
+
+#include "enetc_logs.h"
+
+#define KPG_NC_DEVICE_NAME "page_ncache"
+#define KPG_NC_DEVICE_PATH "/dev/" KPG_NC_DEVICE_NAME
+
+/* IOCTL */
+#define KPG_NC_MAGIC_NUM 0xf0f0
+#define KPG_NC_IOCTL_UPDATE _IOWR(KPG_NC_MAGIC_NUM, 1, size_t)
+
+
+#define KNRM "\x1B[0m"
+#define KRED "\x1B[31m"
+#define KGRN "\x1B[32m"
+#define KYEL "\x1B[33m"
+#define KBLU "\x1B[34m"
+#define KMAG "\x1B[35m"
+#define KCYN "\x1B[36m"
+#define KWHT "\x1B[37m"
+
+#if defined(RTE_ARCH_ARM) && defined(RTE_ARCH_64)
+static inline void flush_tlb(void *p)
+{
+ asm volatile("dc civac, %0" ::"r"(p));
+ asm volatile("dsb ish");
+ asm volatile("isb");
+}
+#endif
+
+static inline void mark_kpage_ncache(uint64_t huge_page)
+{
+ int fd, ret;
+
+ fd = open(KPG_NC_DEVICE_PATH, O_RDONLY);
+ if (fd < 0) {
+ ENETC_PMD_ERR(KYEL "Error: " KNRM "Could not open: %s",
+ KPG_NC_DEVICE_PATH);
+ return;
+ }
+ ENETC_PMD_DEBUG(KCYN "%s: Huge_Page addr =" KNRM " 0x%" PRIX64,
+ __func__, huge_page);
+ ret = ioctl(fd, KPG_NC_IOCTL_UPDATE, (size_t)&huge_page);
+ if (ret) {
+ ENETC_PMD_ERR(KYEL "Error(%d): " KNRM "failed to mark page non-cacheable",
+ ret);
+ close(fd);
+ return;
+ }
+#if defined(RTE_ARCH_ARM) && defined(RTE_ARCH_64)
+ flush_tlb((void *)huge_page);
+#endif
+ ENETC_PMD_DEBUG(KYEL "Page should be non-cacheable now" KNRM);
+
+ close(fd);
+}
+#endif /* KPG_NC_MODULE_H */
diff --git a/drivers/net/enetc/meson.build b/drivers/net/enetc/meson.build
index 966dc694fc..6e00758a36 100644
--- a/drivers/net/enetc/meson.build
+++ b/drivers/net/enetc/meson.build
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018,2024 NXP
if not is_linux
build = false
@@ -8,6 +8,8 @@ endif
deps += ['common_dpaax']
sources = files(
+ 'enetc4_ethdev.c',
+ 'enetc4_vf.c',
'enetc_ethdev.c',
'enetc_rxtx.c',
)
--
2.25.1
* [v1 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-20 23:40 ` Stephen Hemminger
2024-10-18 7:26 ` [v1 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
` (10 subsequent siblings)
12 siblings, 1 reply; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces queue setup, release, start, and stop
APIs for ENETC4 RX and TX queues, enabling:
- Queue configuration and initialization
- Queue resource management (setup, release)
- Queue operation control (start, stop)
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
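For context, a minimal sketch (not part of this patch) of how these queue ops
are reached through the generic ethdev layer; the pool name, mempool sizing
and descriptor counts below are illustrative only:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static int
setup_single_queue_port(uint16_t port_id)
{
	struct rte_eth_conf conf;
	struct rte_mempool *mp;
	int ret;

	memset(&conf, 0, sizeof(conf));

	mp = rte_pktmbuf_pool_create("enetc4_rx_pool", 1024, 0, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (mp == NULL)
		return -1;

	/* one RX and one TX queue, matching MAX_RX_RINGS/MAX_TX_RINGS */
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* dispatched to enetc4_rx_queue_setup()/enetc4_tx_queue_setup() */
	ret = rte_eth_rx_queue_setup(port_id, 0, 256, rte_socket_id(), NULL, mp);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 256, rte_socket_id(), NULL);
	if (ret < 0)
		return ret;

	ret = rte_eth_dev_start(port_id);
	if (ret < 0)
		return ret;

	/* per-queue control maps to enetc4_rx_queue_stop()/_start() */
	ret = rte_eth_dev_rx_queue_stop(port_id, 0);
	if (ret == 0)
		ret = rte_eth_dev_rx_queue_start(port_id, 0);

	return ret;
}
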
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/enetc.h | 13 +
drivers/net/enetc/enetc4_ethdev.c | 434 ++++++++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 8 +
drivers/net/enetc/enetc_rxtx.c | 32 +-
5 files changed, 485 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index ca3b9ae992..37b548dcab 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Queue start/stop = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 87fc51b776..9901e434d9 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -98,6 +98,19 @@ int enetc4_dev_configure(struct rte_eth_dev *dev);
int enetc4_dev_close(struct rte_eth_dev *dev);
int enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *dev_info);
+int enetc4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+ uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool);
+int enetc4_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+int enetc4_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+void enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+int enetc4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf);
+int enetc4_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+int enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+void enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
/*
* enetc4_vf function prototype
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 3fe14bd5a6..4d05546308 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -143,10 +143,338 @@ enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
return 0;
}
+static int
+mark_memory_ncache(struct enetc_bdr *bdr, const char *mz_name, unsigned int size)
+{
+ uint64_t huge_page;
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_reserve_aligned(mz_name,
+ size, SOCKET_ID_ANY,
+ RTE_MEMZONE_2MB, size);
+ if (mz) {
+ bdr->bd_base = mz->addr;
+ } else {
+ ENETC_PMD_ERR("Failed to allocate memzone,"
+ " please reserve 2MB hugepages");
+ return -ENOMEM;
+ }
+ if (mz->hugepage_sz != size)
+ ENETC_PMD_WARN("Unexpected hugepage size of queue memzone: 0x%" PRIx64,
+ mz->hugepage_sz);
+ bdr->mz = mz;
+
+ /* Mark memory NON-CACHEABLE */
+ huge_page =
+ (uint64_t)RTE_PTR_ALIGN_FLOOR(bdr->bd_base, size);
+ mark_kpage_ncache(huge_page);
+
+ return 0;
+}
+
+static int
+enetc4_alloc_txbdr(uint16_t port_id, struct enetc_bdr *txr, uint16_t nb_desc)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ int size;
+
+ size = nb_desc * sizeof(struct enetc_swbd);
+ txr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
+ if (txr->q_swbd == NULL)
+ return -ENOMEM;
+
+ snprintf(mz_name, sizeof(mz_name), "bdt_addr_%d", port_id);
+ if (mark_memory_ncache(txr, mz_name, SIZE_2MB)) {
+ ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
+ rte_free(txr->q_swbd);
+ txr->q_swbd = NULL;
+ return -ENOMEM;
+ }
+ txr->bd_count = nb_desc;
+ txr->next_to_clean = 0;
+ txr->next_to_use = 0;
+
+ return 0;
+}
+
+static void
+enetc4_free_bdr(struct enetc_bdr *rxr)
+{
+ rte_memzone_free(rxr->mz);
+ rxr->mz = NULL;
+ rte_free(rxr->q_swbd);
+ rxr->q_swbd = NULL;
+ rxr->bd_base = NULL;
+}
+
+static void
+enetc4_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+{
+ int idx = tx_ring->index;
+ phys_addr_t bd_address;
+
+ bd_address = (phys_addr_t)
+ rte_mem_virt2iova((const void *)tx_ring->bd_base);
+ enetc4_txbdr_wr(hw, idx, ENETC_TBBAR0,
+ lower_32_bits((uint64_t)bd_address));
+ enetc4_txbdr_wr(hw, idx, ENETC_TBBAR1,
+ upper_32_bits((uint64_t)bd_address));
+ enetc4_txbdr_wr(hw, idx, ENETC_TBLENR,
+ ENETC_RTBLENR_LEN(tx_ring->bd_count));
+
+ enetc4_txbdr_wr(hw, idx, ENETC_TBCIR, 0);
+ enetc4_txbdr_wr(hw, idx, ENETC_TBCISR, 0);
+ tx_ring->tcir = (void *)((size_t)hw->reg +
+ ENETC_BDR(TX, idx, ENETC_TBCIR));
+ tx_ring->tcisr = (void *)((size_t)hw->reg +
+ ENETC_BDR(TX, idx, ENETC_TBCISR));
+}
+
+int
+enetc4_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int err = 0;
+ struct enetc_bdr *tx_ring;
+ struct rte_eth_dev_data *data = dev->data;
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(data->dev_private);
+
+ PMD_INIT_FUNC_TRACE();
+ if (nb_desc > MAX_BD_COUNT)
+ return -1;
+
+ tx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
+ if (tx_ring == NULL) {
+ ENETC_PMD_ERR("Failed to allocate TX ring memory");
+ err = -ENOMEM;
+ return -1;
+ }
+
+ tx_ring->index = queue_idx;
+ err = enetc4_alloc_txbdr(data->port_id, tx_ring, nb_desc);
+ if (err)
+ goto fail;
+
+ tx_ring->ndev = dev;
+ enetc4_setup_txbdr(&priv->hw.hw, tx_ring);
+ data->tx_queues[queue_idx] = tx_ring;
+ if (!tx_conf->tx_deferred_start) {
+ /* enable ring */
+ enetc4_txbdr_wr(&priv->hw.hw, tx_ring->index,
+ ENETC_TBMR, ENETC_TBMR_EN);
+ dev->data->tx_queue_state[tx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ } else {
+ dev->data->tx_queue_state[tx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+fail:
+ rte_free(tx_ring);
+
+ return err;
+}
+
+void
+enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void *txq = dev->data->tx_queues[qid];
+
+ if (txq == NULL)
+ return;
+
+ struct enetc_bdr *tx_ring = (struct enetc_bdr *)txq;
+ struct enetc_eth_hw *eth_hw =
+ ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
+ struct enetc_hw *hw;
+ struct enetc_swbd *tx_swbd;
+ int i;
+ uint32_t val;
+
+ /* Disable the ring */
+ hw = ð_hw->hw;
+ val = enetc4_txbdr_rd(hw, tx_ring->index, ENETC_TBMR);
+ val &= (~ENETC_TBMR_EN);
+ enetc4_txbdr_wr(hw, tx_ring->index, ENETC_TBMR, val);
+
+ /* clean the ring*/
+ i = tx_ring->next_to_clean;
+ tx_swbd = &tx_ring->q_swbd[i];
+ while (tx_swbd->buffer_addr != NULL) {
+ rte_pktmbuf_free(tx_swbd->buffer_addr);
+ tx_swbd->buffer_addr = NULL;
+ tx_swbd++;
+ i++;
+ if (unlikely(i == tx_ring->bd_count)) {
+ i = 0;
+ tx_swbd = &tx_ring->q_swbd[i];
+ }
+ }
+
+ enetc4_free_bdr(tx_ring);
+ rte_free(tx_ring);
+}
+
+static int
+enetc4_alloc_rxbdr(uint16_t port_id, struct enetc_bdr *rxr,
+ uint16_t nb_desc)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ int size;
+
+ size = nb_desc * sizeof(struct enetc_swbd);
+ rxr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
+ if (rxr->q_swbd == NULL)
+ return -ENOMEM;
+
+ snprintf(mz_name, sizeof(mz_name), "bdr_addr_%d", port_id);
+ if (mark_memory_ncache(rxr, mz_name, SIZE_2MB)) {
+ ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
+ rte_free(rxr->q_swbd);
+ rxr->q_swbd = NULL;
+ return -ENOMEM;
+ }
+ rxr->bd_count = nb_desc;
+ rxr->next_to_clean = 0;
+ rxr->next_to_use = 0;
+ rxr->next_to_alloc = 0;
+
+ return 0;
+}
+
+static void
+enetc4_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring,
+ struct rte_mempool *mb_pool)
+{
+ int idx = rx_ring->index;
+ uint16_t buf_size;
+ phys_addr_t bd_address;
+
+ bd_address = (phys_addr_t)
+ rte_mem_virt2iova((const void *)rx_ring->bd_base);
+
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBBAR0,
+ lower_32_bits((uint64_t)bd_address));
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBBAR1,
+ upper_32_bits((uint64_t)bd_address));
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBLENR,
+ ENETC_RTBLENR_LEN(rx_ring->bd_count));
+
+ rx_ring->mb_pool = mb_pool;
+ rx_ring->rcir = (void *)((size_t)hw->reg +
+ ENETC_BDR(RX, idx, ENETC_RBCIR));
+ enetc_refill_rx_ring(rx_ring, (enetc_bd_unused(rx_ring)));
+ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rx_ring->mb_pool) -
+ RTE_PKTMBUF_HEADROOM);
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBBSR, buf_size);
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
+}
+
+int
+enetc4_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ int err = 0;
+ struct enetc_bdr *rx_ring;
+ struct rte_eth_dev_data *data = dev->data;
+ struct enetc_eth_adapter *adapter =
+ ENETC_DEV_PRIVATE(data->dev_private);
+ uint64_t rx_offloads = data->dev_conf.rxmode.offloads;
+
+ PMD_INIT_FUNC_TRACE();
+ if (nb_rx_desc > MAX_BD_COUNT)
+ return -1;
+
+ rx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
+ if (rx_ring == NULL) {
+ ENETC_PMD_ERR("Failed to allocate RX ring memory");
+ err = -ENOMEM;
+ return err;
+ }
+
+ rx_ring->index = rx_queue_id;
+ err = enetc4_alloc_rxbdr(data->port_id, rx_ring, nb_rx_desc);
+ if (err)
+ goto fail;
+
+ rx_ring->ndev = dev;
+ enetc4_setup_rxbdr(&adapter->hw.hw, rx_ring, mb_pool);
+ data->rx_queues[rx_queue_id] = rx_ring;
+
+ if (!rx_conf->rx_deferred_start) {
+ /* enable ring */
+ enetc4_rxbdr_wr(&adapter->hw.hw, rx_ring->index, ENETC_RBMR,
+ ENETC_RBMR_EN);
+ dev->data->rx_queue_state[rx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ } else {
+ dev->data->rx_queue_state[rx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
+ RTE_ETHER_CRC_LEN : 0);
+ return 0;
+fail:
+ rte_free(rx_ring);
+
+ return err;
+}
+
+void
+enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void *rxq = dev->data->rx_queues[qid];
+
+ if (rxq == NULL)
+ return;
+
+ struct enetc_bdr *rx_ring = (struct enetc_bdr *)rxq;
+ struct enetc_eth_hw *eth_hw =
+ ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
+ struct enetc_swbd *q_swbd;
+ struct enetc_hw *hw;
+ uint32_t val;
+ int i;
+
+ /* Disable the ring */
+ hw = ð_hw->hw;
+ val = enetc4_rxbdr_rd(hw, rx_ring->index, ENETC_RBMR);
+ val &= (~ENETC_RBMR_EN);
+ enetc4_rxbdr_wr(hw, rx_ring->index, ENETC_RBMR, val);
+
+ /* Clean the ring */
+ i = rx_ring->next_to_clean;
+ q_swbd = &rx_ring->q_swbd[i];
+ while (i != rx_ring->next_to_use) {
+ rte_pktmbuf_free(q_swbd->buffer_addr);
+ q_swbd->buffer_addr = NULL;
+ q_swbd++;
+ i++;
+ if (unlikely(i == rx_ring->bd_count)) {
+ i = 0;
+ q_swbd = &rx_ring->q_swbd[i];
+ }
+ }
+
+ enetc4_free_bdr(rx_ring);
+ rte_free(rx_ring);
+}
+
int
enetc4_dev_close(struct rte_eth_dev *dev)
{
struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint16_t i;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -158,6 +486,18 @@ enetc4_dev_close(struct rte_eth_dev *dev)
else
ret = enetc4_dev_stop(dev);
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ enetc4_rx_queue_release(dev, i);
+ dev->data->rx_queues[i] = NULL;
+ }
+ dev->data->nb_rx_queues = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ enetc4_tx_queue_release(dev, i);
+ dev->data->tx_queues[i] = NULL;
+ }
+ dev->data->nb_tx_queues = 0;
+
if (rte_eal_iova_mode() == RTE_IOVA_PA)
dpaax_iova_table_depopulate();
@@ -185,7 +525,93 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
return 0;
}
+int
+enetc4_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *rx_ring;
+ uint32_t rx_data;
+ PMD_INIT_FUNC_TRACE();
+ rx_ring = dev->data->rx_queues[qidx];
+ if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+ rx_data = enetc4_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+ ENETC_RBMR);
+ rx_data = rx_data | ENETC_RBMR_EN;
+ enetc4_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+ rx_data);
+ dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
+
+int
+enetc4_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *rx_ring;
+ uint32_t rx_data;
+
+ PMD_INIT_FUNC_TRACE();
+ rx_ring = dev->data->rx_queues[qidx];
+ if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+ rx_data = enetc4_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+ ENETC_RBMR);
+ rx_data = rx_data & (~ENETC_RBMR_EN);
+ enetc4_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+ rx_data);
+ dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+int
+enetc4_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *tx_ring;
+ uint32_t tx_data;
+
+ PMD_INIT_FUNC_TRACE();
+ tx_ring = dev->data->tx_queues[qidx];
+ if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+ tx_data = enetc4_txbdr_rd(&priv->hw.hw, tx_ring->index,
+ ENETC_TBMR);
+ tx_data = tx_data | ENETC_TBMR_EN;
+ enetc4_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+ tx_data);
+ dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
+
+int
+enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *tx_ring;
+ uint32_t tx_data;
+
+ PMD_INIT_FUNC_TRACE();
+ tx_ring = dev->data->tx_queues[qidx];
+ if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+ tx_data = enetc4_txbdr_rd(&priv->hw.hw, tx_ring->index,
+ ENETC_TBMR);
+ tx_data = tx_data & (~ENETC_TBMR_EN);
+ enetc4_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+ tx_data);
+ dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
/*
* The set of PCI devices this driver supports
@@ -202,6 +628,14 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_stop = enetc4_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .rx_queue_setup = enetc4_rx_queue_setup,
+ .rx_queue_start = enetc4_rx_queue_start,
+ .rx_queue_stop = enetc4_rx_queue_stop,
+ .rx_queue_release = enetc4_rx_queue_release,
+ .tx_queue_setup = enetc4_tx_queue_setup,
+ .tx_queue_start = enetc4_tx_queue_start,
+ .tx_queue_stop = enetc4_tx_queue_stop,
+ .tx_queue_release = enetc4_tx_queue_release,
};
/*
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 7996d6decb..0c68229a8d 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -41,6 +41,14 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_stop = enetc4_vf_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .rx_queue_setup = enetc4_rx_queue_setup,
+ .rx_queue_start = enetc4_rx_queue_start,
+ .rx_queue_stop = enetc4_rx_queue_stop,
+ .rx_queue_release = enetc4_rx_queue_release,
+ .tx_queue_setup = enetc4_tx_queue_setup,
+ .tx_queue_start = enetc4_tx_queue_start,
+ .tx_queue_stop = enetc4_tx_queue_stop,
+ .tx_queue_release = enetc4_tx_queue_release,
};
static int
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index ea64c9f682..1fc5f11339 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2024 NXP
*/
#include <stdbool.h>
@@ -11,6 +11,7 @@
#include "rte_memzone.h"
#include "base/enetc_hw.h"
+#include "base/enetc4_hw.h"
#include "enetc.h"
#include "enetc_logs.h"
@@ -85,6 +86,12 @@ enetc_xmit_pkts(void *tx_queue,
int i, start, bds_to_use;
struct enetc_tx_bd *txbd;
struct enetc_bdr *tx_ring = (struct enetc_bdr *)tx_queue;
+ unsigned short buflen;
+ uint8_t *data;
+ int j;
+
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
i = tx_ring->next_to_use;
@@ -95,6 +102,13 @@ enetc_xmit_pkts(void *tx_queue,
start = 0;
while (nb_pkts--) {
tx_ring->q_swbd[i].buffer_addr = tx_pkts[start];
+
+ if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
+ buflen = rte_pktmbuf_pkt_len(tx_ring->q_swbd[i].buffer_addr);
+ data = rte_pktmbuf_mtod(tx_ring->q_swbd[i].buffer_addr, void *);
+ for (j = 0; j <= buflen; j += RTE_CACHE_LINE_SIZE)
+ dcbf(data + j);
+ }
txbd = ENETC_TXBD(*tx_ring, i);
tx_swbd = &tx_ring->q_swbd[i];
txbd->frm_len = tx_pkts[start]->pkt_len;
@@ -326,6 +340,12 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
int cleaned_cnt, i, bd_count;
struct enetc_swbd *rx_swbd;
union enetc_rx_bd *rxbd;
+ uint32_t bd_status;
+ uint8_t *data;
+ uint32_t j;
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
+
/* next descriptor to process */
i = rx_ring->next_to_clean;
@@ -351,9 +371,8 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
cleaned_cnt = enetc_bd_unused(rx_ring);
rx_swbd = &rx_ring->q_swbd[i];
- while (likely(rx_frm_cnt < work_limit)) {
- uint32_t bd_status;
+ while (likely(rx_frm_cnt < work_limit)) {
bd_status = rte_le_to_cpu_32(rxbd->r.lstatus);
if (!bd_status)
break;
@@ -366,6 +385,13 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
rx_swbd->buffer_addr->ol_flags = 0;
enetc_dev_rx_parse(rx_swbd->buffer_addr,
rxbd->r.parse_summary);
+
+ if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
+ data = rte_pktmbuf_mtod(rx_swbd->buffer_addr, void *);
+ for (j = 0; j <= rx_swbd->buffer_addr->pkt_len; j += RTE_CACHE_LINE_SIZE)
+ dccivac(data + j);
+ }
+
rx_pkts[rx_frm_cnt] = rx_swbd->buffer_addr;
cleaned_cnt++;
rx_swbd++;
--
2.25.1
* [v1 03/12] net/enetc: Optimize ENETC4 data path
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
2024-10-18 7:26 ` [v1 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-21 0:06 ` Stephen Hemminger
2024-10-18 7:26 ` [v1 04/12] net/enetc: Add TX checksum offload and RX checksum validation vanshika.shukla
` (9 subsequent siblings)
12 siblings, 1 reply; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Improves the ENETC4 data path on the i.MX95 non-cache-coherent platform by:
- Adding separate RX and TX functions
- Reducing memory accesses
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
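For context, a standalone sketch of the cache-maintenance pattern the new
non-coherent TX path relies on: every cache line covering the frame is cleaned
and invalidated before the buffer address is handed to the DMA. flush_line()
and flush_mbuf_data() are local illustrative helpers (mirroring the "dc civac"
usage in kpage_ncache_api.h), not DPDK APIs:

#include <stdint.h>
#include <rte_common.h>
#include <rte_mbuf.h>

#if defined(RTE_ARCH_ARM) && defined(RTE_ARCH_64)
/* clean and invalidate one data-cache line by virtual address */
static inline void
flush_line(void *p)
{
	asm volatile("dc civac, %0" : : "r"(p) : "memory");
}

/* push a frame's payload out to DRAM before its BD is given to hardware */
static inline void
flush_mbuf_data(struct rte_mbuf *m)
{
	uint8_t *data = rte_pktmbuf_mtod(m, uint8_t *);
	uint32_t len = rte_pktmbuf_pkt_len(m);
	uint32_t off;

	for (off = 0; off < len; off += RTE_CACHE_LINE_SIZE)
		flush_line(data + off);
	/* order the cache maintenance before the descriptor write */
	asm volatile("dsb ish" : : : "memory");
}
#endif
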
drivers/net/enetc/base/enetc4_hw.h | 2 +
drivers/net/enetc/enetc.h | 5 +
drivers/net/enetc/enetc4_ethdev.c | 4 +-
drivers/net/enetc/enetc_rxtx.c | 153 ++++++++++++++++++++++++-----
4 files changed, 138 insertions(+), 26 deletions(-)
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 34a4ca3b02..759cfaba28 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -14,6 +14,8 @@
#define ENETC4_DEV_ID_VF 0xef00
#define PCI_VENDOR_ID_NXP 0x1131
+#define ENETC4_TXBD_FLAGS_F BIT(7)
+
/***************************ENETC port registers**************************/
#define ENETC4_PMR 0x10
#define ENETC4_PMR_EN (BIT(16) | BIT(17) | BIT(18))
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 9901e434d9..79c158513c 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -68,6 +68,7 @@ struct enetc_bdr {
struct rte_mempool *mb_pool; /* mbuf pool to populate RX ring. */
struct rte_eth_dev *ndev;
const struct rte_memzone *mz;
+ uint64_t ierrors;
};
/*
@@ -122,8 +123,12 @@ int enetc4_vf_dev_stop(struct rte_eth_dev *dev);
*/
uint16_t enetc_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t enetc_xmit_pkts_nc(void *txq, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
uint16_t enetc_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t enetc_recv_pkts_nc(void *rxq, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
int enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt);
void enetc4_dev_hw_init(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 4d05546308..290b90b9bc 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -651,8 +651,8 @@ enetc4_dev_hw_init(struct rte_eth_dev *eth_dev)
ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- eth_dev->rx_pkt_burst = &enetc_recv_pkts;
- eth_dev->tx_pkt_burst = &enetc_xmit_pkts;
+ eth_dev->rx_pkt_burst = &enetc_recv_pkts_nc;
+ eth_dev->tx_pkt_burst = &enetc_xmit_pkts_nc;
/* Retrieving and storing the HW base address of device */
hw->hw.reg = (void *)pci_dev->mem_resource[0].addr;
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 1fc5f11339..d29b64ab56 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -86,12 +86,6 @@ enetc_xmit_pkts(void *tx_queue,
int i, start, bds_to_use;
struct enetc_tx_bd *txbd;
struct enetc_bdr *tx_ring = (struct enetc_bdr *)tx_queue;
- unsigned short buflen;
- uint8_t *data;
- int j;
-
- struct enetc_eth_hw *hw =
- ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
i = tx_ring->next_to_use;
@@ -103,12 +97,6 @@ enetc_xmit_pkts(void *tx_queue,
while (nb_pkts--) {
tx_ring->q_swbd[i].buffer_addr = tx_pkts[start];
- if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
- buflen = rte_pktmbuf_pkt_len(tx_ring->q_swbd[i].buffer_addr);
- data = rte_pktmbuf_mtod(tx_ring->q_swbd[i].buffer_addr, void *);
- for (j = 0; j <= buflen; j += RTE_CACHE_LINE_SIZE)
- dcbf(data + j);
- }
txbd = ENETC_TXBD(*tx_ring, i);
tx_swbd = &tx_ring->q_swbd[i];
txbd->frm_len = tx_pkts[start]->pkt_len;
@@ -136,6 +124,61 @@ enetc_xmit_pkts(void *tx_queue,
return start;
}
+uint16_t
+enetc_xmit_pkts_nc(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct enetc_swbd *tx_swbd;
+ int i, start, bds_to_use;
+ struct enetc_tx_bd *txbd;
+ struct enetc_bdr *tx_ring = (struct enetc_bdr *)tx_queue;
+ unsigned int buflen, j;
+ uint8_t *data;
+
+ i = tx_ring->next_to_use;
+
+ bds_to_use = enetc_bd_unused(tx_ring);
+ if (bds_to_use < nb_pkts)
+ nb_pkts = bds_to_use;
+
+ start = 0;
+ while (nb_pkts--) {
+ tx_ring->q_swbd[i].buffer_addr = tx_pkts[start];
+
+ buflen = rte_pktmbuf_pkt_len(tx_ring->q_swbd[i].buffer_addr);
+ data = rte_pktmbuf_mtod(tx_ring->q_swbd[i].buffer_addr, void *);
+ for (j = 0; j <= buflen; j += RTE_CACHE_LINE_SIZE)
+ dcbf(data + j);
+
+ txbd = ENETC_TXBD(*tx_ring, i);
+ txbd->flags = rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_F);
+
+ tx_swbd = &tx_ring->q_swbd[i];
+ txbd->frm_len = buflen;
+ txbd->buf_len = txbd->frm_len;
+ txbd->addr = (uint64_t)(uintptr_t)
+ rte_cpu_to_le_64((size_t)tx_swbd->buffer_addr->buf_iova +
+ tx_swbd->buffer_addr->data_off);
+ i++;
+ start++;
+ if (unlikely(i == tx_ring->bd_count))
+ i = 0;
+ }
+
+ /* we're only cleaning up the Tx ring here, on the assumption that
+ * software is slower than hardware and hardware completed sending
+ * older frames out by now.
+ * We're also cleaning up the ring before kicking off Tx for the new
+ * batch to minimize chances of contention on the Tx ring
+ */
+ enetc_clean_tx_ring(tx_ring);
+
+ tx_ring->next_to_use = i;
+ enetc_wr_reg(tx_ring->tcir, i);
+ return start;
+}
+
int
enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
{
@@ -171,7 +214,7 @@ enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
k++;
if (unlikely(i == rx_ring->bd_count)) {
i = 0;
- rxbd = ENETC_RXBD(*rx_ring, 0);
+ rxbd = ENETC_RXBD(*rx_ring, i);
rx_swbd = &rx_ring->q_swbd[i];
}
}
@@ -341,11 +384,6 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
struct enetc_swbd *rx_swbd;
union enetc_rx_bd *rxbd;
uint32_t bd_status;
- uint8_t *data;
- uint32_t j;
- struct enetc_eth_hw *hw =
- ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
-
/* next descriptor to process */
i = rx_ring->next_to_clean;
@@ -386,12 +424,6 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
enetc_dev_rx_parse(rx_swbd->buffer_addr,
rxbd->r.parse_summary);
- if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
- data = rte_pktmbuf_mtod(rx_swbd->buffer_addr, void *);
- for (j = 0; j <= rx_swbd->buffer_addr->pkt_len; j += RTE_CACHE_LINE_SIZE)
- dccivac(data + j);
- }
-
rx_pkts[rx_frm_cnt] = rx_swbd->buffer_addr;
cleaned_cnt++;
rx_swbd++;
@@ -417,6 +449,79 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
return rx_frm_cnt;
}
+static int
+enetc_clean_rx_ring_nc(struct enetc_bdr *rx_ring,
+ struct rte_mbuf **rx_pkts,
+ int work_limit)
+{
+ int rx_frm_cnt = 0;
+ int cleaned_cnt, i;
+ struct enetc_swbd *rx_swbd;
+ union enetc_rx_bd *rxbd, rxbd_temp;
+ uint32_t bd_status;
+ uint8_t *data;
+ uint32_t j;
+
+ /* next descriptor to process */
+ i = rx_ring->next_to_clean;
+ /* corresponding BD to start from */
+ rxbd = ENETC_RXBD(*rx_ring, i);
+
+ cleaned_cnt = enetc_bd_unused(rx_ring);
+ rx_swbd = &rx_ring->q_swbd[i];
+
+ while (likely(rx_frm_cnt < work_limit)) {
+#ifdef RTE_ARCH_32
+ rte_memcpy(&rxbd_temp, rxbd, 16);
+#else
+ __uint128_t *dst128 = (__uint128_t *)&rxbd_temp;
+ const __uint128_t *src128 = (const __uint128_t *)rxbd;
+ *dst128 = *src128;
+#endif
+ bd_status = rte_le_to_cpu_32(rxbd_temp.r.lstatus);
+ if (!bd_status)
+ break;
+ if (rxbd_temp.r.error)
+ rx_ring->ierrors++;
+
+ rx_swbd->buffer_addr->pkt_len = rxbd_temp.r.buf_len -
+ rx_ring->crc_len;
+ rx_swbd->buffer_addr->data_len = rx_swbd->buffer_addr->pkt_len;
+ rx_swbd->buffer_addr->hash.rss = rxbd_temp.r.rss_hash;
+ enetc_dev_rx_parse(rx_swbd->buffer_addr,
+ rxbd_temp.r.parse_summary);
+
+ data = rte_pktmbuf_mtod(rx_swbd->buffer_addr, void *);
+ for (j = 0; j <= rx_swbd->buffer_addr->pkt_len; j += RTE_CACHE_LINE_SIZE)
+ dccivac(data + j);
+
+ rx_pkts[rx_frm_cnt] = rx_swbd->buffer_addr;
+ cleaned_cnt++;
+ rx_swbd++;
+ i++;
+ if (unlikely(i == rx_ring->bd_count)) {
+ i = 0;
+ rx_swbd = &rx_ring->q_swbd[i];
+ }
+ rxbd = ENETC_RXBD(*rx_ring, i);
+ rx_frm_cnt++;
+ }
+
+ rx_ring->next_to_clean = i;
+ enetc_refill_rx_ring(rx_ring, cleaned_cnt);
+
+ return rx_frm_cnt;
+}
+
+uint16_t
+enetc_recv_pkts_nc(void *rxq, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct enetc_bdr *rx_ring = (struct enetc_bdr *)rxq;
+
+ return enetc_clean_rx_ring_nc(rx_ring, rx_pkts, nb_pkts);
+}
+
uint16_t
enetc_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts)
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 04/12] net/enetc: Add TX checksum offload and RX checksum validation
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (2 preceding siblings ...)
2024-10-18 7:26 ` [v1 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 05/12] net/enetc: Add basic statistics vanshika.shukla
` (8 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the following (a brief usage sketch follows the list):
- L3 (IPv4, IPv6) TX checksum offload
- L4 (TCP, UDP) TX checksum offload
- RX checksum validation for IPv4, IPv6, TCP, UDP
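For context, a minimal application-side sketch of how a sender requests these
offloads through the standard mbuf flags (illustrative only, not part of this
patch; the helper name is a placeholder):

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>

    /* Illustrative only: mark an IPv4/UDP mbuf so the PMD computes the IP
     * and UDP checksums in hardware on transmit.
     */
    static void request_udp_tx_csum(struct rte_mbuf *m)
    {
            m->l2_len = sizeof(struct rte_ether_hdr);
            m->l3_len = sizeof(struct rte_ipv4_hdr);
            m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                           RTE_MBUF_F_TX_UDP_CKSUM;
    }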
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 ++
drivers/net/enetc/base/enetc4_hw.h | 14 ++++++++++
drivers/net/enetc/base/enetc_hw.h | 18 ++++++++++---
drivers/net/enetc/enetc.h | 5 ++++
drivers/net/enetc/enetc4_ethdev.c | 40 +++++++++++++++++++++++++++++
drivers/net/enetc/enetc_rxtx.c | 22 ++++++++++++++++
6 files changed, 97 insertions(+), 4 deletions(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 37b548dcab..55b3b95953 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+L3 checksum offload = Y
+L4 checksum offload = Y
Queue start/stop = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 759cfaba28..114d27f34b 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -14,12 +14,26 @@
#define ENETC4_DEV_ID_VF 0xef00
#define PCI_VENDOR_ID_NXP 0x1131
+/* enetc4 txbd flags */
+#define ENETC4_TXBD_FLAGS_L4CS BIT(0)
+#define ENETC4_TXBD_FLAGS_L_TX_CKSUM BIT(3)
#define ENETC4_TXBD_FLAGS_F BIT(7)
+/* L4 type */
+#define ENETC4_TXBD_L4T_UDP BIT(0)
+#define ENETC4_TXBD_L4T_TCP BIT(1)
+/* L3 type is set to 0 for IPv4 and 1 for IPv6 */
+#define ENETC4_TXBD_L3T 0
+/* IPv4 checksum */
+#define ENETC4_TXBD_IPCS 1
/***************************ENETC port registers**************************/
#define ENETC4_PMR 0x10
#define ENETC4_PMR_EN (BIT(16) | BIT(17) | BIT(18))
+#define ENETC4_PARCSCR 0x9c
+#define L3_CKSUM BIT(0)
+#define L4_CKSUM BIT(1)
+
/* Port Station interface promiscuous MAC mode register */
#define ENETC4_PSIPMMR 0x200
#define PSIPMMR_SI0_MAC_UP BIT(0)
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 2d63c54db6..3cdfe23fc0 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -189,8 +189,7 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_TX_ADDR(txq, addr) ((void *)((txq)->enetc_txbdr + (addr)))
-#define ENETC_TXBD_FLAGS_IE BIT(13)
-#define ENETC_TXBD_FLAGS_F BIT(15)
+#define ENETC_TXBD_FLAGS_F BIT(7)
/* ENETC Parsed values (Little Endian) */
#define ENETC_PARSE_ERROR 0x8000
@@ -249,8 +248,19 @@ struct enetc_tx_bd {
uint64_t addr;
uint16_t buf_len;
uint16_t frm_len;
- uint16_t err_csum;
- uint16_t flags;
+ union {
+ struct {
+ uint8_t l3_start:7;
+ uint8_t ipcs:1;
+ uint8_t l3_hdr_size:7;
+ uint8_t l3t:1;
+ uint8_t resv:5;
+ uint8_t l4t:3;
+ uint8_t flags;
+ };/* default layout */
+ uint32_t txstart;
+ uint32_t lstatus;
+ };
};
/* RX buffer descriptor */
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 79c158513c..c29353a89b 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -45,6 +45,11 @@
#define ENETC_TXBD(BDR, i) (&(((struct enetc_tx_bd *)((BDR).bd_base))[i]))
#define ENETC_RXBD(BDR, i) (&(((union enetc_rx_bd *)((BDR).bd_base))[i]))
+#define ENETC4_MBUF_F_TX_IP_IPV4 (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4)
+#define ENETC4_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_TCP_CKSUM | \
+ RTE_MBUF_F_TX_UDP_CKSUM)
+
struct enetc_swbd {
struct rte_mbuf *buffer_addr;
};
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 290b90b9bc..02f048aa3c 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -11,6 +11,18 @@
#include "enetc_logs.h"
#include "enetc.h"
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+
+/* Supported Tx offloads */
+static uint64_t dev_tx_offloads_sup =
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
+
static int
enetc4_dev_start(struct rte_eth_dev *dev)
{
@@ -139,6 +151,8 @@ enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_rx_queues = MAX_RX_RINGS;
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC4_MAC_MAXFRM_SIZE;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
return 0;
}
@@ -509,6 +523,10 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
{
struct enetc_eth_hw *hw =
ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
+ uint64_t tx_offloads = eth_conf->txmode.offloads;
+ uint32_t checksum = L3_CKSUM | L4_CKSUM;
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t max_len;
uint32_t val;
@@ -522,6 +540,28 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
val = ENETC4_MAC_MAXFRM_SIZE | SDU_TYPE_MPDU;
enetc4_port_wr(enetc_hw, ENETC4_PTCTMSDUR(0), val | SDU_TYPE_MPDU);
+ /* Rx offloads which are enabled by default */
+ if (dev_rx_offloads_sup & ~rx_offloads) {
+ ENETC_PMD_INFO("Some of rx offloads enabled by default"
+ " - requested 0x%" PRIx64 " fixed are 0x%" PRIx64,
+ rx_offloads, dev_rx_offloads_sup);
+ }
+
+ /* Tx offloads which are enabled by default */
+ if (dev_tx_offloads_sup & ~tx_offloads) {
+ ENETC_PMD_INFO("Some of tx offloads enabled by default"
+ " - requested 0x%" PRIx64 " fixed are 0x%" PRIx64,
+ tx_offloads, dev_tx_offloads_sup);
+ }
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
+ checksum &= ~L3_CKSUM;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ checksum &= ~L4_CKSUM;
+
+ enetc4_port_wr(enetc_hw, ENETC4_PARCSCR, checksum);
+
return 0;
}
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index d29b64ab56..963bd6fb31 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -124,6 +124,26 @@ enetc_xmit_pkts(void *tx_queue,
return start;
}
+static void
+enetc4_tx_offload_checksum(struct rte_mbuf *mbuf, struct enetc_tx_bd *txbd)
+{
+ if ((mbuf->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4))
+ == ENETC4_MBUF_F_TX_IP_IPV4) {
+ txbd->l3t = ENETC4_TXBD_L3T;
+ txbd->ipcs = ENETC4_TXBD_IPCS;
+ txbd->l3_start = mbuf->l2_len;
+ txbd->l3_hdr_size = mbuf->l3_len / 4;
+ txbd->flags |= rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_L_TX_CKSUM);
+ if ((mbuf->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) == RTE_MBUF_F_TX_UDP_CKSUM) {
+ txbd->l4t = rte_cpu_to_le_16(ENETC4_TXBD_L4T_UDP);
+ txbd->flags |= rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_L4CS);
+ } else if ((mbuf->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) == RTE_MBUF_F_TX_TCP_CKSUM) {
+ txbd->l4t = rte_cpu_to_le_16(ENETC4_TXBD_L4T_TCP);
+ txbd->flags |= rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_L4CS);
+ }
+ }
+}
+
uint16_t
enetc_xmit_pkts_nc(void *tx_queue,
struct rte_mbuf **tx_pkts,
@@ -153,6 +173,8 @@ enetc_xmit_pkts_nc(void *tx_queue,
txbd = ENETC_TXBD(*tx_ring, i);
txbd->flags = rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_F);
+ if (tx_ring->q_swbd[i].buffer_addr->ol_flags & ENETC4_TX_CKSUM_OFFLOAD_MASK)
+ enetc4_tx_offload_checksum(tx_ring->q_swbd[i].buffer_addr, txbd);
tx_swbd = &tx_ring->q_swbd[i];
txbd->frm_len = buflen;
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 05/12] net/enetc: Add basic statistics
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (3 preceding siblings ...)
2024-10-18 7:26 ` [v1 04/12] net/enetc: Add TX checksum offload and RX checksum validation vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 06/12] net/enetc: Add packet type parsing support vanshika.shukla
` (7 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces basic statistics collection for the ENETC4 PMD (a usage sketch follows the list), including:
- Packet transmit/receive counts
- Byte transmit/receive counts
- Error counters (TX/RX drops, errors)
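As a usage sketch (standard ethdev API, illustrative only and not part of this
patch; the helper name and port_id are placeholders), an application reads
these counters through rte_eth_stats_get():

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Illustrative only: dump the basic counters of a started port. */
    static void dump_basic_stats(uint16_t port_id)
    {
            struct rte_eth_stats st;

            if (rte_eth_stats_get(port_id, &st) == 0)
                    printf("rx=%" PRIu64 " tx=%" PRIu64 " ierrors=%" PRIu64 "\n",
                           st.ipackets, st.opackets, st.ierrors);
    }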
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc4_hw.h | 7 +++++
drivers/net/enetc/base/enetc_hw.h | 5 +++-
drivers/net/enetc/enetc4_ethdev.c | 42 +++++++++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 24 +++++++++++++++++
5 files changed, 78 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 55b3b95953..e814852d2d 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Basic stats = Y
L3 checksum offload = Y
L4 checksum offload = Y
Queue start/stop = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 114d27f34b..874cdc4775 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -103,6 +103,13 @@
#define IFMODE_SGMII 5
#define PM_IF_MODE_ENA BIT(15)
+/* Station interface statistics */
+#define ENETC4_SIROCT0 0x300
+#define ENETC4_SIRFRM0 0x308
+#define ENETC4_SITOCT0 0x320
+#define ENETC4_SITFRM0 0x328
+#define ENETC4_SITDFCR 0x340
+
/* general register accessors */
#define enetc4_rd_reg(reg) rte_read32((void *)(reg))
#define enetc4_wr_reg(reg, val) rte_write32((val), (void *)(reg))
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 3cdfe23fc0..3208d91bc5 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -278,7 +278,10 @@ union enetc_rx_bd {
union {
struct {
uint16_t flags;
- uint16_t error;
+ uint8_t error;
+ uint8_t resv:6;
+ uint8_t r:1;
+ uint8_t f:1;
};
uint32_t lstatus;
};
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 02f048aa3c..2db45ecf0c 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -484,6 +484,46 @@ enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
rte_free(rx_ring);
}
+static int
+enetc4_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+
+ /*
+ * Total received packets (bad + good). To count only the good
+ * received packets, use the ENETC4_PM_RFRM and ENETC4_PM_TFRM
+ * registers instead.
+ */
+ stats->ipackets = enetc4_port_rd(enetc_hw, ENETC4_PM_RPKT(0));
+ stats->opackets = enetc4_port_rd(enetc_hw, ENETC4_PM_TPKT(0));
+ stats->ibytes = enetc4_port_rd(enetc_hw, ENETC4_PM_REOCT(0));
+ stats->obytes = enetc4_port_rd(enetc_hw, ENETC4_PM_TEOCT(0));
+ /*
+ * Dropped + truncated packets; use ENETC4_PM_RDRNTP(0) to exclude
+ * truncated packets.
+ */
+ stats->imissed = enetc4_port_rd(enetc_hw, ENETC4_PM_RDRP(0));
+ stats->ierrors = enetc4_port_rd(enetc_hw, ENETC4_PM_RERR(0));
+ stats->oerrors = enetc4_port_rd(enetc_hw, ENETC4_PM_TERR(0));
+
+ return 0;
+}
+
+static int
+enetc4_stats_reset(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+
+ enetc4_port_wr(enetc_hw, ENETC4_PM0_STAT_CONFIG, ENETC4_CLEAR_STATS);
+
+ return 0;
+}
+
int
enetc4_dev_close(struct rte_eth_dev *dev)
{
@@ -668,6 +708,8 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_stop = enetc4_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .stats_get = enetc4_stats_get,
+ .stats_reset = enetc4_stats_reset,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 0c68229a8d..0d35fc2e1c 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -26,6 +26,29 @@ enetc4_vf_dev_start(struct rte_eth_dev *dev __rte_unused)
return 0;
}
+static int
+enetc4_vf_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_bdr *rx_ring;
+ uint8_t i;
+
+ PMD_INIT_FUNC_TRACE();
+ stats->ipackets = enetc4_rd(enetc_hw, ENETC4_SIRFRM0);
+ stats->opackets = enetc4_rd(enetc_hw, ENETC4_SITFRM0);
+ stats->ibytes = enetc4_rd(enetc_hw, ENETC4_SIROCT0);
+ stats->obytes = enetc4_rd(enetc_hw, ENETC4_SITOCT0);
+ stats->oerrors = enetc4_rd(enetc_hw, ENETC4_SITDFCR);
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rx_ring = dev->data->rx_queues[i];
+ stats->ierrors += rx_ring->ierrors;
+ }
+ return 0;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -41,6 +64,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_stop = enetc4_vf_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .stats_get = enetc4_vf_stats_get,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 06/12] net/enetc: Add packet type parsing support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (4 preceding siblings ...)
2024-10-18 7:26 ` [v1 05/12] net/enetc: Add basic statistics vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 07/12] net/enetc: Add support for multiple queues with RSS vanshika.shukla
` (6 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces packet type parsing for the ENETC4 PMD (an RX-path sketch follows the list), supporting:
- RTE_PTYPE_L2_ETHER (Ethernet II)
- RTE_PTYPE_L3_IPV4 (IPv4)
- RTE_PTYPE_L3_IPV6 (IPv6)
- RTE_PTYPE_L4_TCP (TCP)
- RTE_PTYPE_L4_UDP (UDP)
- RTE_PTYPE_L4_SCTP (SCTP)
- RTE_PTYPE_L4_ICMP (ICMP)
- RTE_PTYPE_L4_FRAG (IPv4/IPv6 fragmentation)
- RTE_PTYPE_TUNNEL_ESP (ESP tunneling)
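An illustrative RX-path sketch (not part of this patch; the helper name is a
placeholder) showing how an application can branch on the reported packet type
instead of re-parsing headers, using the standard rte_mbuf_ptype.h masks:

    #include <rte_mbuf.h>
    #include <rte_mbuf_ptype.h>

    /* Illustrative only: true for packets this PMD classifies as IPv4/UDP. */
    static inline int is_ipv4_udp(const struct rte_mbuf *m)
    {
            return (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4 &&
                   (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP;
    }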
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc_hw.h | 5 +++++
drivers/net/enetc/enetc.h | 2 ++
drivers/net/enetc/enetc4_ethdev.c | 23 +++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 1 +
drivers/net/enetc/enetc_rxtx.c | 10 ++++++++++
6 files changed, 42 insertions(+)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index e814852d2d..3356475317 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Packet type parsing = Y
Basic stats = Y
L3 checksum offload = Y
L4 checksum offload = Y
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 3208d91bc5..10bd3c050c 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -196,6 +196,7 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_PKT_TYPE_ETHER 0x0060
#define ENETC_PKT_TYPE_IPV4 0x0000
#define ENETC_PKT_TYPE_IPV6 0x0020
+#define ENETC_PKT_TYPE_IPV6_EXT 0x0080
#define ENETC_PKT_TYPE_IPV4_TCP \
(0x0010 | ENETC_PKT_TYPE_IPV4)
#define ENETC_PKT_TYPE_IPV6_TCP \
@@ -208,6 +209,10 @@ enum enetc_bdr_type {TX, RX};
(0x0013 | ENETC_PKT_TYPE_IPV4)
#define ENETC_PKT_TYPE_IPV6_SCTP \
(0x0013 | ENETC_PKT_TYPE_IPV6)
+#define ENETC_PKT_TYPE_IPV4_FRAG \
+ (0x0001 | ENETC_PKT_TYPE_IPV4)
+#define ENETC_PKT_TYPE_IPV6_FRAG \
+ (0x0001 | ENETC_PKT_TYPE_IPV6_EXT | ENETC_PKT_TYPE_IPV6)
#define ENETC_PKT_TYPE_IPV4_ICMP \
(0x0003 | ENETC_PKT_TYPE_IPV4)
#define ENETC_PKT_TYPE_IPV6_ICMP \
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index c29353a89b..8d4e432426 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -117,6 +117,8 @@ int enetc4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
int enetc4_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
int enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
void enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+const uint32_t *enetc4_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused,
+ size_t *no_of_elements);
/*
* enetc4_vf function prototype
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 2db45ecf0c..a57408fbe9 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -693,6 +693,28 @@ enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
return 0;
}
+const uint32_t *
+enetc4_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused,
+ size_t *no_of_elements)
+{
+ PMD_INIT_FUNC_TRACE();
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4,
+ RTE_PTYPE_L3_IPV6,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_TUNNEL_ESP,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ *no_of_elements = RTE_DIM(ptypes);
+ return ptypes;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -718,6 +740,7 @@ static const struct eth_dev_ops enetc4_ops = {
.tx_queue_start = enetc4_tx_queue_start,
.tx_queue_stop = enetc4_tx_queue_stop,
.tx_queue_release = enetc4_tx_queue_release,
+ .dev_supported_ptypes_get = enetc4_supported_ptypes_get,
};
/*
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 0d35fc2e1c..360bb0c710 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -73,6 +73,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.tx_queue_start = enetc4_tx_queue_start,
.tx_queue_stop = enetc4_tx_queue_stop,
.tx_queue_release = enetc4_tx_queue_release,
+ .dev_supported_ptypes_get = enetc4_supported_ptypes_get,
};
static int
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 963bd6fb31..65c79e508f 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -389,6 +389,16 @@ enetc_dev_rx_parse(struct rte_mbuf *m, uint16_t parse_results)
RTE_PTYPE_L3_IPV6 |
RTE_PTYPE_L4_ICMP;
return;
+ case ENETC_PKT_TYPE_IPV4_FRAG:
+ m->packet_type = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_FRAG;
+ return;
+ case ENETC_PKT_TYPE_IPV6_FRAG:
+ m->packet_type = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_FRAG;
+ return;
/* More switch cases can be added */
default:
enetc_slow_parsing(m, parse_results);
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 07/12] net/enetc: Add support for multiple queues with RSS
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (5 preceding siblings ...)
2024-10-18 7:26 ` [v1 06/12] net/enetc: Add packet type parsing support vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup vanshika.shukla
` (5 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces support for multiple transmit and receive queues in the ENETC4
PMD, enabling scalable packet processing, higher throughput, and lower
latency. Packet distribution is handled through Receive Side Scaling
(RSS).
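A minimal configuration sketch (illustrative only, generic ethdev API rather
than code from this series; port_conf, nb_rxq and nb_txq are placeholders)
showing how an application spreads traffic across the queues enabled here:

    #include <rte_ethdev.h>

    /* Illustrative only: enable RSS over nb_rxq RX queues; the PMD then
     * programs the 64-entry indirection table via the control BD ring.
     * rss_key is left NULL since the hash key/algorithm are owned by the PF.
     */
    static const struct rte_eth_conf port_conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_key = NULL,
                            .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
                                      RTE_ETH_RSS_TCP,
                    },
            },
    };
    /* ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf); */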
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc4_hw.h | 11 +
drivers/net/enetc/base/enetc_hw.h | 21 +-
drivers/net/enetc/enetc.h | 37 +++-
drivers/net/enetc/enetc4_ethdev.c | 145 +++++++++++--
drivers/net/enetc/enetc4_vf.c | 10 +-
drivers/net/enetc/enetc_cbdr.c | 311 ++++++++++++++++++++++++++++
drivers/net/enetc/meson.build | 5 +-
drivers/net/enetc/ntmp.h | 110 ++++++++++
9 files changed, 617 insertions(+), 34 deletions(-)
create mode 100644 drivers/net/enetc/enetc_cbdr.c
create mode 100644 drivers/net/enetc/ntmp.h
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 3356475317..79430d0018 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+RSS hash = Y
Packet type parsing = Y
Basic stats = Y
L3 checksum offload = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 874cdc4775..49446f2cb4 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -110,6 +110,17 @@
#define ENETC4_SITFRM0 0x328
#define ENETC4_SITDFCR 0x340
+/* Control BDR regs */
+#define ENETC4_SICBDRMR 0x800
+#define ENETC4_SICBDRSR 0x804 /* RO */
+#define ENETC4_SICBDRBAR0 0x810
+#define ENETC4_SICBDRBAR1 0x814
+#define ENETC4_SICBDRPIR 0x818
+#define ENETC4_SICBDRCIR 0x81c
+#define ENETC4_SICBDRLENR 0x820
+#define ENETC4_SICTR0 0x18
+#define ENETC4_SICTR1 0x1c
+
/* general register accessors */
#define enetc4_rd_reg(reg) rte_read32((void *)(reg))
#define enetc4_wr_reg(reg, val) rte_write32((val), (void *)(reg))
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 10bd3c050c..3cb56cd851 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -22,6 +22,10 @@
/* SI regs, offset: 0h */
#define ENETC_SIMR 0x0
#define ENETC_SIMR_EN BIT(31)
+#define ENETC_SIMR_RSSE BIT(0)
+
+/* BDR grouping*/
+#define ENETC_SIRBGCR 0x38
#define ENETC_SICAR0 0x40
#define ENETC_SICAR0_COHERENT 0x2B2B6727
@@ -29,6 +33,7 @@
#define ENETC_SIPMAR1 0x84
#define ENETC_SICAPR0 0x900
+#define ENETC_SICAPR0_BDR_MASK 0xFF
#define ENETC_SICAPR1 0x904
#define ENETC_SIMSITRV(n) (0xB00 + (n) * 0x4)
@@ -36,6 +41,11 @@
#define ENETC_SICCAPR 0x1200
+#define ENETC_SIPCAPR0 0x20
+#define ENETC_SIPCAPR0_RSS BIT(8)
+#define ENETC_SIRSSCAPR 0x1600
+#define ENETC_SIRSSCAPR_GET_NUM_RSS(val) (BIT((val) & 0xf) * 32)
+
/* enum for BD type */
enum enetc_bdr_type {TX, RX};
@@ -44,6 +54,7 @@ enum enetc_bdr_type {TX, RX};
/* RX BDR reg offsets */
#define ENETC_RBMR 0x0 /* RX BDR mode register*/
#define ENETC_RBMR_EN BIT(31)
+#define ENETC_BMR_RESET 0x0 /* BDR reset*/
#define ENETC_RBSR 0x4 /* Rx BDR status register*/
#define ENETC_RBBSR 0x8 /* Rx BDR buffer size register*/
@@ -231,15 +242,6 @@ struct enetc_eth_mac_info {
uint8_t get_link_status;
};
-struct enetc_eth_hw {
- struct rte_eth_dev *ndev;
- struct enetc_hw hw;
- uint16_t device_id;
- uint16_t vendor_id;
- uint8_t revision_id;
- struct enetc_eth_mac_info mac;
-};
-
/* Transmit Descriptor */
struct enetc_tx_desc {
uint64_t addr;
@@ -292,5 +294,4 @@ union enetc_rx_bd {
};
} r;
};
-
#endif
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 8d4e432426..354cd761d7 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -10,7 +10,9 @@
#include "compat.h"
#include "base/enetc_hw.h"
+#include "base/enetc4_hw.h"
#include "enetc_logs.h"
+#include "ntmp.h"
#define PCI_VENDOR_ID_FREESCALE 0x1957
@@ -50,6 +52,18 @@
RTE_MBUF_F_TX_TCP_CKSUM | \
RTE_MBUF_F_TX_UDP_CKSUM)
+#define ENETC_CBD(R, i) (&(((struct enetc_cbd *)((R).bd_base))[i]))
+#define ENETC_CBDR_TIMEOUT 1000 /* In multiple of ENETC_CBDR_DELAY */
+#define ENETC_CBDR_DELAY 100 /* usecs */
+#define ENETC_CBDR_SIZE 64
+#define ENETC_CBDR_ALIGN 128
+
+/* supported RSS */
+#define ENETC_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP)
+
struct enetc_swbd {
struct rte_mbuf *buffer_addr;
};
@@ -76,6 +90,19 @@ struct enetc_bdr {
uint64_t ierrors;
};
+struct enetc_eth_hw {
+ struct rte_eth_dev *ndev;
+ struct enetc_hw hw;
+ uint16_t device_id;
+ uint16_t vendor_id;
+ uint8_t revision_id;
+ struct enetc_eth_mac_info mac;
+ struct netc_cbdr cbdr;
+ uint32_t num_rss;
+ uint32_t max_rx_queues;
+ uint32_t max_tx_queues;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -102,7 +129,7 @@ struct enetc_eth_adapter {
int enetc4_pci_remove(struct rte_pci_device *pci_dev);
int enetc4_dev_configure(struct rte_eth_dev *dev);
int enetc4_dev_close(struct rte_eth_dev *dev);
-int enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+int enetc4_dev_infos_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
int enetc4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
@@ -149,4 +176,12 @@ enetc_bd_unused(struct enetc_bdr *bdr)
return bdr->bd_count + bdr->next_to_clean - bdr->next_to_use - 1;
}
+
+/* CBDR prototypes */
+int enetc4_setup_cbdr(struct rte_eth_dev *dev, struct enetc_hw *hw,
+ int bd_count, struct netc_cbdr *cbdr);
+void netc_free_cbdr(struct netc_cbdr *cbdr);
+int ntmp_rsst_query_or_update_entry(struct netc_cbdr *cbdr, uint32_t *table,
+ int count, bool query);
+
#endif /* _ENETC_H_ */
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index a57408fbe9..075205a0e5 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -7,7 +7,6 @@
#include <dpaax_iova_table.h>
#include "kpage_ncache_api.h"
-#include "base/enetc4_hw.h"
#include "enetc_logs.h"
#include "enetc.h"
@@ -134,10 +133,14 @@ enetc4_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
}
int
-enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+enetc4_dev_infos_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
PMD_INIT_FUNC_TRACE();
+
dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = MAX_BD_COUNT,
.nb_min = MIN_BD_COUNT,
@@ -148,11 +151,12 @@ enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
.nb_min = MIN_BD_COUNT,
.nb_align = BD_ALIGN,
};
- dev_info->max_rx_queues = MAX_RX_RINGS;
- dev_info->max_tx_queues = MAX_TX_RINGS;
+ dev_info->max_rx_queues = hw->max_rx_queues;
+ dev_info->max_tx_queues = hw->max_tx_queues;
dev_info->max_rx_pktlen = ENETC4_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa = dev_rx_offloads_sup;
dev_info->tx_offload_capa = dev_tx_offloads_sup;
+ dev_info->flow_type_rss_offloads = ENETC_RSS_OFFLOAD_ALL;
return 0;
}
@@ -178,6 +182,11 @@ mark_memory_ncache(struct enetc_bdr *bdr, const char *mz_name, unsigned int size
mz->hugepage_sz);
bdr->mz = mz;
+ /* Double check memzone alignment and hugepage size */
+ if (!rte_is_aligned(bdr->bd_base, size))
+ ENETC_PMD_WARN("Memzone is not aligned to %x", size);
+
+ ENETC_PMD_DEBUG("Ring Hugepage start address = %p", bdr->bd_base);
/* Mark memory NON-CACHEABLE */
huge_page =
(uint64_t)RTE_PTR_ALIGN_FLOOR(bdr->bd_base, size);
@@ -197,7 +206,7 @@ enetc4_alloc_txbdr(uint16_t port_id, struct enetc_bdr *txr, uint16_t nb_desc)
if (txr->q_swbd == NULL)
return -ENOMEM;
- snprintf(mz_name, sizeof(mz_name), "bdt_addr_%d", port_id);
+ snprintf(mz_name, sizeof(mz_name), "bdt_addr_%d_%d", port_id, txr->index);
if (mark_memory_ncache(txr, mz_name, SIZE_2MB)) {
ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
rte_free(txr->q_swbd);
@@ -298,17 +307,20 @@ void
enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
void *txq = dev->data->tx_queues[qid];
+ struct enetc_hw *hw;
+ struct enetc_swbd *tx_swbd;
+ int i;
+ uint32_t val;
+ struct enetc_bdr *tx_ring;
+ struct enetc_eth_hw *eth_hw;
+ PMD_INIT_FUNC_TRACE();
if (txq == NULL)
return;
- struct enetc_bdr *tx_ring = (struct enetc_bdr *)txq;
- struct enetc_eth_hw *eth_hw =
+ tx_ring = (struct enetc_bdr *)txq;
+ eth_hw =
ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
- struct enetc_hw *hw;
- struct enetc_swbd *tx_swbd;
- int i;
- uint32_t val;
/* Disable the ring */
hw = &eth_hw->hw;
@@ -346,7 +358,7 @@ enetc4_alloc_rxbdr(uint16_t port_id, struct enetc_bdr *rxr,
if (rxr->q_swbd == NULL)
return -ENOMEM;
- snprintf(mz_name, sizeof(mz_name), "bdr_addr_%d", port_id);
+ snprintf(mz_name, sizeof(mz_name), "bdr_addr_%d_%d", port_id, rxr->index);
if (mark_memory_ncache(rxr, mz_name, SIZE_2MB)) {
ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
rte_free(rxr->q_swbd);
@@ -448,17 +460,20 @@ void
enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
void *rxq = dev->data->rx_queues[qid];
+ struct enetc_swbd *q_swbd;
+ struct enetc_hw *hw;
+ uint32_t val;
+ int i;
+ struct enetc_bdr *rx_ring;
+ struct enetc_eth_hw *eth_hw;
+ PMD_INIT_FUNC_TRACE();
if (rxq == NULL)
return;
- struct enetc_bdr *rx_ring = (struct enetc_bdr *)rxq;
- struct enetc_eth_hw *eth_hw =
+ rx_ring = (struct enetc_bdr *)rxq;
+ eth_hw =
ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
- struct enetc_swbd *q_swbd;
- struct enetc_hw *hw;
- uint32_t val;
- int i;
/* Disable the ring */
hw = &eth_hw->hw;
@@ -524,10 +539,22 @@ enetc4_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+static void
+enetc4_rss_configure(struct enetc_hw *hw, int enable)
+{
+ uint32_t reg;
+
+ reg = enetc4_rd(hw, ENETC_SIMR);
+ reg &= ~ENETC_SIMR_RSSE;
+ reg |= (enable) ? ENETC_SIMR_RSSE : 0;
+ enetc4_wr(hw, ENETC_SIMR, reg);
+}
+
int
enetc4_dev_close(struct rte_eth_dev *dev)
{
struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
uint16_t i;
int ret;
@@ -540,6 +567,13 @@ enetc4_dev_close(struct rte_eth_dev *dev)
else
ret = enetc4_dev_stop(dev);
+ if (dev->data->nb_rx_queues > 1) {
+ /* Disable RSS */
+ enetc4_rss_configure(enetc_hw, false);
+ /* Free CBDR */
+ netc_free_cbdr(&hw->cbdr);
+ }
+
for (i = 0; i < dev->data->nb_rx_queues; i++) {
enetc4_rx_queue_release(dev, i);
dev->data->rx_queues[i] = NULL;
@@ -569,7 +603,9 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
uint32_t checksum = L3_CKSUM | L4_CKSUM;
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t max_len;
- uint32_t val;
+ uint32_t val, num_rss;
+ uint32_t ret = 0, i;
+ uint32_t *rss_table;
PMD_INIT_FUNC_TRACE();
@@ -602,6 +638,69 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
enetc4_port_wr(enetc_hw, ENETC4_PARCSCR, checksum);
+ /* Disable and reset RX and TX rings */
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ enetc4_rxbdr_wr(enetc_hw, i, ENETC_RBMR, ENETC_BMR_RESET);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ enetc4_rxbdr_wr(enetc_hw, i, ENETC_TBMR, ENETC_BMR_RESET);
+
+ if (dev->data->nb_rx_queues <= 1)
+ return 0;
+
+ /* Setup RSS */
+ /* Setup control BDR */
+ ret = enetc4_setup_cbdr(dev, enetc_hw, ENETC_CBDR_SIZE, &hw->cbdr);
+ if (ret) {
+ /* Disable RSS */
+ enetc4_rss_configure(enetc_hw, false);
+ return ret;
+ }
+
+ /* Reset CIR again after enabling CBDR */
+ rte_delay_us(ENETC_CBDR_DELAY);
+ ENETC_PMD_DEBUG("CIR %x after CBDR enable", rte_read32(hw->cbdr.regs.cir));
+ rte_write32(0, hw->cbdr.regs.cir);
+ ENETC_PMD_DEBUG("CIR %x after reset", rte_read32(hw->cbdr.regs.cir));
+
+ val = enetc_rd(enetc_hw, ENETC_SIPCAPR0);
+ if (val & ENETC_SIPCAPR0_RSS) {
+ num_rss = enetc_rd(enetc_hw, ENETC_SIRSSCAPR);
+ hw->num_rss = ENETC_SIRSSCAPR_GET_NUM_RSS(num_rss);
+ ENETC_PMD_DEBUG("num_rss = %d", hw->num_rss);
+
+ /* Add number of BDR groups */
+ enetc4_wr(enetc_hw, ENETC_SIRBGCR, dev->data->nb_rx_queues);
+
+
+ /* Configure the indirection table with default values.
+ * The hash algorithm and RSS secret key are to be filled by the PF.
+ */
+ rss_table = rte_malloc(NULL, hw->num_rss * sizeof(*rss_table), ENETC_CBDR_ALIGN);
+ if (!rss_table) {
+ enetc4_rss_configure(enetc_hw, false);
+ netc_free_cbdr(&hw->cbdr);
+ return -ENOMEM;
+ }
+
+ ENETC_PMD_DEBUG("Enabling RSS for port %s with queues = %d", dev->device->name,
+ dev->data->nb_rx_queues);
+ for (i = 0; i < hw->num_rss; i++)
+ rss_table[i] = i % dev->data->nb_rx_queues;
+
+ ret = ntmp_rsst_query_or_update_entry(&hw->cbdr, rss_table, hw->num_rss, false);
+ if (ret) {
+ ENETC_PMD_WARN("RSS indirection table update fails,"
+ "Scaling behaviour is undefined");
+ enetc4_rss_configure(enetc_hw, false);
+ netc_free_cbdr(&hw->cbdr);
+ }
+ rte_free(rss_table);
+
+ /* Enable RSS */
+ enetc4_rss_configure(enetc_hw, true);
+ }
+
return 0;
}
@@ -786,11 +885,19 @@ enetc4_dev_init(struct rte_eth_dev *eth_dev)
ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
int error = 0;
+ uint32_t si_cap;
+ struct enetc_hw *enetc_hw = &hw->hw;
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &enetc4_ops;
enetc4_dev_hw_init(eth_dev);
+ si_cap = enetc_rd(enetc_hw, ENETC_SICAPR0);
+ hw->max_tx_queues = si_cap & ENETC_SICAPR0_BDR_MASK;
+ hw->max_rx_queues = (si_cap >> 16) & ENETC_SICAPR0_BDR_MASK;
+
+ ENETC_PMD_DEBUG("Max RX queues = %d Max TX queues = %d",
+ hw->max_rx_queues, hw->max_tx_queues);
error = enetc4_mac_init(hw, eth_dev);
if (error != 0) {
ENETC_PMD_ERR("MAC initialization failed");
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 360bb0c710..a9fb33c432 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -5,8 +5,6 @@
#include <stdbool.h>
#include <rte_random.h>
#include <dpaax_iova_table.h>
-#include "base/enetc4_hw.h"
-#include "base/enetc_hw.h"
#include "enetc_logs.h"
#include "enetc.h"
@@ -137,11 +135,19 @@ enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
int error = 0;
+ uint32_t si_cap;
+ struct enetc_hw *enetc_hw = &hw->hw;
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &enetc4_vf_ops;
enetc4_dev_hw_init(eth_dev);
+ si_cap = enetc_rd(enetc_hw, ENETC_SICAPR0);
+ hw->max_tx_queues = si_cap & ENETC_SICAPR0_BDR_MASK;
+ hw->max_rx_queues = (si_cap >> 16) & ENETC_SICAPR0_BDR_MASK;
+
+ ENETC_PMD_DEBUG("Max RX queues = %d Max TX queues = %d",
+ hw->max_rx_queues, hw->max_tx_queues);
error = enetc4_vf_mac_init(hw, eth_dev);
if (error != 0) {
ENETC_PMD_ERR("MAC initialization failed!!");
diff --git a/drivers/net/enetc/enetc_cbdr.c b/drivers/net/enetc/enetc_cbdr.c
new file mode 100644
index 0000000000..021090775f
--- /dev/null
+++ b/drivers/net/enetc/enetc_cbdr.c
@@ -0,0 +1,311 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#include <ethdev_pci.h>
+
+#include "enetc_logs.h"
+#include "enetc.h"
+
+#define NTMP_RSST_ID 3
+
+/* Define NTMP Access Method */
+#define NTMP_AM_ENTRY_ID 0
+#define NTMP_AM_EXACT_KEY 1
+#define NTMP_AM_SEARCH 2
+#define NTMP_AM_TERNARY_KEY 3
+
+/* Define NTMP Header Version */
+#define NTMP_HEADER_VERSION2 2
+
+#define NTMP_REQ_HDR_NPF BIT(15)
+
+#define NTMP_RESP_LEN_MASK GENMASK(19, 0)
+#define NTMP_REQ_LEN_MASK GENMASK(31, 20)
+
+#define ENETC_NTMP_ENTRY_ID_SIZE 4
+
+#define ENETC_RSS_TABLE_ENTRY_NUM 64
+#define ENETC_RSS_CFGEU BIT(0)
+#define ENETC_RSS_STSEU BIT(1)
+#define ENETC_RSS_STSE_DATA_SIZE(n) ((n) * 8)
+#define ENETC_RSS_CFGE_DATA_SIZE(n) (n)
+
+#define NTMP_REQ_RESP_LEN(req, resp) (((req) << 20 & NTMP_REQ_LEN_MASK) | \
+ ((resp) & NTMP_RESP_LEN_MASK))
+
+static inline uint32_t
+netc_cbdr_read(void *reg)
+{
+ return rte_read32(reg);
+}
+
+static inline void
+netc_cbdr_write(void *reg, uint32_t val)
+{
+ rte_write32(val, reg);
+}
+
+static inline void
+ntmp_fill_request_headr(union netc_cbd *cbd, dma_addr_t dma,
+ int len, int table_id, int cmd,
+ int access_method)
+{
+ dma_addr_t dma_align;
+
+ memset(cbd, 0, sizeof(*cbd));
+ dma_align = dma;
+ cbd->ntmp_req_hdr.addr = dma_align;
+ cbd->ntmp_req_hdr.len = len;
+ cbd->ntmp_req_hdr.cmd = cmd;
+ cbd->ntmp_req_hdr.access_method = access_method;
+ cbd->ntmp_req_hdr.table_id = table_id;
+ cbd->ntmp_req_hdr.hdr_ver = NTMP_HEADER_VERSION2;
+ cbd->ntmp_req_hdr.cci = 0;
+ cbd->ntmp_req_hdr.rr = 0; /* Must be set to 0 by SW. */
+ /* For NTMP version 2.0 or later version */
+ cbd->ntmp_req_hdr.npf = NTMP_REQ_HDR_NPF;
+}
+
+static inline int
+netc_get_free_cbd_num(struct netc_cbdr *cbdr)
+{
+ return (cbdr->next_to_clean - cbdr->next_to_use - 1 + cbdr->bd_num) %
+ cbdr->bd_num;
+}
+
+static inline union
+netc_cbd *netc_get_cbd(struct netc_cbdr *cbdr, int index)
+{
+ return &((union netc_cbd *)(cbdr->addr_base_align))[index];
+}
+
+static void
+netc_clean_cbdr(struct netc_cbdr *cbdr)
+{
+ union netc_cbd *cbd;
+ uint32_t i;
+
+ i = cbdr->next_to_clean;
+ while (netc_cbdr_read(cbdr->regs.cir) != i) {
+ cbd = netc_get_cbd(cbdr, i);
+ memset(cbd, 0, sizeof(*cbd));
+ dcbf(cbd);
+ i = (i + 1) % cbdr->bd_num;
+ }
+
+ cbdr->next_to_clean = i;
+}
+
+static int
+netc_xmit_ntmp_cmd(struct netc_cbdr *cbdr, union netc_cbd *cbd)
+{
+ union netc_cbd *ring_cbd;
+ uint32_t i, err = 0;
+ uint16_t status;
+ uint32_t timeout = cbdr->timeout;
+ uint32_t delay = cbdr->delay;
+
+ if (unlikely(!cbdr->addr_base))
+ return -EFAULT;
+
+ rte_spinlock_lock(&cbdr->ring_lock);
+
+ if (unlikely(!netc_get_free_cbd_num(cbdr)))
+ netc_clean_cbdr(cbdr);
+
+ i = cbdr->next_to_use;
+ ring_cbd = netc_get_cbd(cbdr, i);
+
+ /* Copy command BD to the ring */
+ *ring_cbd = *cbd;
+ /* Update producer index of both software and hardware */
+ i = (i + 1) % cbdr->bd_num;
+ dcbf(ring_cbd);
+ cbdr->next_to_use = i;
+ netc_cbdr_write(cbdr->regs.pir, i);
+ ENETC_PMD_DEBUG("Control msg sent PIR = %d, CIR = %d", netc_cbdr_read(cbdr->regs.pir),
+ netc_cbdr_read(cbdr->regs.cir));
+ do {
+ if (netc_cbdr_read(cbdr->regs.cir) == i) {
+ dccivac(ring_cbd);
+ ENETC_PMD_DEBUG("got response");
+ ENETC_PMD_DEBUG("Matched = %d, status = 0x%x",
+ ring_cbd->ntmp_resp_hdr.num_matched,
+ ring_cbd->ntmp_resp_hdr.error_rr);
+ break;
+ }
+ rte_delay_us(delay);
+ } while (timeout--);
+
+ if (timeout <= 0)
+ ENETC_PMD_ERR("no response of RSS configuration");
+
+ ENETC_PMD_DEBUG("CIR after receive = %d, SICBDRSR = 0x%x",
+ netc_cbdr_read(cbdr->regs.cir),
+ netc_cbdr_read(cbdr->regs.st));
+ /* Check the writeback error status */
+ status = ring_cbd->ntmp_resp_hdr.error_rr & NTMP_RESP_HDR_ERR;
+ if (unlikely(status)) {
+ ENETC_PMD_ERR("Command BD error: 0x%04x", status);
+ err = -EIO;
+ }
+
+ netc_clean_cbdr(cbdr);
+ rte_spinlock_unlock(&cbdr->ring_lock);
+
+ return err;
+}
+
+int
+ntmp_rsst_query_or_update_entry(struct netc_cbdr *cbdr, uint32_t *table,
+ int count, bool query)
+{
+ struct rsst_req_update *requ;
+ struct rsst_req_query *req;
+ union netc_cbd cbd;
+ uint32_t len, data_size;
+ dma_addr_t dma;
+ int err, i;
+ void *tmp;
+
+ if (count != ENETC_RSS_TABLE_ENTRY_NUM)
+ /* HW only takes in a full 64 entry table */
+ return -EINVAL;
+
+ if (query)
+ data_size = ENETC_NTMP_ENTRY_ID_SIZE + ENETC_RSS_STSE_DATA_SIZE(count) +
+ ENETC_RSS_CFGE_DATA_SIZE(count);
+ else
+ data_size = sizeof(*requ) + count * sizeof(uint8_t);
+
+ tmp = rte_malloc(NULL, data_size, ENETC_CBDR_ALIGN);
+ if (!tmp)
+ return -ENOMEM;
+
+ dma = rte_mem_virt2iova(tmp);
+ req = tmp;
+ /* Set the request data buffer */
+ if (query) {
+ len = NTMP_REQ_RESP_LEN(sizeof(*req), data_size);
+ ntmp_fill_request_headr(&cbd, dma, len, NTMP_RSST_ID,
+ NTMP_CMD_QUERY, NTMP_AM_ENTRY_ID);
+ } else {
+ requ = (struct rsst_req_update *)req;
+ requ->crd.update_act = (ENETC_RSS_CFGEU | ENETC_RSS_STSEU);
+ for (i = 0; i < count; i++)
+ requ->groups[i] = (uint8_t)(table[i]);
+
+ len = NTMP_REQ_RESP_LEN(data_size, 0);
+ ntmp_fill_request_headr(&cbd, dma, len, NTMP_RSST_ID,
+ NTMP_CMD_UPDATE, NTMP_AM_ENTRY_ID);
+ dcbf(requ);
+ }
+
+ err = netc_xmit_ntmp_cmd(cbdr, &cbd);
+ if (err) {
+ ENETC_PMD_ERR("%s RSS table entry failed (%d)!",
+ query ? "Query" : "Update", err);
+ goto end;
+ }
+
+ if (query) {
+ uint8_t *group = (uint8_t *)req;
+
+ group += ENETC_NTMP_ENTRY_ID_SIZE + ENETC_RSS_STSE_DATA_SIZE(count);
+ for (i = 0; i < count; i++)
+ table[i] = group[i];
+ }
+end:
+ rte_free(tmp);
+
+ return err;
+}
+
+static int
+netc_setup_cbdr(struct rte_eth_dev *dev, int cbd_num,
+ struct netc_cbdr_regs *regs,
+ struct netc_cbdr *cbdr)
+{
+ int size;
+
+ size = cbd_num * sizeof(union netc_cbd) +
+ NETC_CBDR_BASE_ADDR_ALIGN;
+
+ cbdr->addr_base = rte_malloc(NULL, size, ENETC_CBDR_ALIGN);
+ if (!cbdr->addr_base)
+ return -ENOMEM;
+
+ cbdr->dma_base = rte_mem_virt2iova(cbdr->addr_base);
+ cbdr->dma_size = size;
+ cbdr->bd_num = cbd_num;
+ cbdr->regs = *regs;
+ cbdr->dma_dev = dev;
+ cbdr->timeout = ENETC_CBDR_TIMEOUT;
+ cbdr->delay = ENETC_CBDR_DELAY;
+
+ if (getenv("ENETC4_CBDR_TIMEOUT"))
+ cbdr->timeout = atoi(getenv("ENETC4_CBDR_TIMEOUT"));
+
+ if (getenv("ENETC4_CBDR_DELAY"))
+ cbdr->delay = atoi(getenv("ENETC4_CBDR_DELAY"));
+
+
+ ENETC_PMD_DEBUG("CBDR timeout = %u and delay = %u", cbdr->timeout,
+ cbdr->delay);
+ /* The base address of the Control BD Ring must be 128 bytes aligned */
+ cbdr->dma_base_align = cbdr->dma_base;
+ cbdr->addr_base_align = cbdr->addr_base;
+
+ cbdr->next_to_clean = 0;
+ cbdr->next_to_use = 0;
+ rte_spinlock_init(&cbdr->ring_lock);
+
+ netc_cbdr_write(cbdr->regs.mr, ~((uint32_t)NETC_CBDRMR_EN));
+ /* Step 1: Configure the base address of the Control BD Ring */
+ netc_cbdr_write(cbdr->regs.bar0, lower_32_bits(cbdr->dma_base_align));
+ netc_cbdr_write(cbdr->regs.bar1, upper_32_bits(cbdr->dma_base_align));
+
+ /* Step 2: Configure the producer index register */
+ netc_cbdr_write(cbdr->regs.pir, cbdr->next_to_clean);
+
+ /* Step 3: Configure the consumer index register */
+ netc_cbdr_write(cbdr->regs.cir, cbdr->next_to_use);
+ /* Step 4: Configure the number of BDs of the Control BD Ring */
+ netc_cbdr_write(cbdr->regs.lenr, cbdr->bd_num);
+
+ /* Step 5: Enable the Control BD Ring */
+ netc_cbdr_write(cbdr->regs.mr, NETC_CBDRMR_EN);
+
+ return 0;
+}
+
+void
+netc_free_cbdr(struct netc_cbdr *cbdr)
+{
+ /* Disable the Control BD Ring */
+ if (cbdr->regs.mr != NULL) {
+ netc_cbdr_write(cbdr->regs.mr, 0);
+ rte_free(cbdr->addr_base);
+ memset(cbdr, 0, sizeof(*cbdr));
+ }
+}
+
+int
+enetc4_setup_cbdr(struct rte_eth_dev *dev, struct enetc_hw *hw,
+ int bd_count, struct netc_cbdr *cbdr)
+{
+ struct netc_cbdr_regs regs;
+
+ regs.pir = (void *)((size_t)hw->reg + ENETC4_SICBDRPIR);
+ regs.cir = (void *)((size_t)hw->reg + ENETC4_SICBDRCIR);
+ regs.mr = (void *)((size_t)hw->reg + ENETC4_SICBDRMR);
+ regs.st = (void *)((size_t)hw->reg + ENETC4_SICBDRSR);
+ regs.bar0 = (void *)((size_t)hw->reg + ENETC4_SICBDRBAR0);
+ regs.bar1 = (void *)((size_t)hw->reg + ENETC4_SICBDRBAR1);
+ regs.lenr = (void *)((size_t)hw->reg + ENETC4_SICBDRLENR);
+ regs.sictr0 = (void *)((size_t)hw->reg + ENETC4_SICTR0);
+ regs.sictr1 = (void *)((size_t)hw->reg + ENETC4_SICTR1);
+
+ return netc_setup_cbdr(dev, bd_count, &regs, cbdr);
+}
diff --git a/drivers/net/enetc/meson.build b/drivers/net/enetc/meson.build
index 6e00758a36..fe8fdc07f3 100644
--- a/drivers/net/enetc/meson.build
+++ b/drivers/net/enetc/meson.build
@@ -8,10 +8,11 @@ endif
deps += ['common_dpaax']
sources = files(
- 'enetc4_ethdev.c',
- 'enetc4_vf.c',
+ 'enetc4_ethdev.c',
+ 'enetc4_vf.c',
'enetc_ethdev.c',
'enetc_rxtx.c',
+ 'enetc_cbdr.c',
)
includes += include_directories('base')
diff --git a/drivers/net/enetc/ntmp.h b/drivers/net/enetc/ntmp.h
new file mode 100644
index 0000000000..0dbc006f26
--- /dev/null
+++ b/drivers/net/enetc/ntmp.h
@@ -0,0 +1,110 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#ifndef ENETC_NTMP_H
+#define ENETC_NTMP_H
+
+#include "compat.h"
+#include <linux/types.h>
+
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
+#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+
+/* define NTMP Operation Commands */
+#define NTMP_CMD_DELETE BIT(0)
+#define NTMP_CMD_UPDATE BIT(1)
+#define NTMP_CMD_QUERY BIT(2)
+
+#define NETC_CBDR_TIMEOUT 1000 /* us */
+#define NETC_CBDR_BD_NUM 256
+#define NETC_CBDR_BASE_ADDR_ALIGN 128
+#define NETC_CBD_DATA_ADDR_ALIGN 16
+#define NETC_CBDRMR_EN BIT(31)
+
+#define NTMP_RESP_HDR_ERR GENMASK(11, 0)
+
+struct common_req_data {
+ uint16_t update_act;
+ uint8_t dbg_opt;
+ uint8_t query_act:4;
+ uint8_t tbl_ver:4;
+};
+
+/* RSS Table Request and Response Data Buffer Format */
+struct rsst_req_query {
+ struct common_req_data crd;
+ uint32_t entry_id;
+};
+
+/* struct for update operation */
+struct rsst_req_update {
+ struct common_req_data crd;
+ uint32_t entry_id;
+ uint8_t groups[];
+};
+
+/* The format of the control buffer descriptor */
+union netc_cbd {
+ struct {
+ uint64_t addr;
+ uint32_t len;
+ uint8_t cmd;
+ uint8_t resv1:4;
+ uint8_t access_method:4;
+ uint8_t table_id;
+ uint8_t hdr_ver:6;
+ uint8_t cci:1;
+ uint8_t rr:1;
+ uint32_t resv2[3];
+ uint32_t npf;
+ } ntmp_req_hdr; /* NTMP Request Message Header Format */
+
+ struct {
+ uint32_t resv1[3];
+ uint16_t num_matched;
+ uint16_t error_rr; /* bit0~11: error, bit12~14: reserved, bit15: rr */
+ uint32_t resv3[4];
+ } ntmp_resp_hdr; /* NTMP Response Message Header Format */
+};
+
+struct netc_cbdr_regs {
+ void *pir;
+ void *cir;
+ void *mr;
+ void *st;
+
+ void *bar0;
+ void *bar1;
+ void *lenr;
+
+ /* station interface current time register */
+ void *sictr0;
+ void *sictr1;
+};
+
+struct netc_cbdr {
+ struct netc_cbdr_regs regs;
+
+ int bd_num;
+ int next_to_use;
+ int next_to_clean;
+
+ int dma_size;
+ void *addr_base;
+ void *addr_base_align;
+ dma_addr_t dma_base;
+ dma_addr_t dma_base_align;
+ struct rte_eth_dev *dma_dev;
+
+ rte_spinlock_t ring_lock; /* Avoid race condition */
+
+ /* bitmap of used words of SGCL table */
+ unsigned long *sgclt_used_words;
+ uint32_t sgclt_words_num;
+ uint32_t timeout;
+ uint32_t delay;
+};
+
+#endif /* ENETC_NTMP_H */
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (6 preceding siblings ...)
2024-10-18 7:26 ` [v1 07/12] net/enetc: Add support for multiple queues with RSS vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 09/12] net/enetc: Add multicast and promiscuous mode support vanshika.shukla
` (4 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces Virtual Function (VF) to Physical Function (PF) messaging,
enabling VFs to communicate with the Linux PF driver for feature
enablement.
This patch also adds primary MAC address setup capability,
allowing VFs to configure their MAC addresses.
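As a usage sketch (illustrative only, standard ethdev API; the helper name and
the example address are placeholders), setting the primary MAC on a VF port
goes through rte_eth_dev_default_mac_addr_set(), which this series forwards to
the PF over the VSI-to-PSI mailbox:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Illustrative only: 02:00:00:00:00:01 is a locally administered
     * example address, not a value mandated by the driver.
     */
    static int set_vf_primary_mac(uint16_t port_id)
    {
            struct rte_ether_addr mac = {
                    .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
            };

            return rte_eth_dev_default_mac_addr_set(port_id, &mac);
    }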
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/enetc/base/enetc4_hw.h | 22 +++
drivers/net/enetc/enetc.h | 99 +++++++++++
drivers/net/enetc/enetc4_vf.c | 260 +++++++++++++++++++++++++++++
3 files changed, 381 insertions(+)
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 49446f2cb4..f0b7563d22 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -14,6 +14,12 @@
#define ENETC4_DEV_ID_VF 0xef00
#define PCI_VENDOR_ID_NXP 0x1131
+struct enetc_msg_swbd {
+ void *vaddr;
+ uint64_t dma;
+ int size;
+};
+
/* enetc4 txbd flags */
#define ENETC4_TXBD_FLAGS_L4CS BIT(0)
#define ENETC4_TXBD_FLAGS_L_TX_CKSUM BIT(3)
@@ -103,6 +109,9 @@
#define IFMODE_SGMII 5
#define PM_IF_MODE_ENA BIT(15)
+#define ENETC4_DEF_VSI_WAIT_TIMEOUT_UPDATE 100
+#define ENETC4_DEF_VSI_WAIT_DELAY_UPDATE 2000 /* us */
+
/* Station interface statistics */
#define ENETC4_SIROCT0 0x300
#define ENETC4_SIRFRM0 0x308
@@ -110,6 +119,19 @@
#define ENETC4_SITFRM0 0x328
#define ENETC4_SITDFCR 0x340
+/* VSI MSG Registers */
+#define ENETC4_VSIMSGSR 0x204 /* RO */
+#define ENETC4_VSIMSGSR_MB BIT(0)
+#define ENETC4_VSIMSGSR_MS BIT(1)
+#define ENETC4_VSIMSGSNDAR0 0x210
+#define ENETC4_VSIMSGSNDAR1 0x214
+
+#define ENETC4_VSIMSGRR 0x208
+#define ENETC4_VSIMSGRR_MR BIT(0)
+
+#define ENETC_SIMSGSR_SET_MC(val) ((val) << 16)
+#define ENETC_SIMSGSR_GET_MC(val) ((val) >> 16)
+
/* Control BDR regs */
#define ENETC4_SICBDRMR 0x800
#define ENETC4_SICBDRSR 0x804 /* RO */
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 354cd761d7..c0fba9d618 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -41,6 +41,11 @@
/* eth name size */
#define ENETC_ETH_NAMESIZE 20
+#define ENETC_DEFAULT_MSG_SIZE 1024 /* max size */
+
+/* Message length is in multiple of 32 bytes */
+#define ENETC_VSI_PSI_MSG_SIZE 32
+
/* size for marking hugepage non-cacheable */
#define SIZE_2MB 0x200000
@@ -123,6 +128,100 @@ struct enetc_eth_adapter {
#define ENETC_DEV_PRIVATE_TO_INTR(adapter) \
(&((struct enetc_eth_adapter *)adapter)->intr)
+/* Class ID for PSI-TO-VSI messages */
+#define ENETC_MSG_CLASS_ID_CMD_SUCCESS 0x1
+#define ENETC_MSG_CLASS_ID_PERMISSION_DENY 0x2
+#define ENETC_MSG_CLASS_ID_CMD_NOT_SUPPORT 0x3
+#define ENETC_MSG_CLASS_ID_PSI_BUSY 0x4
+#define ENETC_MSG_CLASS_ID_CRC_ERROR 0x5
+#define ENETC_MSG_CLASS_ID_PROTO_NOT_SUPPORT 0x6
+#define ENETC_MSG_CLASS_ID_INVALID_MSG_LEN 0x7
+#define ENETC_MSG_CLASS_ID_CMD_TIMEOUT 0x8
+#define ENETC_MSG_CLASS_ID_CMD_DEFERED 0xf
+
+#define ENETC_PROMISC_DISABLE 0x41
+#define ENETC_PROMISC_ENABLE 0x43
+#define ENETC_ALLMULTI_PROMISC_DIS 0x81
+#define ENETC_ALLMULTI_PROMISC_EN 0x83
+
+
+/* Enum for class IDs */
+enum enetc_msg_cmd_class_id {
+ ENETC_CLASS_ID_MAC_FILTER = 0x20,
+};
+
+/* Enum for command IDs */
+enum enetc_msg_cmd_id {
+ ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
+};
+
+enum mac_addr_status {
+ ENETC_INVALID_MAC_ADDR = 0x0,
+ ENETC_DUPLICATE_MAC_ADDR = 0X1,
+ ENETC_MAC_ADDR_NOT_FOUND = 0X2,
+};
+
+/* PSI-VSI command header format */
+struct enetc_msg_cmd_header {
+ uint16_t csum; /* INET_CHECKSUM */
+ uint8_t class_id; /* Command class type */
+ uint8_t cmd_id; /* Denotes the specific required action */
+ uint8_t proto_ver; /* Supported VSI-PSI command protocol version */
+ uint8_t len; /* Extended message body length */
+ uint8_t reserved_1;
+ uint8_t cookie; /* Control command execution asynchronously on PSI side */
+ uint64_t reserved_2;
+};
+
+/* VF-PF set primary MAC address message format */
+struct enetc_msg_cmd_set_primary_mac {
+ struct enetc_msg_cmd_header header;
+ uint8_t count; /* number of MAC addresses */
+ uint8_t reserved_1;
+ uint16_t reserved_2;
+ struct rte_ether_addr addr;
+};
+
+struct enetc_msg_cmd_set_promisc {
+ struct enetc_msg_cmd_header header;
+ uint8_t op_type;
+};
+
+struct enetc_msg_cmd_get_link_status {
+ struct enetc_msg_cmd_header header;
+};
+
+struct enetc_msg_cmd_get_link_speed {
+ struct enetc_msg_cmd_header header;
+};
+
+struct enetc_msg_cmd_set_vlan_promisc {
+ struct enetc_msg_cmd_header header;
+ uint8_t op;
+ uint8_t reserved;
+};
+
+struct enetc_msg_vlan_exact_filter {
+ struct enetc_msg_cmd_header header;
+ uint8_t vlan_count;
+ uint8_t reserved_1;
+ uint16_t reserved_2;
+ uint16_t vlan_id;
+ uint8_t tpid;
+ uint8_t reserved2;
+};
+
+struct enetc_psi_reply_msg {
+ uint8_t class_id;
+ uint8_t status;
+};
+
+/* msg size encoding: default and max msg value of 1024B encoded as 0 */
+static inline uint32_t enetc_vsi_set_msize(uint32_t size)
+{
+ return size < ENETC_DEFAULT_MSG_SIZE ? size >> 5 : 0;
+}
+
/*
* ENETC4 function prototypes
*/
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index a9fb33c432..6bdd476f0a 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -8,6 +8,51 @@
#include "enetc_logs.h"
#include "enetc.h"
+#define ENETC_CRC_TABLE_SIZE 256
+#define ENETC_POLY 0x1021
+#define ENETC_CRC_INIT 0xffff
+#define ENETC_BYTE_SIZE 8
+#define ENETC_MSB_BIT 0x8000
+
+uint16_t enetc_crc_table[ENETC_CRC_TABLE_SIZE];
+bool enetc_crc_gen;
+
+static void
+enetc_gen_crc_table(void)
+{
+ uint16_t crc = 0;
+ uint16_t c;
+
+ for (int i = 0; i < ENETC_CRC_TABLE_SIZE; i++) {
+ crc = 0;
+ c = i << ENETC_BYTE_SIZE;
+ for (int j = 0; j < ENETC_BYTE_SIZE; j++) {
+ if ((crc ^ c) & ENETC_MSB_BIT)
+ crc = (crc << 1) ^ ENETC_POLY;
+ else
+ crc = crc << 1;
+ c = c << 1;
+ }
+
+ enetc_crc_table[i] = crc;
+ }
+
+ enetc_crc_gen = true;
+}
+
+static uint16_t
+enetc_crc_calc(uint16_t crc, const uint8_t *buffer, size_t len)
+{
+ uint8_t data;
+
+ while (len--) {
+ data = *buffer;
+ crc = (crc << 8) ^ enetc_crc_table[((crc >> 8) ^ data) & 0xff];
+ buffer++;
+ }
+ return crc;
+}
+
int
enetc4_vf_dev_stop(struct rte_eth_dev *dev __rte_unused)
{
@@ -47,6 +92,217 @@ enetc4_vf_stats_get(struct rte_eth_dev *dev,
return 0;
}
+
+static void
+enetc_msg_vf_fill_common_hdr(struct enetc_msg_swbd *msg,
+ uint8_t class_id, uint8_t cmd_id, uint8_t proto_ver,
+ uint8_t len, uint8_t cookie)
+{
+ struct enetc_msg_cmd_header *hdr = msg->vaddr;
+
+ hdr->class_id = class_id;
+ hdr->cmd_id = cmd_id;
+ hdr->proto_ver = proto_ver;
+ hdr->len = len;
+ hdr->cookie = cookie;
+ /* Compute the CRC starting two bytes into the message, as the first two bytes hold the checksum itself */
+ hdr->csum = rte_cpu_to_be_16(enetc_crc_calc(ENETC_CRC_INIT,
+ (uint8_t *)msg->vaddr + sizeof(uint16_t),
+ msg->size - sizeof(uint16_t)));
+
+ dcbf(hdr);
+}
+
+/* Messaging */
+static void
+enetc4_msg_vsi_write_msg(struct enetc_hw *hw,
+ struct enetc_msg_swbd *msg)
+{
+ uint32_t val;
+
+ val = enetc_vsi_set_msize(msg->size) | lower_32_bits(msg->dma);
+ enetc_wr(hw, ENETC4_VSIMSGSNDAR1, upper_32_bits(msg->dma));
+ enetc_wr(hw, ENETC4_VSIMSGSNDAR0, val);
+}
+
+static void
+enetc4_msg_vsi_reply_msg(struct enetc_hw *enetc_hw, struct enetc_psi_reply_msg *reply_msg)
+{
+ int vsimsgsr;
+ int8_t class_id = 0;
+ uint8_t status = 0;
+
+ vsimsgsr = enetc_rd(enetc_hw, ENETC4_VSIMSGSR);
+
+ /* Extracting 8 bits of message result in class_id */
+ class_id |= ((ENETC_SIMSGSR_GET_MC(vsimsgsr) >> 8) & 0xff);
+
+ /* Extracting 4 bits of message result in status */
+ status |= ((ENETC_SIMSGSR_GET_MC(vsimsgsr) >> 4) & 0xf);
+
+ reply_msg->class_id = class_id;
+ reply_msg->status = status;
+}
+
+static int
+enetc4_msg_vsi_send(struct enetc_hw *enetc_hw, struct enetc_msg_swbd *msg)
+{
+ int timeout = ENETC4_DEF_VSI_WAIT_TIMEOUT_UPDATE;
+ int delay_us = ENETC4_DEF_VSI_WAIT_DELAY_UPDATE;
+ uint8_t class_id = 0;
+ int err = 0;
+ int vsimsgsr;
+
+ enetc4_msg_vsi_write_msg(enetc_hw, msg);
+
+ do {
+ vsimsgsr = enetc_rd(enetc_hw, ENETC4_VSIMSGSR);
+ if (!(vsimsgsr & ENETC4_VSIMSGSR_MB))
+ break;
+ rte_delay_us(delay_us);
+ } while (--timeout);
+
+ if (!timeout) {
+ ENETC_PMD_ERR("Message not processed by PSI");
+ return -ETIMEDOUT;
+ }
+ /* check for message delivery error */
+ if (vsimsgsr & ENETC4_VSIMSGSR_MS) {
+ ENETC_PMD_ERR("Transfer error when copying the data");
+ return -EIO;
+ }
+
+ class_id |= ((ENETC_SIMSGSR_GET_MC(vsimsgsr) >> 8) & 0xff);
+
+ /* Check the user-defined completion status. */
+ if (class_id != ENETC_MSG_CLASS_ID_CMD_SUCCESS) {
+ switch (class_id) {
+ case ENETC_MSG_CLASS_ID_PERMISSION_DENY:
+ ENETC_PMD_ERR("Permission denied");
+ err = -EACCES;
+ break;
+ case ENETC_MSG_CLASS_ID_CMD_NOT_SUPPORT:
+ ENETC_PMD_ERR("Command not supported");
+ err = -EOPNOTSUPP;
+ break;
+ case ENETC_MSG_CLASS_ID_PSI_BUSY:
+ ENETC_PMD_ERR("PSI Busy");
+ err = -EBUSY;
+ break;
+ case ENETC_MSG_CLASS_ID_CMD_TIMEOUT:
+ ENETC_PMD_ERR("Command timeout");
+ err = -ETIME;
+ break;
+ case ENETC_MSG_CLASS_ID_CRC_ERROR:
+ ENETC_PMD_ERR("CRC error");
+ err = -EIO;
+ break;
+ case ENETC_MSG_CLASS_ID_PROTO_NOT_SUPPORT:
+ ENETC_PMD_ERR("Protocol Version not supported");
+ err = -EOPNOTSUPP;
+ break;
+ case ENETC_MSG_CLASS_ID_INVALID_MSG_LEN:
+ ENETC_PMD_ERR("Invalid message length");
+ err = -EINVAL;
+ break;
+ case ENETC_CLASS_ID_MAC_FILTER:
+ break;
+ default:
+ err = -EIO;
+ }
+ }
+
+ return err;
+}
+
+static int
+enetc4_vf_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_primary_mac *cmd;
+ struct enetc_msg_swbd *msg;
+ struct enetc_psi_reply_msg *reply_msg;
+ int msg_size;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ rte_free(reply_msg);
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_primary_mac),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ rte_free(reply_msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_primary_mac *)msg->vaddr;
+
+ cmd->count = 0;
+ memcpy(&cmd->addr.addr_bytes, addr, sizeof(struct rte_ether_addr));
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_CMD_ID_SET_PRIMARY_MAC, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_MAC_FILTER) {
+ switch (reply_msg->status) {
+ case ENETC_INVALID_MAC_ADDR:
+ ENETC_PMD_ERR("Invalid MAC address");
+ err = -EINVAL;
+ break;
+ case ENETC_DUPLICATE_MAC_ADDR:
+ ENETC_PMD_ERR("Duplicate MAC address");
+ err = -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (err) {
+ ENETC_PMD_ERR("VSI command execute error!");
+ goto end;
+ }
+
+ rte_ether_addr_copy((struct rte_ether_addr *)&cmd->addr,
+ &dev->data->mac_addrs[0]);
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(reply_msg);
+ rte_free(msg);
+ return err;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -63,6 +319,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_vf_stats_get,
+ .mac_addr_set = enetc4_vf_set_mac_addr,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
@@ -121,6 +378,9 @@ enetc4_vf_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
+ if (!enetc_crc_gen)
+ enetc_gen_crc_table();
+
/* Copy the permanent MAC address */
rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
ð_dev->data->mac_addrs[0]);
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 09/12] net/enetc: Add multicast and promiscuous mode support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (7 preceding siblings ...)
2024-10-18 7:26 ` [v1 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 10/12] net/enetc: Add link speed and status support vanshika.shukla
` (3 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Enables the ENETC4 PMD to handle multicast and promiscuous modes.
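For reference, a hypothetical application-side snippet (not part of this patch) showing how these modes are driven through the standard ethdev API; port_id is assumed to identify an ENETC4 port:
#include <rte_ethdev.h>
static int
enable_promisc_modes(uint16_t port_id)
{
	int ret;
	/* Unicast promiscuous: handled by enetc4_promiscuous_enable() on the
	 * PF and enetc4_vf_promisc_enable() on a VF in this series.
	 */
	ret = rte_eth_promiscuous_enable(port_id);
	if (ret != 0)
		return ret;
	/* All-multicast: handled by enetc4_vf_multicast_enable() on a VF */
	return rte_eth_allmulticast_enable(port_id);
}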
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 +
drivers/net/enetc/enetc.h | 5 +
drivers/net/enetc/enetc4_ethdev.c | 40 +++++
drivers/net/enetc/enetc4_vf.c | 265 ++++++++++++++++++++++++++++
4 files changed, 312 insertions(+)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 79430d0018..36d536d1f2 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Allmulticast mode = Y
RSS hash = Y
Packet type parsing = Y
Basic stats = Y
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index c0fba9d618..902912f4fb 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -144,15 +144,20 @@ struct enetc_eth_adapter {
#define ENETC_ALLMULTI_PROMISC_DIS 0x81
#define ENETC_ALLMULTI_PROMISC_EN 0x83
+#define ENETC_PROMISC_VLAN_DISABLE 0x1
+#define ENETC_PROMISC_VLAN_ENABLE 0x3
/* Enum for class IDs */
enum enetc_msg_cmd_class_id {
ENETC_CLASS_ID_MAC_FILTER = 0x20,
+ ENETC_CLASS_ID_VLAN_FILTER = 0x21,
};
/* Enum for command IDs */
enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
+ ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
+ ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
};
enum mac_addr_status {
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 075205a0e5..9df01b1e4d 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -592,6 +592,44 @@ enetc4_dev_close(struct rte_eth_dev *dev)
return ret;
}
+static int
+enetc4_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t psipmr = 0;
+
+ psipmr = enetc4_port_rd(enetc_hw, ENETC4_PSIPMMR);
+
+ /* Setting to enable promiscuous mode for all ports*/
+ psipmr |= PSIPMMR_SI_MAC_UP | PSIPMMR_SI_MAC_MP;
+
+ enetc4_port_wr(enetc_hw, ENETC4_PSIPMMR, psipmr);
+
+ return 0;
+}
+
+static int
+enetc4_promiscuous_disable(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t psipmr = 0;
+
+ /* Setting to disable promiscuous mode for SI0*/
+ psipmr = enetc4_port_rd(enetc_hw, ENETC4_PSIPMMR);
+ psipmr &= (~PSIPMMR_SI_MAC_UP);
+
+ if (dev->data->all_multicast == 0)
+ psipmr &= (~PSIPMMR_SI_MAC_MP);
+
+ enetc4_port_wr(enetc_hw, ENETC4_PSIPMMR, psipmr);
+
+ return 0;
+}
+
int
enetc4_dev_configure(struct rte_eth_dev *dev)
{
@@ -831,6 +869,8 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_stats_get,
.stats_reset = enetc4_stats_reset,
+ .promiscuous_enable = enetc4_promiscuous_enable,
+ .promiscuous_disable = enetc4_promiscuous_disable,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 6bdd476f0a..28cf83077c 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -303,6 +303,266 @@ enetc4_vf_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
return err;
}
+static int
+enetc4_vf_promisc_send_message(struct rte_eth_dev *dev, bool promisc_en)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_promisc *cmd;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_promisc), ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_promisc *)msg->vaddr;
+
+ /* op_type bit layout (from the message format):
+ * bits 7-6: type
+ * bit 1: promisc, bit 0: flush
+ */
+
+ if (promisc_en)
+ cmd->op_type = ENETC_PROMISC_ENABLE;
+ else
+ cmd->op_type = ENETC_PROMISC_DISABLE;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_CMD_ID_SET_MAC_PROMISCUOUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int
+enetc4_vf_allmulti_send_message(struct rte_eth_dev *dev, bool mc_promisc)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_promisc *cmd;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_promisc),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_promisc *)msg->vaddr;
+
+ /* op_type bit layout (from the message format):
+ * bits 7-6: type
+ * bit 1: promisc, bit 0: flush
+ */
+
+ if (mc_promisc)
+ cmd->op_type = ENETC_ALLMULTI_PROMISC_EN;
+ else
+ cmd->op_type = ENETC_ALLMULTI_PROMISC_DIS;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_CMD_ID_SET_MAC_PROMISCUOUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+
+static int
+enetc4_vf_multicast_enable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_allmulti_send_message(dev, true);
+ if (err) {
+ ENETC_PMD_ERR("Failed to enable multicast promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_multicast_disable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_allmulti_send_message(dev, false);
+ if (err) {
+ ENETC_PMD_ERR("Failed to disable multicast promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_promisc_enable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_promisc_send_message(dev, true);
+ if (err) {
+ ENETC_PMD_ERR("Failed to enable promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_promisc_disable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_promisc_send_message(dev, false);
+ if (err) {
+ ENETC_PMD_ERR("Failed to disable promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_vlan_promisc(struct rte_eth_dev *dev, bool promisc_en)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_vlan_promisc *cmd;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_vlan_promisc),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_vlan_promisc *)msg->vaddr;
+ /* op bit layout (from the message format):
+ * bit 1: promisc
+ * bit 0: flush
+ */
+
+ if (promisc_en)
+ cmd->op = ENETC_PROMISC_VLAN_ENABLE;
+ else
+ cmd->op = ENETC_PROMISC_VLAN_DISABLE;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_VLAN_FILTER,
+ ENETC_CMD_ID_SET_VLAN_PROMISCUOUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_unused)
+{
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->dev_conf.rxmode.offloads) {
+ ENETC_PMD_DEBUG("VLAN filter table entry inserted:"
+ "Disabling VLAN promisc mode");
+ err = enetc4_vf_vlan_promisc(dev, false);
+ if (err) {
+ ENETC_PMD_ERR("Added VLAN filter table entry:"
+ "Failed to disable promiscuous mode");
+ return err;
+ }
+ } else {
+ ENETC_PMD_DEBUG("Enabling VLAN promisc mode");
+ err = enetc4_vf_vlan_promisc(dev, true);
+ if (err) {
+ ENETC_PMD_ERR("Vlan filter table empty:"
+ "Failed to enable promiscuous mode");
+ return err;
+ }
+ }
+
+ return 0;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -320,6 +580,11 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_vf_stats_get,
.mac_addr_set = enetc4_vf_set_mac_addr,
+ .promiscuous_enable = enetc4_vf_promisc_enable,
+ .promiscuous_disable = enetc4_vf_promisc_disable,
+ .allmulticast_enable = enetc4_vf_multicast_enable,
+ .allmulticast_disable = enetc4_vf_multicast_disable,
+ .vlan_offload_set = enetc4_vf_vlan_offload_set,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 10/12] net/enetc: Add link speed and status support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (8 preceding siblings ...)
2024-10-18 7:26 ` [v1 09/12] net/enetc: Add multicast and promiscuous mode support vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 11/12] net/enetc: Add link status notification support vanshika.shukla
` (2 subsequent siblings)
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the link update operation.
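For reference, a hypothetical application-side sketch (not part of this patch) of reading the link through this new op:
#include <stdio.h>
#include <rte_ethdev.h>
static void
show_link(uint16_t port_id)
{
	struct rte_eth_link link;
	/* Invokes the PMD's .link_update (enetc4_link_update /
	 * enetc4_vf_link_update in this patch) under the hood.
	 */
	if (rte_eth_link_get(port_id, &link) == 0)
		printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
}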
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 +
drivers/net/enetc/base/enetc4_hw.h | 9 ++
drivers/net/enetc/enetc.h | 25 ++++
drivers/net/enetc/enetc4_ethdev.c | 44 ++++++
drivers/net/enetc/enetc4_vf.c | 216 ++++++++++++++++++++++++++++
5 files changed, 296 insertions(+)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 36d536d1f2..78b06e9841 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Link status = Y
Promiscuous mode = Y
Allmulticast mode = Y
RSS hash = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index f0b7563d22..d899b82b9c 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -109,6 +109,15 @@ struct enetc_msg_swbd {
#define IFMODE_SGMII 5
#define PM_IF_MODE_ENA BIT(15)
+/* Port MAC 0 Interface Status Register */
+#define ENETC4_PM_IF_STATUS(mac) (0x5304 + (mac) * 0x400)
+#define ENETC4_LINK_MODE 0x0000000000080000ULL
+#define ENETC4_LINK_STATUS 0x0000000000010000ULL
+#define ENETC4_LINK_SPEED_MASK 0x0000000000060000ULL
+#define ENETC4_LINK_SPEED_10M 0x0ULL
+#define ENETC4_LINK_SPEED_100M 0x0000000000020000ULL
+#define ENETC4_LINK_SPEED_1G 0x0000000000040000ULL
+
#define ENETC4_DEF_VSI_WAIT_TIMEOUT_UPDATE 100
#define ENETC4_DEF_VSI_WAIT_DELAY_UPDATE 2000 /* us */
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 902912f4fb..7f5329de33 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -151,6 +151,8 @@ struct enetc_eth_adapter {
enum enetc_msg_cmd_class_id {
ENETC_CLASS_ID_MAC_FILTER = 0x20,
ENETC_CLASS_ID_VLAN_FILTER = 0x21,
+ ENETC_CLASS_ID_LINK_STATUS = 0x80,
+ ENETC_CLASS_ID_LINK_SPEED = 0x81
};
/* Enum for command IDs */
@@ -158,6 +160,8 @@ enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
+ ENETC_CMD_ID_GET_LINK_STATUS = 0,
+ ENETC_CMD_ID_GET_LINK_SPEED = 0
};
enum mac_addr_status {
@@ -166,6 +170,27 @@ enum mac_addr_status {
ENETC_MAC_ADDR_NOT_FOUND = 0X2,
};
+enum link_status {
+ ENETC_LINK_UP = 0x0,
+ ENETC_LINK_DOWN = 0x1
+};
+
+enum speed {
+ ENETC_SPEED_UNKNOWN = 0x0,
+ ENETC_SPEED_10_HALF_DUPLEX = 0x1,
+ ENETC_SPEED_10_FULL_DUPLEX = 0x2,
+ ENETC_SPEED_100_HALF_DUPLEX = 0x3,
+ ENETC_SPEED_100_FULL_DUPLEX = 0x4,
+ ENETC_SPEED_1000 = 0x5,
+ ENETC_SPEED_2500 = 0x6,
+ ENETC_SPEED_5000 = 0x7,
+ ENETC_SPEED_10G = 0x8,
+ ENETC_SPEED_25G = 0x9,
+ ENETC_SPEED_50G = 0xA,
+ ENETC_SPEED_100G = 0xB,
+ ENETC_SPEED_NOT_SUPPORTED = 0xF
+};
+
/* PSI-VSI command header format */
struct enetc_msg_cmd_header {
uint16_t csum; /* INET_CHECKSUM */
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 9df01b1e4d..29283f2d44 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -75,6 +75,49 @@ enetc4_dev_stop(struct rte_eth_dev *dev)
return 0;
}
+/* return 0 means link status changed, -1 means not changed */
+static int
+enetc4_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct rte_eth_link link;
+ uint32_t status;
+
+ PMD_INIT_FUNC_TRACE();
+
+ memset(&link, 0, sizeof(link));
+
+ status = enetc4_port_rd(enetc_hw, ENETC4_PM_IF_STATUS(0));
+
+ if (status & ENETC4_LINK_MODE)
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ else
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+
+ if (status & ENETC4_LINK_STATUS)
+ link.link_status = RTE_ETH_LINK_UP;
+ else
+ link.link_status = RTE_ETH_LINK_DOWN;
+
+ switch (status & ENETC4_LINK_SPEED_MASK) {
+ case ENETC4_LINK_SPEED_1G:
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
+ break;
+
+ case ENETC4_LINK_SPEED_100M:
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ break;
+
+ default:
+ case ENETC4_LINK_SPEED_10M:
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
+ }
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
static int
enetc4_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
{
@@ -867,6 +910,7 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_stop = enetc4_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .link_update = enetc4_link_update,
.stats_get = enetc4_stats_get,
.stats_reset = enetc4_stats_reset,
.promiscuous_enable = enetc4_promiscuous_enable,
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 28cf83077c..307fabf2c6 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -206,6 +206,8 @@ enetc4_msg_vsi_send(struct enetc_hw *enetc_hw, struct enetc_msg_swbd *msg)
err = -EINVAL;
break;
case ENETC_CLASS_ID_MAC_FILTER:
+ case ENETC_CLASS_ID_LINK_STATUS:
+ case ENETC_CLASS_ID_LINK_SPEED:
break;
default:
err = -EIO;
@@ -479,6 +481,216 @@ enetc4_vf_promisc_disable(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetc4_vf_get_link_status(struct rte_eth_dev *dev, struct enetc_psi_reply_msg *reply_msg)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_get_link_status),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_LINK_STATUS,
+ ENETC_CMD_ID_GET_LINK_STATUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int
+enetc4_vf_get_link_speed(struct rte_eth_dev *dev, struct enetc_psi_reply_msg *reply_msg)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_get_link_speed),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_LINK_SPEED,
+ ENETC_CMD_ID_GET_LINK_SPEED, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int
+enetc4_vf_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+{
+ struct enetc_psi_reply_msg *reply_msg;
+ struct rte_eth_link link;
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ err = enetc4_vf_get_link_status(dev, reply_msg);
+ if (err) {
+ ENETC_PMD_ERR("Failed to get link status");
+ rte_free(reply_msg);
+ return err;
+ }
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_LINK_STATUS) {
+ switch (reply_msg->status) {
+ case ENETC_LINK_UP:
+ link.link_status = RTE_ETH_LINK_UP;
+ break;
+ case ENETC_LINK_DOWN:
+ link.link_status = RTE_ETH_LINK_DOWN;
+ break;
+ default:
+ ENETC_PMD_ERR("Unknown link status");
+ break;
+ }
+ } else {
+ ENETC_PMD_ERR("Wrong reply message");
+ return -1;
+ }
+
+ err = enetc4_vf_get_link_speed(dev, reply_msg);
+ if (err) {
+ ENETC_PMD_ERR("Failed to get link speed");
+ rte_free(reply_msg);
+ return err;
+ }
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_LINK_SPEED) {
+ switch (reply_msg->status) {
+ case ENETC_SPEED_UNKNOWN:
+ ENETC_PMD_DEBUG("Speed unknown");
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ break;
+ case ENETC_SPEED_10_HALF_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ break;
+ case ENETC_SPEED_10_FULL_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_100_HALF_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ break;
+ case ENETC_SPEED_100_FULL_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_1000:
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_2500:
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_5000:
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_10G:
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_25G:
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_50G:
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_100G:
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_NOT_SUPPORTED:
+ ENETC_PMD_DEBUG("Speed not supported");
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ break;
+ default:
+ ENETC_PMD_ERR("Unknown speed status");
+ break;
+ }
+ } else {
+ ENETC_PMD_ERR("Wrong reply message");
+ return -1;
+ }
+
+ link.link_autoneg = 1;
+
+ rte_eth_linkstatus_set(dev, &link);
+
+ rte_free(reply_msg);
+ return 0;
+}
+
static int
enetc4_vf_vlan_promisc(struct rte_eth_dev *dev, bool promisc_en)
{
@@ -584,6 +796,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.promiscuous_disable = enetc4_vf_promisc_disable,
.allmulticast_enable = enetc4_vf_multicast_enable,
.allmulticast_disable = enetc4_vf_multicast_disable,
+ .link_update = enetc4_vf_link_update,
.vlan_offload_set = enetc4_vf_vlan_offload_set,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
@@ -685,6 +898,9 @@ enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
ENETC_PMD_DEBUG("port_id %d vendorID=0x%x deviceID=0x%x",
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
+ /* update link */
+ enetc4_vf_link_update(eth_dev, 0);
+
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 11/12] net/enetc: Add link status notification support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (9 preceding siblings ...)
2024-10-18 7:26 ` [v1 10/12] net/enetc: Add link speed and status support vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-18 7:26 ` [v1 12/12] net/enetc: Add MAC and VLAN filter support vanshika.shukla
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch supports link event notifications for ENETC4 PMD, enabling:
- Link up/down event notifications
- Notification of link speed changes
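For context, a minimal sketch (hypothetical, not part of this patch) of how an application would consume these notifications via the standard ethdev API; the rte_eth_conf with intr_conf.lsc = 1 and the port_id are assumed to come from the application's init path:
#include <stdio.h>
#include <rte_ethdev.h>
static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;
	if (type != RTE_ETH_EVENT_INTR_LSC)
		return 0;
	/* Non-blocking read of the link state the PMD just updated */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u link is %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
	return 0;
}
/* Registered once before rte_eth_dev_start():
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *                               lsc_event_cb, NULL);
 */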
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc4_hw.h | 9 +-
drivers/net/enetc/enetc.h | 3 +
drivers/net/enetc/enetc4_ethdev.c | 16 ++-
drivers/net/enetc/enetc4_vf.c | 215 +++++++++++++++++++++++++++-
5 files changed, 239 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 78b06e9841..31a1955215 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Link status event = Y
Speed capabilities = Y
Link status = Y
Promiscuous mode = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index d899b82b9c..2da779e351 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -128,7 +128,14 @@ struct enetc_msg_swbd {
#define ENETC4_SITFRM0 0x328
#define ENETC4_SITDFCR 0x340
-/* VSI MSG Registers */
+/* Station interface interrupts */
+#define ENETC4_SIMSIVR 0xA30
+#define ENETC4_VSIIER 0xA00
+#define ENETC4_VSIIDR 0xA08
+#define ENETC4_VSIIER_MRIE BIT(9)
+#define ENETC4_SI_INT_IDX 0
+
+/* VSI Registers */
#define ENETC4_VSIMSGSR 0x204 /* RO */
#define ENETC4_VSIMSGSR_MB BIT(0)
#define ENETC4_VSIMSGSR_MS BIT(1)
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 7f5329de33..6b37cd95dd 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -161,6 +161,8 @@ enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
ENETC_CMD_ID_GET_LINK_STATUS = 0,
+ ENETC_CMD_ID_REGISTER_LINK_NOTIF = 1,
+ ENETC_CMD_ID_UNREGISTER_LINK_NOTIF = 2,
ENETC_CMD_ID_GET_LINK_SPEED = 0
};
@@ -280,6 +282,7 @@ const uint32_t *enetc4_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused
* enetc4_vf function prototype
*/
int enetc4_vf_dev_stop(struct rte_eth_dev *dev);
+int enetc4_vf_dev_intr(struct rte_eth_dev *eth_dev, bool enable);
/*
* RX/TX ENETC function prototypes
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 29283f2d44..ab420aa301 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -605,10 +605,13 @@ enetc4_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- if (hw->device_id == ENETC4_DEV_ID_VF)
+ if (hw->device_id == ENETC4_DEV_ID_VF) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0)
+ enetc4_vf_dev_intr(dev, false);
ret = enetc4_vf_dev_stop(dev);
- else
+ } else {
ret = enetc4_dev_stop(dev);
+ }
if (dev->data->nb_rx_queues > 1) {
/* Disable RSS */
@@ -719,6 +722,15 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
enetc4_port_wr(enetc_hw, ENETC4_PARCSCR, checksum);
+ /* Enable interrupts */
+ if (hw->device_id == ENETC4_DEV_ID_VF) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0) {
+ ret = enetc4_vf_dev_intr(dev, true);
+ if (ret)
+ ENETC_PMD_WARN("Failed to setup link interrupts");
+ }
+ }
+
/* Disable and reset RX and TX rings */
for (i = 0; i < dev->data->nb_rx_queues; i++)
enetc4_rxbdr_wr(enetc_hw, i, ENETC_RBMR, ENETC_BMR_RESET);
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 307fabf2c6..22266188ee 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -144,6 +144,69 @@ enetc4_msg_vsi_reply_msg(struct enetc_hw *enetc_hw, struct enetc_psi_reply_msg *
reply_msg->status = status;
}
+static void
+enetc4_msg_get_psi_msg(struct enetc_hw *enetc_hw, struct enetc_psi_reply_msg *reply_msg)
+{
+ int vsimsgrr;
+ int8_t class_id = 0;
+ uint8_t status = 0;
+
+ vsimsgrr = enetc_rd(enetc_hw, ENETC4_VSIMSGRR);
+
+ /* Extracting 8 bits of message result in class_id */
+ class_id |= ((ENETC_SIMSGSR_GET_MC(vsimsgrr) >> 8) & 0xff);
+
+ /* Extracting 4 bits of message result in status */
+ status |= ((ENETC_SIMSGSR_GET_MC(vsimsgrr) >> 4) & 0xf);
+
+ reply_msg->class_id = class_id;
+ reply_msg->status = status;
+}
+
+static void
+enetc4_process_psi_msg(struct rte_eth_dev *eth_dev, struct enetc_hw *enetc_hw)
+{
+ struct enetc_psi_reply_msg *msg;
+ struct rte_eth_link link;
+ int ret = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ return;
+ }
+
+ rte_eth_linkstatus_get(eth_dev, &link);
+ enetc4_msg_get_psi_msg(enetc_hw, msg);
+
+ if (msg->class_id == ENETC_CLASS_ID_LINK_STATUS) {
+ switch (msg->status) {
+ case ENETC_LINK_UP:
+ ENETC_PMD_DEBUG("Link is up");
+ link.link_status = RTE_ETH_LINK_UP;
+ break;
+ case ENETC_LINK_DOWN:
+ ENETC_PMD_DEBUG("Link is down");
+ link.link_status = RTE_ETH_LINK_DOWN;
+ break;
+ default:
+ ENETC_PMD_ERR("Unknown link status 0x%x", msg->status);
+ break;
+ }
+ ret = rte_eth_linkstatus_set(eth_dev, &link);
+ if (!ret)
+ ENETC_PMD_DEBUG("Link status has been changed");
+
+ /* Process user registered callback */
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_INTR_LSC, NULL);
+ } else {
+ ENETC_PMD_ERR("Wrong message 0x%x", msg->class_id);
+ }
+
+ rte_free(msg);
+}
+
static int
enetc4_msg_vsi_send(struct enetc_hw *enetc_hw, struct enetc_msg_swbd *msg)
{
@@ -775,6 +838,55 @@ static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_un
return 0;
}
+static int
+enetc4_vf_link_register_notif(struct rte_eth_dev *dev, bool enable)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_swbd *msg;
+ struct rte_eth_link link;
+ int msg_size;
+ int err = 0;
+ uint8_t cmd;
+
+ PMD_INIT_FUNC_TRACE();
+ memset(&link, 0, sizeof(struct rte_eth_link));
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_get_link_status), ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+ if (enable)
+ cmd = ENETC_CMD_ID_REGISTER_LINK_NOTIF;
+ else
+ cmd = ENETC_CMD_ID_UNREGISTER_LINK_NOTIF;
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_LINK_STATUS,
+ cmd, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err)
+ ENETC_PMD_ERR("VSI msg error for link status notification");
+
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+
+ return err;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -866,6 +978,45 @@ enetc4_vf_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
return 0;
}
+static void
+enetc_vf_enable_mr_int(struct enetc_hw *hw, bool en)
+{
+ uint32_t val;
+
+ val = enetc_rd(hw, ENETC4_VSIIER);
+ val &= ~ENETC4_VSIIER_MRIE;
+ val |= (en) ? ENETC4_VSIIER_MRIE : 0;
+ enetc_wr(hw, ENETC4_VSIIER, val);
+ ENETC_PMD_DEBUG("Interrupt enable status (VSIIER) = 0x%x", val);
+}
+
+static void
+enetc4_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t status;
+
+ /* Disable interrupts before process */
+ enetc_vf_enable_mr_int(enetc_hw, false);
+
+ status = enetc_rd(enetc_hw, ENETC4_VSIIDR);
+ ENETC_PMD_DEBUG("Got INTR VSIIDR status = 0x%0x", status);
+ /* Check for PSI to VSI message interrupt */
+ if (!(status & ENETC4_VSIIER_MRIE)) {
+ ENETC_PMD_ERR("Interrupt is not PSI to VSI");
+ goto intr_clear;
+ }
+
+ enetc4_process_psi_msg(eth_dev, enetc_hw);
+intr_clear:
+ /* Clear Interrupts */
+ enetc_wr(enetc_hw, ENETC4_VSIIDR, 0xffffffff);
+ enetc_vf_enable_mr_int(enetc_hw, true);
+}
+
static int
enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -913,14 +1064,74 @@ enetc4_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
enetc4_vf_dev_init);
}
+int
+enetc4_vf_dev_intr(struct rte_eth_dev *eth_dev, bool enable)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ if (!(intr_handle && rte_intr_fd_get(intr_handle))) {
+ ENETC_PMD_ERR("No INTR handle");
+ return -1;
+ }
+ if (enable) {
+ /* if interrupts were configured on this device */
+ ret = rte_intr_callback_register(intr_handle,
+ enetc4_dev_interrupt_handler, eth_dev);
+ if (ret) {
+ ENETC_PMD_ERR("Failed to register INTR callback %d", ret);
+ return ret;
+ }
+ /* set one IRQ entry for PSI-to-VSI messaging */
+ /* Vector index 0 */
+ enetc_wr(enetc_hw, ENETC4_SIMSIVR, ENETC4_SI_INT_IDX);
+
+ /* enable uio/vfio intr/eventfd mapping */
+ ret = rte_intr_enable(intr_handle);
+ if (ret) {
+ ENETC_PMD_ERR("Failed to enable INTR %d", ret);
+ goto intr_enable_fail;
+ }
+
+ /* Enable message received interrupt */
+ enetc_vf_enable_mr_int(enetc_hw, true);
+ ret = enetc4_vf_link_register_notif(eth_dev, true);
+ if (ret) {
+ ENETC_PMD_ERR("Failed to register link notifications %d", ret);
+ goto disable;
+ }
+
+ return ret;
+ }
+
+ ret = enetc4_vf_link_register_notif(eth_dev, false);
+ if (ret)
+ ENETC_PMD_WARN("Failed to un-register link notification %d", ret);
+disable:
+ enetc_vf_enable_mr_int(enetc_hw, false);
+ ret = rte_intr_disable(intr_handle);
+ if (ret)
+ ENETC_PMD_WARN("Failed to disable INTR %d", ret);
+intr_enable_fail:
+ rte_intr_callback_unregister(intr_handle,
+ enetc4_dev_interrupt_handler, eth_dev);
+
+ return ret;
+}
+
static struct rte_pci_driver rte_enetc4_vf_pmd = {
.id_table = pci_vf_id_enetc4_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
.probe = enetc4_vf_pci_probe,
.remove = enetc4_pci_remove,
};
RTE_PMD_REGISTER_PCI(net_enetc4_vf, rte_enetc4_vf_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_enetc4_vf, pci_vf_id_enetc4_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_enetc4_vf, "* uio_pci_generic");
+RTE_PMD_REGISTER_KMOD_DEP(net_enetc4_vf, "* igb_uio | uio_pci_generic");
RTE_LOG_REGISTER_DEFAULT(enetc4_vf_logtype_pmd, NOTICE);
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v1 12/12] net/enetc: Add MAC and VLAN filter support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (10 preceding siblings ...)
2024-10-18 7:26 ` [v1 11/12] net/enetc: Add link status notification support vanshika.shukla
@ 2024-10-18 7:26 ` vanshika.shukla
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
12 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-18 7:26 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces support for:
- Up to 4 unicast MAC address filter entries
- Up to 4 VLAN filter entries
Enhances packet filtering capabilities for ENETC4 PMD.
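As a usage illustration (hypothetical application-side sketch, not part of this patch), the new filters are exercised through the standard ethdev calls; RTE_ETH_RX_OFFLOAD_VLAN_FILTER is assumed to be enabled in rxmode.offloads:
#include <rte_ethdev.h>
#include <rte_ether.h>
static int
setup_exact_filters(uint16_t port_id)
{
	/* Locally administered example address; up to 4 MAC entries here */
	struct rte_ether_addr extra_mac = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
	};
	int ret;
	ret = rte_eth_dev_mac_addr_add(port_id, &extra_mac, 0);
	if (ret != 0)
		return ret;
	/* Accept VLAN ID 100; up to 4 VLAN filter entries here */
	return rte_eth_dev_vlan_filter(port_id, 100, 1);
}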
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 +
drivers/net/enetc/base/enetc4_hw.h | 3 +
drivers/net/enetc/enetc.h | 11 ++
drivers/net/enetc/enetc4_vf.c | 229 +++++++++++++++++++++++++++-
4 files changed, 244 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 31a1955215..87425f45c9 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -9,6 +9,8 @@ Speed capabilities = Y
Link status = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
+VLAN filter = Y
RSS hash = Y
Packet type parsing = Y
Basic stats = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 2da779e351..e3eef6fe19 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -71,6 +71,9 @@ struct enetc_msg_swbd {
*/
#define ENETC4_MAC_MAXFRM_SIZE 2000
+/* Number of MAC Address Filter table entries */
+#define ENETC4_MAC_ENTRIES 4
+
/* Port MAC 0/1 Maximum Frame Length Register */
#define ENETC4_PM_MAXFRM(mac) (0x5014 + (mac) * 0x400)
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 6b37cd95dd..e79a0bf0a9 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -158,7 +158,10 @@ enum enetc_msg_cmd_class_id {
/* Enum for command IDs */
enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
+ ENETC_MSG_ADD_EXACT_MAC_ENTRIES = 1,
ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
+ ENETC_MSG_ADD_EXACT_VLAN_ENTRIES = 0,
+ ENETC_MSG_REMOVE_EXACT_VLAN_ENTRIES = 1,
ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
ENETC_CMD_ID_GET_LINK_STATUS = 0,
ENETC_CMD_ID_REGISTER_LINK_NOTIF = 1,
@@ -170,6 +173,14 @@ enum mac_addr_status {
ENETC_INVALID_MAC_ADDR = 0x0,
ENETC_DUPLICATE_MAC_ADDR = 0X1,
ENETC_MAC_ADDR_NOT_FOUND = 0X2,
+ ENETC_MAC_FILTER_NO_RESOURCE = 0x3
+};
+
+enum vlan_status {
+ ENETC_INVALID_VLAN_ENTRY = 0x0,
+ ENETC_DUPLICATE_VLAN_ENTRY = 0X1,
+ ENETC_VLAN_ENTRY_NOT_FOUND = 0x2,
+ ENETC_VLAN_NO_RESOURCE = 0x3
};
enum link_status {
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 22266188ee..fb27557378 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -17,6 +17,10 @@
uint16_t enetc_crc_table[ENETC_CRC_TABLE_SIZE];
bool enetc_crc_gen;
+/* Supported Rx offloads */
+static uint64_t dev_vf_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
static void
enetc_gen_crc_table(void)
{
@@ -53,6 +57,25 @@ enetc_crc_calc(uint16_t crc, const uint8_t *buffer, size_t len)
return crc;
}
+static int
+enetc4_vf_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = enetc4_dev_infos_get(dev, dev_info);
+ if (ret)
+ return ret;
+
+ dev_info->max_mtu = dev_info->max_rx_pktlen - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ dev_info->max_mac_addrs = ENETC4_MAC_ENTRIES;
+ dev_info->rx_offload_capa |= dev_vf_rx_offloads_sup;
+
+ return 0;
+}
+
int
enetc4_vf_dev_stop(struct rte_eth_dev *dev __rte_unused)
{
@@ -810,6 +833,201 @@ enetc4_vf_vlan_promisc(struct rte_eth_dev *dev, bool promisc_en)
return err;
}
+static int
+enetc4_vf_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ uint32_t index __rte_unused, uint32_t pool __rte_unused)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_primary_mac *cmd;
+ struct enetc_msg_swbd *msg;
+ struct enetc_psi_reply_msg *reply_msg;
+ int msg_size;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (!rte_is_valid_assigned_ether_addr(addr))
+ return -EINVAL;
+
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ rte_free(reply_msg);
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_primary_mac),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ rte_free(reply_msg);
+ return -ENOMEM;
+ }
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+ cmd = (struct enetc_msg_cmd_set_primary_mac *)msg->vaddr;
+ memcpy(&cmd->addr.addr_bytes, addr, sizeof(struct rte_ether_addr));
+ cmd->count = 1;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_MSG_ADD_EXACT_MAC_ENTRIES, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_MAC_FILTER) {
+ switch (reply_msg->status) {
+ case ENETC_INVALID_MAC_ADDR:
+ ENETC_PMD_ERR("Invalid MAC address");
+ err = -EINVAL;
+ break;
+ case ENETC_DUPLICATE_MAC_ADDR:
+ ENETC_PMD_ERR("Duplicate MAC address");
+ err = -EINVAL;
+ break;
+ case ENETC_MAC_FILTER_NO_RESOURCE:
+ ENETC_PMD_ERR("Not enough exact-match entries available");
+ err = -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (err) {
+ ENETC_PMD_ERR("VSI command execute error!");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(reply_msg);
+ rte_free(msg);
+ return err;
+}
+
+static int enetc4_vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_vlan_exact_filter *cmd;
+ struct enetc_msg_swbd *msg;
+ struct enetc_psi_reply_msg *reply_msg;
+ int msg_size;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ rte_free(reply_msg);
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_vlan_exact_filter),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ rte_free(reply_msg);
+ return -ENOMEM;
+ }
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+ cmd = (struct enetc_msg_vlan_exact_filter *)msg->vaddr;
+ cmd->vlan_count = 1;
+ cmd->vlan_id = vlan_id;
+
+ /* TPID 2-bit encoding value is taken from the H/W block guide:
+ * 00b Standard C-VLAN 0x8100
+ * 01b Standard S-VLAN 0x88A8
+ * 10b Custom VLAN as defined by CVLANR1[ETYPE]
+ * 11b Custom VLAN as defined by CVLANR2[ETYPE]
+ * Currently only standard C-VLAN is supported; the others may be added in future.
+ */
+ cmd->tpid = 0;
+
+ if (on) {
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_VLAN_FILTER,
+ ENETC_MSG_ADD_EXACT_VLAN_ENTRIES, 0, 0, 0);
+ } else {
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_VLAN_FILTER,
+ ENETC_MSG_REMOVE_EXACT_VLAN_ENTRIES, 0, 0, 0);
+ }
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_VLAN_FILTER) {
+ switch (reply_msg->status) {
+ case ENETC_INVALID_VLAN_ENTRY:
+ ENETC_PMD_ERR("VLAN entry not valid");
+ err = -EINVAL;
+ break;
+ case ENETC_DUPLICATE_VLAN_ENTRY:
+ ENETC_PMD_ERR("Duplicated VLAN entry");
+ err = -EINVAL;
+ break;
+ case ENETC_VLAN_ENTRY_NOT_FOUND:
+ ENETC_PMD_ERR("VLAN entry not found");
+ err = -EINVAL;
+ break;
+ case ENETC_VLAN_NO_RESOURCE:
+ ENETC_PMD_ERR("Not enough exact-match entries available");
+ err = -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (err) {
+ ENETC_PMD_ERR("VSI command execute error!");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(reply_msg);
+ rte_free(msg);
+ return err;
+}
+
static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_unused)
{
int err = 0;
@@ -838,6 +1056,12 @@ static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_un
return 0;
}
+static int
+enetc4_vf_mtu_set(struct rte_eth_dev *dev __rte_unused, uint16_t mtu __rte_unused)
+{
+ return 0;
+}
+
static int
enetc4_vf_link_register_notif(struct rte_eth_dev *dev, bool enable)
{
@@ -901,14 +1125,17 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_start = enetc4_vf_dev_start,
.dev_stop = enetc4_vf_dev_stop,
.dev_close = enetc4_dev_close,
- .dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_vf_stats_get,
+ .dev_infos_get = enetc4_vf_dev_infos_get,
+ .mtu_set = enetc4_vf_mtu_set,
.mac_addr_set = enetc4_vf_set_mac_addr,
+ .mac_addr_add = enetc4_vf_mac_addr_add,
.promiscuous_enable = enetc4_vf_promisc_enable,
.promiscuous_disable = enetc4_vf_promisc_disable,
.allmulticast_enable = enetc4_vf_multicast_enable,
.allmulticast_disable = enetc4_vf_multicast_disable,
.link_update = enetc4_vf_link_update,
+ .vlan_filter_set = enetc4_vf_vlan_filter_set,
.vlan_offload_set = enetc4_vf_vlan_offload_set,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
@ 2024-10-20 23:39 ` Stephen Hemminger
2024-10-20 23:52 ` Stephen Hemminger
` (3 subsequent siblings)
4 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-10-20 23:39 UTC (permalink / raw)
To: vanshika.shukla
Cc: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Anatoly Burakov, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:33 +0530
vanshika.shukla@nxp.com wrote:
> From: Vanshika Shukla <vanshika.shukla@nxp.com>
>
> This patch introduces a new ENETC4 PMD driver for NXP's i.MX95
> SoC, enabling basic network operations. Key features include:
>
> - Probe and teardown functions
> - Hardware initialization for both Virtual Functions (VFs)
> and Physical Function (PF)
>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Fix the spelling errors reported by checkpatch in the next version.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD
2024-10-18 7:26 ` [v1 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
@ 2024-10-20 23:40 ` Stephen Hemminger
0 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-10-20 23:40 UTC (permalink / raw)
To: vanshika.shukla; +Cc: dev, Gagandeep Singh, Sachin Saxena, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:34 +0530
vanshika.shukla@nxp.com wrote:
> From: Vanshika Shukla <vanshika.shukla@nxp.com>
>
> Introduces queue setup, release, start, and stop
> APIs for ENETC4 RX and TX queues, enabling:
>
> - Queue configuration and initialization
> - Queue resource management (setup, release)
> - Queue operation control (start, stop)
>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
The author (From line) needs to give Signed-off-by since
Signed-off-by has legal meaning about Copyright and Patent grant.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
2024-10-20 23:39 ` Stephen Hemminger
@ 2024-10-20 23:52 ` Stephen Hemminger
2024-12-02 22:26 ` Stephen Hemminger
` (2 subsequent siblings)
4 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-10-20 23:52 UTC (permalink / raw)
To: vanshika.shukla
Cc: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Anatoly Burakov, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:33 +0530
vanshika.shukla@nxp.com wrote:
> + /* Allocate memory for storing MAC addresses */
> + snprintf(eth_name, sizeof(eth_name), "enetc4_eth_%d", eth_dev->data->port_id);
> + eth_dev->data->mac_addrs = rte_zmalloc(eth_name,
> + RTE_ETHER_ADDR_LEN, 0);
> + if (!eth_dev->data->mac_addrs) {
The first argument of rte_malloc routines is hardly used. It does show up in
trace but that is about all. Ok to keep this as is, but it is wasted effort.
> + if ((high_mac | low_mac) == 0) {
> + char *first_byte;
> +
> + ENETC_PMD_NOTICE("MAC is not available for this SI, "
> + "set random MAC");
> + mac = (uint32_t *)hw->mac.addr;
> + *mac = (uint32_t)rte_rand();
> + first_byte = (char *)mac;
> + *first_byte &= 0xfe; /* clear multicast bit */
> + *first_byte |= 0x02; /* set local assignment bit (IEEE802) */
> +
> + enetc4_port_wr(enetc_hw, ENETC4_PMAR0, *mac);
> + mac++;
> + *mac = (uint16_t)rte_rand();
> + enetc4_port_wr(enetc_hw, ENETC4_PMAR1, *mac);
> + print_ethaddr("New address: ",
> + (const struct rte_ether_addr *)hw->mac.addr);
Please use existing rte_eth_random_addr() for this.
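For illustration, a minimal sketch of that suggestion (fragment only, reusing the patch's hw/enetc_hw variables and helpers, and assuming hw->mac.addr is the 6-byte MAC buffer used above):
	/* rte_eth_random_addr() already clears the multicast bit and sets
	 * the locally-administered bit, so the manual bit-twiddling and
	 * the two rte_rand() calls go away.
	 */
	rte_eth_random_addr(hw->mac.addr);
	enetc4_port_wr(enetc_hw, ENETC4_PMAR0, *(uint32_t *)hw->mac.addr);
	enetc4_port_wr(enetc_hw, ENETC4_PMAR1, *(uint16_t *)(hw->mac.addr + 4));
	print_ethaddr("New address: ",
		      (const struct rte_ether_addr *)hw->mac.addr);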
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 03/12] net/enetc: Optimize ENETC4 data path
2024-10-18 7:26 ` [v1 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
@ 2024-10-21 0:06 ` Stephen Hemminger
0 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-10-21 0:06 UTC (permalink / raw)
To: vanshika.shukla; +Cc: dev, Gagandeep Singh, Sachin Saxena, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:35 +0530
vanshika.shukla@nxp.com wrote:
> + while (likely(rx_frm_cnt < work_limit)) {
> +#ifdef RTE_ARCH_32
> + rte_memcpy(&rxbd_temp, rxbd, 16);
Hardcoding size of the rx buffer descriptor is bad idea.
Why not just use structure assignment?
rxbd_temp = *rxbd;
> +#else
> + __uint128_t *dst128 = (__uint128_t *)&rxbd_temp;
> + const __uint128_t *src128 = (const __uint128_t *)rxbd;
> + *dst128 = *src128;
> +#endif
When I look at godbolt, it already does the right thing with structure
assignment, so you won't need the #ifdef.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 00/12] ENETC4 PMD support
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
` (11 preceding siblings ...)
2024-10-18 7:26 ` [v1 12/12] net/enetc: Add MAC and VLAN filter support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
` (13 more replies)
12 siblings, 14 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This series introduces a new ENETC4 PMD driver for NXP's i.MX95
SoC, enabling basic network operations.
V2 changes:
Handled code comments by the reviewer in:
"net/enetc: Add initial ENETC4 PMD driver support"
"net/enetc: Optimize ENETC4 data path"
Apeksha Gupta (6):
net/enetc: Add initial ENETC4 PMD driver support
net/enetc: Add RX and TX queue APIs for ENETC4 PMD
net/enetc: Optimize ENETC4 data path
net/enetc: Add TX checksum offload and RX checksum validation
net/enetc: Add basic statistics
net/enetc: Add packet type parsing support
Gagandeep Singh (1):
net/enetc: Add support for multiple queues with RSS
Vanshika Shukla (5):
net/enetc: Add VF to PF messaging support and primary MAC setup
net/enetc: Add multicast and promiscuous mode support
net/enetc: Add link speed and status support
net/enetc: Add link status notification support
net/enetc: Add MAC and VLAN filter support
MAINTAINERS | 3 +
config/arm/arm64_imx_linux_gcc | 17 +
config/arm/meson.build | 14 +
doc/guides/nics/enetc4.rst | 99 ++
doc/guides/nics/features/enetc4.ini | 22 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/enetc/base/enetc4_hw.h | 186 ++++
drivers/net/enetc/base/enetc_hw.h | 52 +-
drivers/net/enetc/enetc.h | 246 ++++-
drivers/net/enetc/enetc4_ethdev.c | 1040 ++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 1364 ++++++++++++++++++++++++
drivers/net/enetc/enetc_cbdr.c | 311 ++++++
drivers/net/enetc/enetc_ethdev.c | 5 +-
drivers/net/enetc/enetc_rxtx.c | 165 ++-
drivers/net/enetc/kpage_ncache_api.h | 70 ++
drivers/net/enetc/meson.build | 5 +-
drivers/net/enetc/ntmp.h | 110 ++
18 files changed, 3673 insertions(+), 41 deletions(-)
create mode 100644 config/arm/arm64_imx_linux_gcc
create mode 100644 doc/guides/nics/enetc4.rst
create mode 100644 doc/guides/nics/features/enetc4.ini
create mode 100644 drivers/net/enetc/base/enetc4_hw.h
create mode 100644 drivers/net/enetc/enetc4_ethdev.c
create mode 100644 drivers/net/enetc/enetc4_vf.c
create mode 100644 drivers/net/enetc/enetc_cbdr.c
create mode 100644 drivers/net/enetc/kpage_ncache_api.h
create mode 100644 drivers/net/enetc/ntmp.h
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-11-07 19:39 ` Stephen Hemminger
2024-10-23 6:24 ` [v2 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
` (12 subsequent siblings)
13 siblings, 1 reply; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Vanshika Shukla, Anatoly Burakov
Cc: Apeksha Gupta
From: Apeksha Gupta <apeksha.gupta@nxp.com>
This patch introduces a new ENETC4 PMD driver for NXP's i.MX95
SoC, enabling basic network operations. Key features include:
- Probe and teardown functions
- Hardware initialization for both Virtual Functions (VFs)
and Physical Function (PF)
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
MAINTAINERS | 3 +
config/arm/arm64_imx_linux_gcc | 17 ++
config/arm/meson.build | 14 ++
doc/guides/nics/enetc4.rst | 99 ++++++++
doc/guides/nics/features/enetc4.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/enetc/base/enetc4_hw.h | 111 +++++++++
drivers/net/enetc/base/enetc_hw.h | 3 +-
drivers/net/enetc/enetc.h | 43 ++--
drivers/net/enetc/enetc4_ethdev.c | 298 +++++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 146 ++++++++++++
drivers/net/enetc/enetc_ethdev.c | 5 +-
drivers/net/enetc/kpage_ncache_api.h | 70 ++++++
drivers/net/enetc/meson.build | 4 +-
15 files changed, 805 insertions(+), 22 deletions(-)
create mode 100644 config/arm/arm64_imx_linux_gcc
create mode 100644 doc/guides/nics/enetc4.rst
create mode 100644 doc/guides/nics/features/enetc4.ini
create mode 100644 drivers/net/enetc/base/enetc4_hw.h
create mode 100644 drivers/net/enetc/enetc4_ethdev.c
create mode 100644 drivers/net/enetc/enetc4_vf.c
create mode 100644 drivers/net/enetc/kpage_ncache_api.h
diff --git a/MAINTAINERS b/MAINTAINERS
index f09cda04c8..b45524330c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -949,9 +949,12 @@ F: doc/guides/nics/features/dpaa2.ini
NXP enetc
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+M: Vanshika Shukla <vanshika.shukla@nxp.com>
F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
+F: doc/guides/nics/enetc4.rst
F: doc/guides/nics/features/enetc.ini
+F: doc/guides/nics/features/enetc4.ini
NXP enetfec - EXPERIMENTAL
M: Apeksha Gupta <apeksha.gupta@nxp.com>
diff --git a/config/arm/arm64_imx_linux_gcc b/config/arm/arm64_imx_linux_gcc
new file mode 100644
index 0000000000..c876ae1d2b
--- /dev/null
+++ b/config/arm/arm64_imx_linux_gcc
@@ -0,0 +1,17 @@
+[binaries]
+c = ['ccache', 'aarch64-linux-gnu-gcc']
+cpp = ['ccache', 'aarch64-linux-gnu-g++']
+ar = 'aarch64-linux-gnu-ar'
+as = 'aarch64-linux-gnu-as'
+strip = 'aarch64-linux-gnu-strip'
+pkgconfig = 'aarch64-linux-gnu-pkg-config'
+pcap-config = ''
+
+[host_machine]
+system = 'linux'
+cpu_family = 'aarch64'
+cpu = 'armv8.2-a'
+endian = 'little'
+
+[properties]
+platform = 'imx'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 55be7c8711..6112244f2c 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -561,6 +561,18 @@ soc_hip10 = {
'numa': true
}
+soc_imx = {
+ 'description': 'NXP IMX',
+ 'implementer': '0x41',
+ 'part_number': '0xd05',
+ 'flags': [
+ ['RTE_MACHINE', '"armv8a"'],
+ ['RTE_MAX_LCORE', 6],
+ ['RTE_MAX_NUMA_NODES', 1],
+ ],
+ 'numa': false,
+}
+
soc_kunpeng920 = {
'description': 'HiSilicon Kunpeng 920',
'implementer': '0x48',
@@ -684,6 +696,7 @@ graviton2: AWS Graviton2
graviton3: AWS Graviton3
graviton4: AWS Graviton4
hip10: HiSilicon HIP10
+imx: NXP IMX
kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
@@ -722,6 +735,7 @@ socs = {
'graviton3': soc_graviton3,
'graviton4': soc_graviton4,
'hip10': soc_hip10,
+ 'imx': soc_imx,
'kunpeng920': soc_kunpeng920,
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
diff --git a/doc/guides/nics/enetc4.rst b/doc/guides/nics/enetc4.rst
new file mode 100644
index 0000000000..8ffdc53376
--- /dev/null
+++ b/doc/guides/nics/enetc4.rst
@@ -0,0 +1,99 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2024 NXP
+
+ENETC4 Poll Mode Driver
+=======================
+
+The ENETC4 NIC PMD (**librte_net_enetc**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX95** SoC.
+
+More information can be found at `NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors/i-mx-95-applications-processor-family-high-performance-safety-enabled-platform-with-eiq-neutron-npu:iMX95>`_.
+
+This section provides an overview of the NXP ENETC4
+and how it is integrated into the DPDK.
+
+Contents summary
+
+- ENETC4 overview
+- Supported ENETC4 SoCs
+- PCI bus driver
+- NIC driver
+- Prerequisites
+- Driver compilation and testing
+
+ENETC4 Overview
+---------------
+
+ENETC4 is a PCI Integrated End Point (IEP). An IEP implements
+peripheral devices in an SoC such that software sees them as PCIe devices.
+ENETC4 is an evolution of the BDR (Buffer Descriptor Ring) based networking
+IPs.
+
+This infrastructure simplifies adding support for an IEP and facilitates the following:
+
+- Device discovery and location
+- Resource requirement discovery and allocation (e.g. interrupt assignment,
+ device register address)
+- Event reporting
+
+Supported ENETC4 SoCs
+---------------------
+
+- i.MX95
+
+NIC Driver (PMD)
+----------------
+
+The ENETC4 PMD is a traditional DPDK PMD that bridges the RTE framework and
+ENETC4 internal drivers, supporting both Virtual Functions (VFs) and
+Physical Functions (PF). Key functionality includes:
+
+- Driver registration: The device vendor table is registered in the PCI subsystem.
+- Device discovery: The RTE framework scans the PCI bus for connected devices, triggering the ENETC4 driver's probe function.
+- Initialization: The probe function configures basic device registers and sets up Buffer Descriptor (BD) rings.
+- Receive processing: Upon packet reception, the BD Ring status bit is set, facilitating packet processing.
+- Transmission: Packet transmission precedes reception, ensuring efficient data transfer.
+
+Prerequisites
+-------------
+
+The main prerequisites for executing the ENETC4 PMD on ENETC4
+compatible boards are:
+
+#. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* ARM Toolchain <https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz>`_.
+
+#. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://github.com/nxp-imx/linux-imx>`_.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux LF**
+
+ NXP Linux Factory (LF) includes support for family
+ of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+ and corresponding boards.
+
+ It includes the Linux board support packages (BSPs) for NXP SoCs,
+ a fully operational tool chain, kernel and board specific modules.
+
+ i.MX LF release and related information can be obtained from: `LF <https://www.nxp.com/design/design-center/software/embedded-software/i-mx-software/embedded-linux-for-i-mx-applications-processors:IMXLINUX>`_
+ Refer section: Linux Current Release.
+
+- **kpage_ncache Kernel Module**
+
+ The i.MX95 platform is an IO non-cache-coherent platform, and the driver depends on
+ the kpage_ncache.ko kernel module to mark hugepage memory as non-cacheable.
+
+ The module can be obtained from: `kpage_ncache <https://github.com/nxp-qoriq/dpdk-extras/tree/main/linux/kpage_ncache>`_
+
+Driver compilation and testing
+------------------------------
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **testpmd**.
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
new file mode 100644
index 0000000000..ca3b9ae992
--- /dev/null
+++ b/doc/guides/nics/features/enetc4.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetc4' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..da2af04777 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -27,6 +27,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetc4
enetfec
enic
fail_safe
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index d2301461ce..0f11dcbd8d 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -139,6 +139,10 @@ New Features
* Added SR-IOV VF support.
* Added recent 1400/14000 and 15000 models to the supported list.
+* **Added ENETC4 PMD**
+
+ * Added ENETC4 PMD for NXP i.MX95 platform.
+
* **Updated Marvell cnxk net driver.**
* Added ethdev driver support for CN20K SoC.
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
new file mode 100644
index 0000000000..34a4ca3b02
--- /dev/null
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -0,0 +1,111 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ *
+ * This header file defines the register offsets and bit fields
+ * of ENETC4 PF and VFs.
+ */
+
+#ifndef _ENETC4_HW_H_
+#define _ENETC4_HW_H_
+#include <rte_io.h>
+
+/* ENETC4 device IDs */
+#define ENETC4_DEV_ID 0xe101
+#define ENETC4_DEV_ID_VF 0xef00
+#define PCI_VENDOR_ID_NXP 0x1131
+
+/***************************ENETC port registers**************************/
+#define ENETC4_PMR 0x10
+#define ENETC4_PMR_EN (BIT(16) | BIT(17) | BIT(18))
+
+/* Port Station interface promiscuous MAC mode register */
+#define ENETC4_PSIPMMR 0x200
+#define PSIPMMR_SI0_MAC_UP BIT(0)
+#define PSIPMMR_SI_MAC_UP (BIT(0) | BIT(1) | BIT(2))
+#define PSIPMMR_SI0_MAC_MP BIT(16)
+#define PSIPMMR_SI_MAC_MP (BIT(16) | BIT(17) | BIT(18))
+
+/* Port Station interface a primary MAC address registers */
+#define ENETC4_PSIPMAR0(a) ((a) * 0x80 + 0x2000)
+#define ENETC4_PSIPMAR1(a) ((a) * 0x80 + 0x2004)
+
+/* Port MAC address register 0/1 */
+#define ENETC4_PMAR0 0x4020
+#define ENETC4_PMAR1 0x4024
+
+/* Port operational register */
+#define ENETC4_POR 0x4100
+
+/* Port traffic class a transmit maximum SDU register */
+#define ENETC4_PTCTMSDUR(a) ((a) * 0x20 + 0x4208)
+#define SDU_TYPE_MPDU BIT(16)
+
+#define ENETC4_PM_CMD_CFG(mac) (0x5008 + (mac) * 0x400)
+#define PM_CMD_CFG_TX_EN BIT(0)
+#define PM_CMD_CFG_RX_EN BIT(1)
+
+/* i.MX95 supports jumbo frame, but it is recommended to set the max frame
+ * size to 2000 bytes.
+ */
+#define ENETC4_MAC_MAXFRM_SIZE 2000
+
+/* Port MAC 0/1 Maximum Frame Length Register */
+#define ENETC4_PM_MAXFRM(mac) (0x5014 + (mac) * 0x400)
+
+/* Config register to reset counters */
+#define ENETC4_PM0_STAT_CONFIG 0x50e0
+/* Stats Reset Bit */
+#define ENETC4_CLEAR_STATS BIT(2)
+
+/* Port MAC 0/1 Receive Ethernet Octets Counter */
+#define ENETC4_PM_REOCT(mac) (0x5100 + (mac) * 0x400)
+
+/* Port MAC 0/1 Receive Frame Error Counter */
+#define ENETC4_PM_RERR(mac) (0x5138 + (mac) * 0x400)
+
+/* Port MAC 0/1 Receive Dropped Packets Counter */
+#define ENETC4_PM_RDRP(mac) (0x5158 + (mac) * 0x400)
+
+/* Port MAC 0/1 Receive Packets Counter */
+#define ENETC4_PM_RPKT(mac) (0x5160 + (mac) * 0x400)
+
+/* Port MAC 0/1 Transmit Frame Error Counter */
+#define ENETC4_PM_TERR(mac) (0x5238 + (mac) * 0x400)
+
+/* Port MAC 0/1 Transmit Ethernet Octets Counter */
+#define ENETC4_PM_TEOCT(mac) (0x5200 + (mac) * 0x400)
+
+/* Port MAC 0/1 Transmit Packets Counter */
+#define ENETC4_PM_TPKT(mac) (0x5260 + (mac) * 0x400)
+
+/* Port MAC 0 Interface Mode Control Register */
+#define ENETC4_PM_IF_MODE(mac) (0x5300 + (mac) * 0x400)
+#define PM_IF_MODE_IFMODE (BIT(0) | BIT(1) | BIT(2))
+#define IFMODE_XGMII 0
+#define IFMODE_RMII 3
+#define IFMODE_RGMII 4
+#define IFMODE_SGMII 5
+#define PM_IF_MODE_ENA BIT(15)
+
+/* general register accessors */
+#define enetc4_rd_reg(reg) rte_read32((void *)(reg))
+#define enetc4_wr_reg(reg, val) rte_write32((val), (void *)(reg))
+
+#define enetc4_rd(hw, off) enetc4_rd_reg((size_t)(hw)->reg + (off))
+#define enetc4_wr(hw, off, val) enetc4_wr_reg((size_t)(hw)->reg + (off), val)
+/* port register accessors - PF only */
+#define enetc4_port_rd(hw, off) enetc4_rd_reg((size_t)(hw)->port + (off))
+#define enetc4_port_wr(hw, off, val) \
+ enetc4_wr_reg((size_t)(hw)->port + (off), val)
+/* BDR register accessors, see ENETC_BDR() */
+#define enetc4_bdr_rd(hw, t, n, off) \
+ enetc4_rd(hw, ENETC_BDR(t, n, off))
+#define enetc4_bdr_wr(hw, t, n, off, val) \
+ enetc4_wr(hw, ENETC_BDR(t, n, off), val)
+#define enetc4_txbdr_rd(hw, n, off) enetc4_bdr_rd(hw, TX, n, off)
+#define enetc4_rxbdr_rd(hw, n, off) enetc4_bdr_rd(hw, RX, n, off)
+#define enetc4_txbdr_wr(hw, n, off, val) \
+ enetc4_bdr_wr(hw, TX, n, off, val)
+#define enetc4_rxbdr_wr(hw, n, off, val) \
+ enetc4_bdr_wr(hw, RX, n, off, val)
+#endif
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 66fad58e5e..2d63c54db6 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -1,10 +1,11 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2024 NXP
*/
#ifndef _ENETC_HW_H_
#define _ENETC_HW_H_
#include <rte_io.h>
+#include <ethdev_pci.h>
#define BIT(x) ((uint64_t)1 << ((x)))
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 7163633bce..87fc51b776 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -1,18 +1,21 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2019,2024 NXP
*/
#ifndef _ENETC_H_
#define _ENETC_H_
#include <rte_time.h>
+#include <ethdev_pci.h>
+#include "compat.h"
#include "base/enetc_hw.h"
+#include "enetc_logs.h"
#define PCI_VENDOR_ID_FREESCALE 0x1957
/* Max TX rings per ENETC. */
-#define MAX_TX_RINGS 2
+#define MAX_TX_RINGS 1
/* Max RX rings per ENTEC. */
#define MAX_RX_RINGS 1
@@ -33,21 +36,11 @@
#define ENETC_ETH_MAX_LEN (RTE_ETHER_MTU + \
RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
-/*
- * upper_32_bits - return bits 32-63 of a number
- * @n: the number we're accessing
- *
- * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress
- * the "right shift count >= width of type" warning when that quantity is
- * 32-bits.
- */
-#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+/* eth name size */
+#define ENETC_ETH_NAMESIZE 20
-/*
- * lower_32_bits - return bits 0-31 of a number
- * @n: the number we're accessing
- */
-#define lower_32_bits(n) ((uint32_t)(n))
+/* size for marking hugepage non-cacheable */
+#define SIZE_2MB 0x200000
#define ENETC_TXBD(BDR, i) (&(((struct enetc_tx_bd *)((BDR).bd_base))[i]))
#define ENETC_RXBD(BDR, i) (&(((union enetc_rx_bd *)((BDR).bd_base))[i]))
@@ -74,6 +67,7 @@ struct enetc_bdr {
};
struct rte_mempool *mb_pool; /* mbuf pool to populate RX ring. */
struct rte_eth_dev *ndev;
+ const struct rte_memzone *mz;
};
/*
@@ -96,6 +90,20 @@ struct enetc_eth_adapter {
#define ENETC_DEV_PRIVATE_TO_INTR(adapter) \
(&((struct enetc_eth_adapter *)adapter)->intr)
+/*
+ * ENETC4 function prototypes
+ */
+int enetc4_pci_remove(struct rte_pci_device *pci_dev);
+int enetc4_dev_configure(struct rte_eth_dev *dev);
+int enetc4_dev_close(struct rte_eth_dev *dev);
+int enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+ struct rte_eth_dev_info *dev_info);
+
+/*
+ * enetc4_vf function prototype
+ */
+int enetc4_vf_dev_stop(struct rte_eth_dev *dev);
+
/*
* RX/TX ENETC function prototypes
*/
@@ -104,8 +112,9 @@ uint16_t enetc_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts,
uint16_t enetc_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-
int enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt);
+void enetc4_dev_hw_init(struct rte_eth_dev *eth_dev);
+void print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr);
static inline int
enetc_bd_unused(struct enetc_bdr *bdr)
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
new file mode 100644
index 0000000000..3b853fe93a
--- /dev/null
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -0,0 +1,298 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#include <stdbool.h>
+#include <rte_random.h>
+#include <dpaax_iova_table.h>
+
+#include "kpage_ncache_api.h"
+#include "base/enetc4_hw.h"
+#include "enetc_logs.h"
+#include "enetc.h"
+
+static int
+enetc4_dev_start(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t val;
+
+ PMD_INIT_FUNC_TRACE();
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(0));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(0),
+ val | PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN);
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(1));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(1),
+ val | PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN);
+
+ /* Enable port */
+ val = enetc4_port_rd(enetc_hw, ENETC4_PMR);
+ enetc4_port_wr(enetc_hw, ENETC4_PMR, val | ENETC4_PMR_EN);
+
+ /* Enable port transmit/receive */
+ enetc4_port_wr(enetc_hw, ENETC4_POR, 0);
+
+ return 0;
+}
+
+static int
+enetc4_dev_stop(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t val;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Disable port */
+ val = enetc4_port_rd(enetc_hw, ENETC4_PMR);
+ enetc4_port_wr(enetc_hw, ENETC4_PMR, val & (~ENETC4_PMR_EN));
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(0));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(0),
+ val & (~(PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN)));
+
+ val = enetc4_port_rd(enetc_hw, ENETC4_PM_CMD_CFG(1));
+ enetc4_port_wr(enetc_hw, ENETC4_PM_CMD_CFG(1),
+ val & (~(PM_CMD_CFG_TX_EN | PM_CMD_CFG_RX_EN)));
+
+ return 0;
+}
+
+static int
+enetc4_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
+{
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t high_mac = 0;
+ uint16_t low_mac = 0;
+ char eth_name[ENETC_ETH_NAMESIZE];
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Enabling Station Interface */
+ enetc4_wr(enetc_hw, ENETC_SIMR, ENETC_SIMR_EN);
+
+ high_mac = (uint32_t)enetc4_port_rd(enetc_hw, ENETC4_PSIPMAR0(0));
+ low_mac = (uint16_t)enetc4_port_rd(enetc_hw, ENETC4_PSIPMAR1(0));
+
+ if ((high_mac | low_mac) == 0) {
+ ENETC_PMD_NOTICE("MAC is not available for this SI, "
+ "set random MAC");
+ rte_eth_random_addr(hw->mac.addr);
+ high_mac = *(uint32_t *)hw->mac.addr;
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR0, high_mac);
+ low_mac = *(uint16_t *)(hw->mac.addr + 4);
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR1, low_mac);
+ print_ethaddr("New address: ",
+ (const struct rte_ether_addr *)hw->mac.addr);
+ }
+
+ /* Allocate memory for storing MAC addresses */
+ snprintf(eth_name, sizeof(eth_name), "enetc4_eth_%d", eth_dev->data->port_id);
+ eth_dev->data->mac_addrs = rte_zmalloc(eth_name,
+ RTE_ETHER_ADDR_LEN, 0);
+ if (!eth_dev->data->mac_addrs) {
+ ENETC_PMD_ERR("Failed to allocate %d bytes needed to "
+ "store MAC addresses",
+ RTE_ETHER_ADDR_LEN * 1);
+ return -ENOMEM;
+ }
+
+ /* Copy the permanent MAC address */
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
+ ð_dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+int
+enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+ struct rte_eth_dev_info *dev_info)
+{
+ PMD_INIT_FUNC_TRACE();
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = MAX_BD_COUNT,
+ .nb_min = MIN_BD_COUNT,
+ .nb_align = BD_ALIGN,
+ };
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = MAX_BD_COUNT,
+ .nb_min = MIN_BD_COUNT,
+ .nb_align = BD_ALIGN,
+ };
+ dev_info->max_rx_queues = MAX_RX_RINGS;
+ dev_info->max_tx_queues = MAX_TX_RINGS;
+ dev_info->max_rx_pktlen = ENETC4_MAC_MAXFRM_SIZE;
+
+ return 0;
+}
+
+int
+enetc4_dev_close(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ if (hw->device_id == ENETC4_DEV_ID_VF)
+ ret = enetc4_vf_dev_stop(dev);
+ else
+ ret = enetc4_dev_stop(dev);
+
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ dpaax_iova_table_depopulate();
+
+ return ret;
+}
+
+int
+enetc4_dev_configure(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t max_len;
+ uint32_t val;
+
+ PMD_INIT_FUNC_TRACE();
+
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc4_port_wr(enetc_hw, ENETC4_PM_MAXFRM(0), ENETC_SET_MAXFRM(max_len));
+
+ val = ENETC4_MAC_MAXFRM_SIZE | SDU_TYPE_MPDU;
+ enetc4_port_wr(enetc_hw, ENETC4_PTCTMSDUR(0), val | SDU_TYPE_MPDU);
+
+ return 0;
+}
+
+
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_enetc4_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_NXP, ENETC4_DEV_ID) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+/* Features supported by this driver */
+static const struct eth_dev_ops enetc4_ops = {
+ .dev_configure = enetc4_dev_configure,
+ .dev_start = enetc4_dev_start,
+ .dev_stop = enetc4_dev_stop,
+ .dev_close = enetc4_dev_close,
+ .dev_infos_get = enetc4_dev_infos_get,
+};
+
+/*
+ * Storing the HW base addresses
+ *
+ * @param eth_dev
+ * - Pointer to the structure rte_eth_dev
+ */
+void
+enetc4_dev_hw_init(struct rte_eth_dev *eth_dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ eth_dev->rx_pkt_burst = &enetc_recv_pkts;
+ eth_dev->tx_pkt_burst = &enetc_xmit_pkts;
+
+ /* Retrieving and storing the HW base address of device */
+ hw->hw.reg = (void *)pci_dev->mem_resource[0].addr;
+ hw->device_id = pci_dev->id.device_id;
+
+ /* Calculating and storing the base HW addresses */
+ hw->hw.port = (void *)((size_t)hw->hw.reg + ENETC_PORT_BASE);
+ hw->hw.global = (void *)((size_t)hw->hw.reg + ENETC_GLOBAL_BASE);
+}
+
+/**
+ * Initialisation of the enetc4 device
+ *
+ * @param eth_dev
+ * - Pointer to the structure rte_eth_dev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, negative value.
+ */
+
+static int
+enetc4_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int error = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ eth_dev->dev_ops = &enetc4_ops;
+ enetc4_dev_hw_init(eth_dev);
+
+ error = enetc4_mac_init(hw, eth_dev);
+ if (error != 0) {
+ ENETC_PMD_ERR("MAC initialization failed");
+ return -1;
+ }
+
+ /* Set MTU */
+ enetc_port_wr(&hw->hw, ENETC4_PM_MAXFRM(0),
+ ENETC_SET_MAXFRM(RTE_ETHER_MAX_LEN));
+ eth_dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ dpaax_iova_table_populate();
+
+ ENETC_PMD_DEBUG("port_id %d vendorID=0x%x deviceID=0x%x",
+ eth_dev->data->port_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
+ return 0;
+}
+
+static int
+enetc4_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ return enetc4_dev_close(eth_dev);
+}
+
+static int
+enetc4_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct enetc_eth_adapter),
+ enetc4_dev_init);
+}
+
+int
+enetc4_pci_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_remove(pci_dev, enetc4_dev_uninit);
+}
+
+static struct rte_pci_driver rte_enetc4_pmd = {
+ .id_table = pci_id_enetc4_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = enetc4_pci_probe,
+ .remove = enetc4_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_enetc4, rte_enetc4_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_enetc4, pci_id_enetc4_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_enetc4, "* vfio-pci");
+RTE_LOG_REGISTER_DEFAULT(enetc4_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
new file mode 100644
index 0000000000..7996d6decb
--- /dev/null
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#include <stdbool.h>
+#include <rte_random.h>
+#include <dpaax_iova_table.h>
+#include "base/enetc4_hw.h"
+#include "base/enetc_hw.h"
+#include "enetc_logs.h"
+#include "enetc.h"
+
+int
+enetc4_vf_dev_stop(struct rte_eth_dev *dev __rte_unused)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ return 0;
+}
+
+static int
+enetc4_vf_dev_start(struct rte_eth_dev *dev __rte_unused)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ return 0;
+}
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_vf_id_enetc4_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_NXP, ENETC4_DEV_ID_VF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+/* Features supported by this driver */
+static const struct eth_dev_ops enetc4_vf_ops = {
+ .dev_configure = enetc4_dev_configure,
+ .dev_start = enetc4_vf_dev_start,
+ .dev_stop = enetc4_vf_dev_stop,
+ .dev_close = enetc4_dev_close,
+ .dev_infos_get = enetc4_dev_infos_get,
+};
+
+static int
+enetc4_vf_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
+{
+ uint32_t *mac = (uint32_t *)hw->mac.addr;
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t high_mac = 0;
+ uint16_t low_mac = 0;
+ char vf_eth_name[ENETC_ETH_NAMESIZE];
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Enabling Station Interface */
+ enetc4_wr(enetc_hw, ENETC_SIMR, ENETC_SIMR_EN);
+ *mac = (uint32_t)enetc_rd(enetc_hw, ENETC_SIPMAR0);
+ high_mac = (uint32_t)*mac;
+ mac++;
+ *mac = (uint16_t)enetc_rd(enetc_hw, ENETC_SIPMAR1);
+ low_mac = (uint16_t)*mac;
+
+ if ((high_mac | low_mac) == 0) {
+ char *first_byte;
+ ENETC_PMD_NOTICE("MAC is not available for this SI, "
+ "set random MAC");
+ mac = (uint32_t *)hw->mac.addr;
+ *mac = (uint32_t)rte_rand();
+ first_byte = (char *)mac;
+ *first_byte &= 0xfe; /* clear multicast bit */
+ *first_byte |= 0x02; /* set local assignment bit (IEEE802) */
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR0, *mac);
+ mac++;
+ *mac = (uint16_t)rte_rand();
+ enetc4_port_wr(enetc_hw, ENETC4_PMAR1, *mac);
+ print_ethaddr("New address: ",
+ (const struct rte_ether_addr *)hw->mac.addr);
+ }
+
+ /* Allocate memory for storing MAC addresses */
+ snprintf(vf_eth_name, sizeof(vf_eth_name), "enetc4_vf_eth_%d", eth_dev->data->port_id);
+ eth_dev->data->mac_addrs = rte_zmalloc(vf_eth_name,
+ RTE_ETHER_ADDR_LEN, 0);
+ if (!eth_dev->data->mac_addrs) {
+ ENETC_PMD_ERR("Failed to allocate %d bytes needed to "
+ "store MAC addresses",
+ RTE_ETHER_ADDR_LEN * 1);
+ return -ENOMEM;
+ }
+
+ /* Copy the permanent MAC address */
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
+ ð_dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int error = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ eth_dev->dev_ops = &enetc4_vf_ops;
+ enetc4_dev_hw_init(eth_dev);
+
+ error = enetc4_vf_mac_init(hw, eth_dev);
+ if (error != 0) {
+ ENETC_PMD_ERR("MAC initialization failed!!");
+ return -1;
+ }
+
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ dpaax_iova_table_populate();
+
+ ENETC_PMD_DEBUG("port_id %d vendorID=0x%x deviceID=0x%x",
+ eth_dev->data->port_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
+ return 0;
+}
+
+static int
+enetc4_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct enetc_eth_adapter),
+ enetc4_vf_dev_init);
+}
+
+static struct rte_pci_driver rte_enetc4_vf_pmd = {
+ .id_table = pci_vf_id_enetc4_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = enetc4_vf_pci_probe,
+ .remove = enetc4_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_enetc4_vf, rte_enetc4_vf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_enetc4_vf, pci_vf_id_enetc4_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_enetc4_vf, "* uio_pci_generic");
+RTE_LOG_REGISTER_DEFAULT(enetc4_vf_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index ffbecc407c..d7cba1ba83 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -1,9 +1,8 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2024 NXP
*/
#include <stdbool.h>
-#include <ethdev_pci.h>
#include <rte_random.h>
#include <dpaax_iova_table.h>
@@ -145,7 +144,7 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
return rte_eth_linkstatus_set(dev, &link);
}
-static void
+void
print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
{
char buf[RTE_ETHER_ADDR_FMT_SIZE];
diff --git a/drivers/net/enetc/kpage_ncache_api.h b/drivers/net/enetc/kpage_ncache_api.h
new file mode 100644
index 0000000000..01291b1d7f
--- /dev/null
+++ b/drivers/net/enetc/kpage_ncache_api.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ *
+ * Copyright 2022-2024 NXP
+ *
+ */
+
+#ifndef KPG_NC_MODULE_H
+#define KPG_NC_MODULE_H
+
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+
+#include <rte_log.h>
+
+#include "enetc_logs.h"
+
+#define KPG_NC_DEVICE_NAME "page_ncache"
+#define KPG_NC_DEVICE_PATH "/dev/" KPG_NC_DEVICE_NAME
+
+/* IOCTL */
+#define KPG_NC_MAGIC_NUM 0xf0f0
+#define KPG_NC_IOCTL_UPDATE _IOWR(KPG_NC_MAGIC_NUM, 1, size_t)
+
+
+#define KNRM "\x1B[0m"
+#define KRED "\x1B[31m"
+#define KGRN "\x1B[32m"
+#define KYEL "\x1B[33m"
+#define KBLU "\x1B[34m"
+#define KMAG "\x1B[35m"
+#define KCYN "\x1B[36m"
+#define KWHT "\x1B[37m"
+
+#if defined(RTE_ARCH_ARM) && defined(RTE_ARCH_64)
+static inline void flush_tlb(void *p)
+{
+ asm volatile("dc civac, %0" ::"r"(p));
+ asm volatile("dsb ish");
+ asm volatile("isb");
+}
+#endif
+
+static inline void mark_kpage_ncache(uint64_t huge_page)
+{
+ int fd, ret;
+
+ fd = open(KPG_NC_DEVICE_PATH, O_RDONLY);
+ if (fd < 0) {
+ ENETC_PMD_ERR(KYEL "Error: " KNRM "Could not open: %s",
+ KPG_NC_DEVICE_PATH);
+ return;
+ }
+ ENETC_PMD_DEBUG(KCYN "%s: Huge_Page addr =" KNRM " 0x%" PRIX64,
+ __func__, huge_page);
+ ret = ioctl(fd, KPG_NC_IOCTL_UPDATE, (size_t)&huge_page);
+ if (ret) {
+ ENETC_PMD_ERR(KYEL "Error(%d): " KNRM "non-cachable set",
+ ret);
+ close(fd);
+ return;
+ }
+#if defined(RTE_ARCH_ARM) && defined(RTE_ARCH_64)
+ flush_tlb((void *)huge_page);
+#endif
+ ENETC_PMD_DEBUG(KYEL "Page should be non-cachable now" KNRM);
+
+ close(fd);
+}
+#endif /* KPG_NC_MODULE_H */
diff --git a/drivers/net/enetc/meson.build b/drivers/net/enetc/meson.build
index 966dc694fc..6e00758a36 100644
--- a/drivers/net/enetc/meson.build
+++ b/drivers/net/enetc/meson.build
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018,2024 NXP
if not is_linux
build = false
@@ -8,6 +8,8 @@ endif
deps += ['common_dpaax']
sources = files(
+ 'enetc4_ethdev.c',
+ 'enetc4_vf.c',
'enetc_ethdev.c',
'enetc_rxtx.c',
)
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
2024-10-23 6:24 ` [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
` (11 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Apeksha Gupta <apeksha.gupta@nxp.com>
Introduces queue setup, release, start, and stop
APIs for ENETC4 RX and TX queues, enabling:
- Queue configuration and initialization
- Queue resource management (setup, release)
- Queue operation control (start, stop)
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/enetc.h | 13 +
drivers/net/enetc/enetc4_ethdev.c | 434 ++++++++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 8 +
drivers/net/enetc/enetc_rxtx.c | 32 +-
5 files changed, 485 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index ca3b9ae992..37b548dcab 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Queue start/stop = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 87fc51b776..9901e434d9 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -98,6 +98,19 @@ int enetc4_dev_configure(struct rte_eth_dev *dev);
int enetc4_dev_close(struct rte_eth_dev *dev);
int enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
struct rte_eth_dev_info *dev_info);
+int enetc4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+ uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool);
+int enetc4_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+int enetc4_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+void enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+int enetc4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf);
+int enetc4_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
+int enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
+void enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
/*
* enetc4_vf function prototype
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 3b853fe93a..6c077c6071 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -132,10 +132,338 @@ enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
return 0;
}
+static int
+mark_memory_ncache(struct enetc_bdr *bdr, const char *mz_name, unsigned int size)
+{
+ uint64_t huge_page;
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_reserve_aligned(mz_name,
+ size, SOCKET_ID_ANY,
+ RTE_MEMZONE_2MB, size);
+ if (mz) {
+ bdr->bd_base = mz->addr;
+ } else {
+ ENETC_PMD_ERR("Failed to allocate memzone!!,"
+ " please reserve 2MB size pages");
+ return -ENOMEM;
+ }
+ if (mz->hugepage_sz != size)
+ ENETC_PMD_WARN("Hugepage size of queue memzone %" PRIx64,
+ mz->hugepage_sz);
+ bdr->mz = mz;
+
+ /* Mark memory NON-CACHEABLE */
+ huge_page =
+ (uint64_t)RTE_PTR_ALIGN_FLOOR(bdr->bd_base, size);
+ mark_kpage_ncache(huge_page);
+
+ return 0;
+}
+
+static int
+enetc4_alloc_txbdr(uint16_t port_id, struct enetc_bdr *txr, uint16_t nb_desc)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ int size;
+
+ size = nb_desc * sizeof(struct enetc_swbd);
+ txr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
+ if (txr->q_swbd == NULL)
+ return -ENOMEM;
+
+ snprintf(mz_name, sizeof(mz_name), "bdt_addr_%d", port_id);
+ if (mark_memory_ncache(txr, mz_name, SIZE_2MB)) {
+ ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
+ rte_free(txr->q_swbd);
+ txr->q_swbd = NULL;
+ return -ENOMEM;
+ }
+ txr->bd_count = nb_desc;
+ txr->next_to_clean = 0;
+ txr->next_to_use = 0;
+
+ return 0;
+}
+
+static void
+enetc4_free_bdr(struct enetc_bdr *rxr)
+{
+ rte_memzone_free(rxr->mz);
+ rxr->mz = NULL;
+ rte_free(rxr->q_swbd);
+ rxr->q_swbd = NULL;
+ rxr->bd_base = NULL;
+}
+
+static void
+enetc4_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
+{
+ int idx = tx_ring->index;
+ phys_addr_t bd_address;
+
+ bd_address = (phys_addr_t)
+ rte_mem_virt2iova((const void *)tx_ring->bd_base);
+ enetc4_txbdr_wr(hw, idx, ENETC_TBBAR0,
+ lower_32_bits((uint64_t)bd_address));
+ enetc4_txbdr_wr(hw, idx, ENETC_TBBAR1,
+ upper_32_bits((uint64_t)bd_address));
+ enetc4_txbdr_wr(hw, idx, ENETC_TBLENR,
+ ENETC_RTBLENR_LEN(tx_ring->bd_count));
+
+ enetc4_txbdr_wr(hw, idx, ENETC_TBCIR, 0);
+ enetc4_txbdr_wr(hw, idx, ENETC_TBCISR, 0);
+ tx_ring->tcir = (void *)((size_t)hw->reg +
+ ENETC_BDR(TX, idx, ENETC_TBCIR));
+ tx_ring->tcisr = (void *)((size_t)hw->reg +
+ ENETC_BDR(TX, idx, ENETC_TBCISR));
+}
+
+int
+enetc4_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int err = 0;
+ struct enetc_bdr *tx_ring;
+ struct rte_eth_dev_data *data = dev->data;
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(data->dev_private);
+
+ PMD_INIT_FUNC_TRACE();
+ if (nb_desc > MAX_BD_COUNT)
+ return -1;
+
+ tx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
+ if (tx_ring == NULL) {
+ ENETC_PMD_ERR("Failed to allocate TX ring memory");
+ err = -ENOMEM;
+ return -1;
+ }
+
+ tx_ring->index = queue_idx;
+ err = enetc4_alloc_txbdr(data->port_id, tx_ring, nb_desc);
+ if (err)
+ goto fail;
+
+ tx_ring->ndev = dev;
+ enetc4_setup_txbdr(&priv->hw.hw, tx_ring);
+ data->tx_queues[queue_idx] = tx_ring;
+ if (!tx_conf->tx_deferred_start) {
+ /* enable ring */
+ enetc4_txbdr_wr(&priv->hw.hw, tx_ring->index,
+ ENETC_TBMR, ENETC_TBMR_EN);
+ dev->data->tx_queue_state[tx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ } else {
+ dev->data->tx_queue_state[tx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+fail:
+ rte_free(tx_ring);
+
+ return err;
+}
+
+void
+enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void *txq = dev->data->tx_queues[qid];
+
+ if (txq == NULL)
+ return;
+
+ struct enetc_bdr *tx_ring = (struct enetc_bdr *)txq;
+ struct enetc_eth_hw *eth_hw =
+ ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
+ struct enetc_hw *hw;
+ struct enetc_swbd *tx_swbd;
+ int i;
+ uint32_t val;
+
+ /* Disable the ring */
+ hw = ð_hw->hw;
+ val = enetc4_txbdr_rd(hw, tx_ring->index, ENETC_TBMR);
+ val &= (~ENETC_TBMR_EN);
+ enetc4_txbdr_wr(hw, tx_ring->index, ENETC_TBMR, val);
+
+ /* clean the ring*/
+ i = tx_ring->next_to_clean;
+ tx_swbd = &tx_ring->q_swbd[i];
+ while (tx_swbd->buffer_addr != NULL) {
+ rte_pktmbuf_free(tx_swbd->buffer_addr);
+ tx_swbd->buffer_addr = NULL;
+ tx_swbd++;
+ i++;
+ if (unlikely(i == tx_ring->bd_count)) {
+ i = 0;
+ tx_swbd = &tx_ring->q_swbd[i];
+ }
+ }
+
+ enetc4_free_bdr(tx_ring);
+ rte_free(tx_ring);
+}
+
+static int
+enetc4_alloc_rxbdr(uint16_t port_id, struct enetc_bdr *rxr,
+ uint16_t nb_desc)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ int size;
+
+ size = nb_desc * sizeof(struct enetc_swbd);
+ rxr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
+ if (rxr->q_swbd == NULL)
+ return -ENOMEM;
+
+ snprintf(mz_name, sizeof(mz_name), "bdr_addr_%d", port_id);
+ if (mark_memory_ncache(rxr, mz_name, SIZE_2MB)) {
+ ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
+ rte_free(rxr->q_swbd);
+ rxr->q_swbd = NULL;
+ return -ENOMEM;
+ }
+ rxr->bd_count = nb_desc;
+ rxr->next_to_clean = 0;
+ rxr->next_to_use = 0;
+ rxr->next_to_alloc = 0;
+
+ return 0;
+}
+
+static void
+enetc4_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring,
+ struct rte_mempool *mb_pool)
+{
+ int idx = rx_ring->index;
+ uint16_t buf_size;
+ phys_addr_t bd_address;
+
+ bd_address = (phys_addr_t)
+ rte_mem_virt2iova((const void *)rx_ring->bd_base);
+
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBBAR0,
+ lower_32_bits((uint64_t)bd_address));
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBBAR1,
+ upper_32_bits((uint64_t)bd_address));
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBLENR,
+ ENETC_RTBLENR_LEN(rx_ring->bd_count));
+
+ rx_ring->mb_pool = mb_pool;
+ rx_ring->rcir = (void *)((size_t)hw->reg +
+ ENETC_BDR(RX, idx, ENETC_RBCIR));
+ enetc_refill_rx_ring(rx_ring, (enetc_bd_unused(rx_ring)));
+ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rx_ring->mb_pool) -
+ RTE_PKTMBUF_HEADROOM);
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBBSR, buf_size);
+ enetc4_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
+}
+
+int
+enetc4_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ int err = 0;
+ struct enetc_bdr *rx_ring;
+ struct rte_eth_dev_data *data = dev->data;
+ struct enetc_eth_adapter *adapter =
+ ENETC_DEV_PRIVATE(data->dev_private);
+ uint64_t rx_offloads = data->dev_conf.rxmode.offloads;
+
+ PMD_INIT_FUNC_TRACE();
+ if (nb_rx_desc > MAX_BD_COUNT)
+ return -1;
+
+ rx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
+ if (rx_ring == NULL) {
+ ENETC_PMD_ERR("Failed to allocate RX ring memory");
+ err = -ENOMEM;
+ return err;
+ }
+
+ rx_ring->index = rx_queue_id;
+ err = enetc4_alloc_rxbdr(data->port_id, rx_ring, nb_rx_desc);
+ if (err)
+ goto fail;
+
+ rx_ring->ndev = dev;
+ enetc4_setup_rxbdr(&adapter->hw.hw, rx_ring, mb_pool);
+ data->rx_queues[rx_queue_id] = rx_ring;
+
+ if (!rx_conf->rx_deferred_start) {
+ /* enable ring */
+ enetc4_rxbdr_wr(&adapter->hw.hw, rx_ring->index, ENETC_RBMR,
+ ENETC_RBMR_EN);
+ dev->data->rx_queue_state[rx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ } else {
+ dev->data->rx_queue_state[rx_ring->index] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
+ RTE_ETHER_CRC_LEN : 0);
+ return 0;
+fail:
+ rte_free(rx_ring);
+
+ return err;
+}
+
+void
+enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void *rxq = dev->data->rx_queues[qid];
+
+ if (rxq == NULL)
+ return;
+
+ struct enetc_bdr *rx_ring = (struct enetc_bdr *)rxq;
+ struct enetc_eth_hw *eth_hw =
+ ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
+ struct enetc_swbd *q_swbd;
+ struct enetc_hw *hw;
+ uint32_t val;
+ int i;
+
+ /* Disable the ring */
+ hw = ð_hw->hw;
+ val = enetc4_rxbdr_rd(hw, rx_ring->index, ENETC_RBMR);
+ val &= (~ENETC_RBMR_EN);
+ enetc4_rxbdr_wr(hw, rx_ring->index, ENETC_RBMR, val);
+
+ /* Clean the ring */
+ i = rx_ring->next_to_clean;
+ q_swbd = &rx_ring->q_swbd[i];
+ while (i != rx_ring->next_to_use) {
+ rte_pktmbuf_free(q_swbd->buffer_addr);
+ q_swbd->buffer_addr = NULL;
+ q_swbd++;
+ i++;
+ if (unlikely(i == rx_ring->bd_count)) {
+ i = 0;
+ q_swbd = &rx_ring->q_swbd[i];
+ }
+ }
+
+ enetc4_free_bdr(rx_ring);
+ rte_free(rx_ring);
+}
+
int
enetc4_dev_close(struct rte_eth_dev *dev)
{
struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ uint16_t i;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -147,6 +475,18 @@ enetc4_dev_close(struct rte_eth_dev *dev)
else
ret = enetc4_dev_stop(dev);
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ enetc4_rx_queue_release(dev, i);
+ dev->data->rx_queues[i] = NULL;
+ }
+ dev->data->nb_rx_queues = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ enetc4_tx_queue_release(dev, i);
+ dev->data->tx_queues[i] = NULL;
+ }
+ dev->data->nb_tx_queues = 0;
+
if (rte_eal_iova_mode() == RTE_IOVA_PA)
dpaax_iova_table_depopulate();
@@ -174,7 +514,93 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
return 0;
}
+int
+enetc4_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *rx_ring;
+ uint32_t rx_data;
+ PMD_INIT_FUNC_TRACE();
+ rx_ring = dev->data->rx_queues[qidx];
+ if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+ rx_data = enetc4_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+ ENETC_RBMR);
+ rx_data = rx_data | ENETC_RBMR_EN;
+ enetc4_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+ rx_data);
+ dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
+
+int
+enetc4_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *rx_ring;
+ uint32_t rx_data;
+
+ PMD_INIT_FUNC_TRACE();
+ rx_ring = dev->data->rx_queues[qidx];
+ if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+ rx_data = enetc4_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+ ENETC_RBMR);
+ rx_data = rx_data & (~ENETC_RBMR_EN);
+ enetc4_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+ rx_data);
+ dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+int
+enetc4_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *tx_ring;
+ uint32_t tx_data;
+
+ PMD_INIT_FUNC_TRACE();
+ tx_ring = dev->data->tx_queues[qidx];
+ if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+ tx_data = enetc4_txbdr_rd(&priv->hw.hw, tx_ring->index,
+ ENETC_TBMR);
+ tx_data = tx_data | ENETC_TBMR_EN;
+ enetc4_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+ tx_data);
+ dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
+
+int
+enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+ struct enetc_eth_adapter *priv =
+ ENETC_DEV_PRIVATE(dev->data->dev_private);
+ struct enetc_bdr *tx_ring;
+ uint32_t tx_data;
+
+ PMD_INIT_FUNC_TRACE();
+ tx_ring = dev->data->tx_queues[qidx];
+ if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+ tx_data = enetc4_txbdr_rd(&priv->hw.hw, tx_ring->index,
+ ENETC_TBMR);
+ tx_data = tx_data & (~ENETC_TBMR_EN);
+ enetc4_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+ tx_data);
+ dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
/*
* The set of PCI devices this driver supports
@@ -191,6 +617,14 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_stop = enetc4_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .rx_queue_setup = enetc4_rx_queue_setup,
+ .rx_queue_start = enetc4_rx_queue_start,
+ .rx_queue_stop = enetc4_rx_queue_stop,
+ .rx_queue_release = enetc4_rx_queue_release,
+ .tx_queue_setup = enetc4_tx_queue_setup,
+ .tx_queue_start = enetc4_tx_queue_start,
+ .tx_queue_stop = enetc4_tx_queue_stop,
+ .tx_queue_release = enetc4_tx_queue_release,
};
/*
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 7996d6decb..0c68229a8d 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -41,6 +41,14 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_stop = enetc4_vf_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .rx_queue_setup = enetc4_rx_queue_setup,
+ .rx_queue_start = enetc4_rx_queue_start,
+ .rx_queue_stop = enetc4_rx_queue_stop,
+ .rx_queue_release = enetc4_rx_queue_release,
+ .tx_queue_setup = enetc4_tx_queue_setup,
+ .tx_queue_start = enetc4_tx_queue_start,
+ .tx_queue_stop = enetc4_tx_queue_stop,
+ .tx_queue_release = enetc4_tx_queue_release,
};
static int
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index ea64c9f682..1fc5f11339 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2024 NXP
*/
#include <stdbool.h>
@@ -11,6 +11,7 @@
#include "rte_memzone.h"
#include "base/enetc_hw.h"
+#include "base/enetc4_hw.h"
#include "enetc.h"
#include "enetc_logs.h"
@@ -85,6 +86,12 @@ enetc_xmit_pkts(void *tx_queue,
int i, start, bds_to_use;
struct enetc_tx_bd *txbd;
struct enetc_bdr *tx_ring = (struct enetc_bdr *)tx_queue;
+ unsigned short buflen;
+ uint8_t *data;
+ int j;
+
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
i = tx_ring->next_to_use;
@@ -95,6 +102,13 @@ enetc_xmit_pkts(void *tx_queue,
start = 0;
while (nb_pkts--) {
tx_ring->q_swbd[i].buffer_addr = tx_pkts[start];
+
+ if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
+ buflen = rte_pktmbuf_pkt_len(tx_ring->q_swbd[i].buffer_addr);
+ data = rte_pktmbuf_mtod(tx_ring->q_swbd[i].buffer_addr, void *);
+ for (j = 0; j <= buflen; j += RTE_CACHE_LINE_SIZE)
+ dcbf(data + j);
+ }
txbd = ENETC_TXBD(*tx_ring, i);
tx_swbd = &tx_ring->q_swbd[i];
txbd->frm_len = tx_pkts[start]->pkt_len;
@@ -326,6 +340,12 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
int cleaned_cnt, i, bd_count;
struct enetc_swbd *rx_swbd;
union enetc_rx_bd *rxbd;
+ uint32_t bd_status;
+ uint8_t *data;
+ uint32_t j;
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
+
/* next descriptor to process */
i = rx_ring->next_to_clean;
@@ -351,9 +371,8 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
cleaned_cnt = enetc_bd_unused(rx_ring);
rx_swbd = &rx_ring->q_swbd[i];
- while (likely(rx_frm_cnt < work_limit)) {
- uint32_t bd_status;
+ while (likely(rx_frm_cnt < work_limit)) {
bd_status = rte_le_to_cpu_32(rxbd->r.lstatus);
if (!bd_status)
break;
@@ -366,6 +385,13 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
rx_swbd->buffer_addr->ol_flags = 0;
enetc_dev_rx_parse(rx_swbd->buffer_addr,
rxbd->r.parse_summary);
+
+ if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
+ data = rte_pktmbuf_mtod(rx_swbd->buffer_addr, void *);
+ for (j = 0; j <= rx_swbd->buffer_addr->pkt_len; j += RTE_CACHE_LINE_SIZE)
+ dccivac(data + j);
+ }
+
rx_pkts[rx_frm_cnt] = rx_swbd->buffer_addr;
cleaned_cnt++;
rx_swbd++;
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 03/12] net/enetc: Optimize ENETC4 data path
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
2024-10-23 6:24 ` [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
2024-10-23 6:24 ` [v2 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 04/12] net/enetc: Add TX checksum offload and RX checksum validation vanshika.shukla
` (10 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Apeksha Gupta <apeksha.gupta@nxp.com>
Improves the ENETC4 data path on the i.MX95 non-cache-coherent platform by:
- Adding separate RX and TX functions.
- Reducing memory accesses
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/enetc/base/enetc4_hw.h | 2 +
drivers/net/enetc/enetc.h | 5 +
drivers/net/enetc/enetc4_ethdev.c | 4 +-
drivers/net/enetc/enetc_rxtx.c | 147 ++++++++++++++++++++++++-----
4 files changed, 132 insertions(+), 26 deletions(-)
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 34a4ca3b02..759cfaba28 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -14,6 +14,8 @@
#define ENETC4_DEV_ID_VF 0xef00
#define PCI_VENDOR_ID_NXP 0x1131
+#define ENETC4_TXBD_FLAGS_F BIT(7)
+
/***************************ENETC port registers**************************/
#define ENETC4_PMR 0x10
#define ENETC4_PMR_EN (BIT(16) | BIT(17) | BIT(18))
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 9901e434d9..79c158513c 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -68,6 +68,7 @@ struct enetc_bdr {
struct rte_mempool *mb_pool; /* mbuf pool to populate RX ring. */
struct rte_eth_dev *ndev;
const struct rte_memzone *mz;
+ uint64_t ierrors;
};
/*
@@ -122,8 +123,12 @@ int enetc4_vf_dev_stop(struct rte_eth_dev *dev);
*/
uint16_t enetc_xmit_pkts(void *txq, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t enetc_xmit_pkts_nc(void *txq, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
uint16_t enetc_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t enetc_recv_pkts_nc(void *rxq, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
int enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt);
void enetc4_dev_hw_init(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 6c077c6071..ac5e0e5fdb 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -640,8 +640,8 @@ enetc4_dev_hw_init(struct rte_eth_dev *eth_dev)
ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- eth_dev->rx_pkt_burst = &enetc_recv_pkts;
- eth_dev->tx_pkt_burst = &enetc_xmit_pkts;
+ eth_dev->rx_pkt_burst = &enetc_recv_pkts_nc;
+ eth_dev->tx_pkt_burst = &enetc_xmit_pkts_nc;
/* Retrieving and storing the HW base address of device */
hw->hw.reg = (void *)pci_dev->mem_resource[0].addr;
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 1fc5f11339..9922915cf5 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -86,12 +86,6 @@ enetc_xmit_pkts(void *tx_queue,
int i, start, bds_to_use;
struct enetc_tx_bd *txbd;
struct enetc_bdr *tx_ring = (struct enetc_bdr *)tx_queue;
- unsigned short buflen;
- uint8_t *data;
- int j;
-
- struct enetc_eth_hw *hw =
- ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
i = tx_ring->next_to_use;
@@ -103,12 +97,6 @@ enetc_xmit_pkts(void *tx_queue,
while (nb_pkts--) {
tx_ring->q_swbd[i].buffer_addr = tx_pkts[start];
- if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
- buflen = rte_pktmbuf_pkt_len(tx_ring->q_swbd[i].buffer_addr);
- data = rte_pktmbuf_mtod(tx_ring->q_swbd[i].buffer_addr, void *);
- for (j = 0; j <= buflen; j += RTE_CACHE_LINE_SIZE)
- dcbf(data + j);
- }
txbd = ENETC_TXBD(*tx_ring, i);
tx_swbd = &tx_ring->q_swbd[i];
txbd->frm_len = tx_pkts[start]->pkt_len;
@@ -136,6 +124,61 @@ enetc_xmit_pkts(void *tx_queue,
return start;
}
+uint16_t
+enetc_xmit_pkts_nc(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct enetc_swbd *tx_swbd;
+ int i, start, bds_to_use;
+ struct enetc_tx_bd *txbd;
+ struct enetc_bdr *tx_ring = (struct enetc_bdr *)tx_queue;
+ unsigned int buflen, j;
+ uint8_t *data;
+
+ i = tx_ring->next_to_use;
+
+ bds_to_use = enetc_bd_unused(tx_ring);
+ if (bds_to_use < nb_pkts)
+ nb_pkts = bds_to_use;
+
+ start = 0;
+ while (nb_pkts--) {
+ tx_ring->q_swbd[i].buffer_addr = tx_pkts[start];
+
+ buflen = rte_pktmbuf_pkt_len(tx_ring->q_swbd[i].buffer_addr);
+ data = rte_pktmbuf_mtod(tx_ring->q_swbd[i].buffer_addr, void *);
+ for (j = 0; j <= buflen; j += RTE_CACHE_LINE_SIZE)
+ dcbf(data + j);
+
+ txbd = ENETC_TXBD(*tx_ring, i);
+ txbd->flags = rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_F);
+
+ tx_swbd = &tx_ring->q_swbd[i];
+ txbd->frm_len = buflen;
+ txbd->buf_len = txbd->frm_len;
+ txbd->addr = (uint64_t)(uintptr_t)
+ rte_cpu_to_le_64((size_t)tx_swbd->buffer_addr->buf_iova +
+ tx_swbd->buffer_addr->data_off);
+ i++;
+ start++;
+ if (unlikely(i == tx_ring->bd_count))
+ i = 0;
+ }
+
+ /* we're only cleaning up the Tx ring here, on the assumption that
+ * software is slower than hardware and hardware completed sending
+ * older frames out by now.
+ * We're also cleaning up the ring before kicking off Tx for the new
+ * batch to minimize chances of contention on the Tx ring
+ */
+ enetc_clean_tx_ring(tx_ring);
+
+ tx_ring->next_to_use = i;
+ enetc_wr_reg(tx_ring->tcir, i);
+ return start;
+}
+
int
enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
{
@@ -171,7 +214,7 @@ enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
k++;
if (unlikely(i == rx_ring->bd_count)) {
i = 0;
- rxbd = ENETC_RXBD(*rx_ring, 0);
+ rxbd = ENETC_RXBD(*rx_ring, i);
rx_swbd = &rx_ring->q_swbd[i];
}
}
@@ -341,11 +384,6 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
struct enetc_swbd *rx_swbd;
union enetc_rx_bd *rxbd;
uint32_t bd_status;
- uint8_t *data;
- uint32_t j;
- struct enetc_eth_hw *hw =
- ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
-
/* next descriptor to process */
i = rx_ring->next_to_clean;
@@ -386,12 +424,6 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
enetc_dev_rx_parse(rx_swbd->buffer_addr,
rxbd->r.parse_summary);
- if (hw->device_id == ENETC4_DEV_ID || hw->device_id == ENETC4_DEV_ID_VF) {
- data = rte_pktmbuf_mtod(rx_swbd->buffer_addr, void *);
- for (j = 0; j <= rx_swbd->buffer_addr->pkt_len; j += RTE_CACHE_LINE_SIZE)
- dccivac(data + j);
- }
-
rx_pkts[rx_frm_cnt] = rx_swbd->buffer_addr;
cleaned_cnt++;
rx_swbd++;
@@ -417,6 +449,73 @@ enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
return rx_frm_cnt;
}
+static int
+enetc_clean_rx_ring_nc(struct enetc_bdr *rx_ring,
+ struct rte_mbuf **rx_pkts,
+ int work_limit)
+{
+ int rx_frm_cnt = 0;
+ int cleaned_cnt, i;
+ struct enetc_swbd *rx_swbd;
+ union enetc_rx_bd *rxbd, rxbd_temp;
+ uint32_t bd_status;
+ uint8_t *data;
+ uint32_t j;
+
+ /* next descriptor to process */
+ i = rx_ring->next_to_clean;
+ /* next descriptor to process */
+ rxbd = ENETC_RXBD(*rx_ring, i);
+
+ cleaned_cnt = enetc_bd_unused(rx_ring);
+ rx_swbd = &rx_ring->q_swbd[i];
+
+ while (likely(rx_frm_cnt < work_limit)) {
+ rxbd_temp = *rxbd;
+ bd_status = rte_le_to_cpu_32(rxbd_temp.r.lstatus);
+ if (!bd_status)
+ break;
+ if (rxbd_temp.r.error)
+ rx_ring->ierrors++;
+
+ rx_swbd->buffer_addr->pkt_len = rxbd_temp.r.buf_len -
+ rx_ring->crc_len;
+ rx_swbd->buffer_addr->data_len = rx_swbd->buffer_addr->pkt_len;
+ rx_swbd->buffer_addr->hash.rss = rxbd_temp.r.rss_hash;
+ enetc_dev_rx_parse(rx_swbd->buffer_addr,
+ rxbd_temp.r.parse_summary);
+
+ data = rte_pktmbuf_mtod(rx_swbd->buffer_addr, void *);
+ for (j = 0; j <= rx_swbd->buffer_addr->pkt_len; j += RTE_CACHE_LINE_SIZE)
+ dccivac(data + j);
+
+ rx_pkts[rx_frm_cnt] = rx_swbd->buffer_addr;
+ cleaned_cnt++;
+ rx_swbd++;
+ i++;
+ if (unlikely(i == rx_ring->bd_count)) {
+ i = 0;
+ rx_swbd = &rx_ring->q_swbd[i];
+ }
+ rxbd = ENETC_RXBD(*rx_ring, i);
+ rx_frm_cnt++;
+ }
+
+ rx_ring->next_to_clean = i;
+ enetc_refill_rx_ring(rx_ring, cleaned_cnt);
+
+ return rx_frm_cnt;
+}
+
+uint16_t
+enetc_recv_pkts_nc(void *rxq, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct enetc_bdr *rx_ring = (struct enetc_bdr *)rxq;
+
+ return enetc_clean_rx_ring_nc(rx_ring, rx_pkts, nb_pkts);
+}
+
uint16_t
enetc_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts)
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
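The non-cacheable burst handlers added in the patch above are installed as the device's regular rx/tx burst callbacks, so applications reach them through the standard ethdev burst API rather than anything driver-specific. A minimal polling loop exercising that path could look like the sketch below (port and queue numbers are illustrative, not taken from the patch):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

/* Forward frames received on queue 0 back out of the same port.
 * rte_eth_rx_burst()/rte_eth_tx_burst() dispatch to the PMD burst
 * callbacks, i.e. enetc_recv_pkts_nc()/enetc_xmit_pkts_nc() here.
 */
static void
fwd_loop(uint16_t port_id)
{
	struct rte_mbuf *pkts[BURST_SZ];

	for (;;) {
		uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SZ);
		uint16_t nb_tx;

		if (nb_rx == 0)
			continue;

		nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

		/* Drop whatever the TX ring could not accept. */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}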
* [v2 04/12] net/enetc: Add TX checksum offload and RX checksum validation
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (2 preceding siblings ...)
2024-10-23 6:24 ` [v2 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 05/12] net/enetc: Add basic statistics vanshika.shukla
` (9 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Apeksha Gupta <apeksha.gupta@nxp.com>
This patch adds support for:
- L3 (IPv4, IPv6) TX checksum offload
- L4 (TCP, UDP) TX checksum offload
- RX checksum validation for IPv4, IPv6, TCP, UDP
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 ++
drivers/net/enetc/base/enetc4_hw.h | 14 ++++++++++
drivers/net/enetc/base/enetc_hw.h | 18 ++++++++++---
drivers/net/enetc/enetc.h | 5 ++++
drivers/net/enetc/enetc4_ethdev.c | 40 +++++++++++++++++++++++++++++
drivers/net/enetc/enetc_rxtx.c | 22 ++++++++++++++++
6 files changed, 97 insertions(+), 4 deletions(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 37b548dcab..55b3b95953 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+L3 checksum offload = Y
+L4 checksum offload = Y
Queue start/stop = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 759cfaba28..114d27f34b 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -14,12 +14,26 @@
#define ENETC4_DEV_ID_VF 0xef00
#define PCI_VENDOR_ID_NXP 0x1131
+/* enetc4 txbd flags */
+#define ENETC4_TXBD_FLAGS_L4CS BIT(0)
+#define ENETC4_TXBD_FLAGS_L_TX_CKSUM BIT(3)
#define ENETC4_TXBD_FLAGS_F BIT(7)
+/* L4 type */
+#define ENETC4_TXBD_L4T_UDP BIT(0)
+#define ENETC4_TXBD_L4T_TCP BIT(1)
+/* L3 type is set to 0 for IPv4 and 1 for IPv6 */
+#define ENETC4_TXBD_L3T 0
+/* IPv4 checksum */
+#define ENETC4_TXBD_IPCS 1
/***************************ENETC port registers**************************/
#define ENETC4_PMR 0x10
#define ENETC4_PMR_EN (BIT(16) | BIT(17) | BIT(18))
+#define ENETC4_PARCSCR 0x9c
+#define L3_CKSUM BIT(0)
+#define L4_CKSUM BIT(1)
+
/* Port Station interface promiscuous MAC mode register */
#define ENETC4_PSIPMMR 0x200
#define PSIPMMR_SI0_MAC_UP BIT(0)
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 2d63c54db6..3cdfe23fc0 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -189,8 +189,7 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_TX_ADDR(txq, addr) ((void *)((txq)->enetc_txbdr + (addr)))
-#define ENETC_TXBD_FLAGS_IE BIT(13)
-#define ENETC_TXBD_FLAGS_F BIT(15)
+#define ENETC_TXBD_FLAGS_F BIT(7)
/* ENETC Parsed values (Little Endian) */
#define ENETC_PARSE_ERROR 0x8000
@@ -249,8 +248,19 @@ struct enetc_tx_bd {
uint64_t addr;
uint16_t buf_len;
uint16_t frm_len;
- uint16_t err_csum;
- uint16_t flags;
+ union {
+ struct {
+ uint8_t l3_start:7;
+ uint8_t ipcs:1;
+ uint8_t l3_hdr_size:7;
+ uint8_t l3t:1;
+ uint8_t resv:5;
+ uint8_t l4t:3;
+ uint8_t flags;
+ };/* default layout */
+ uint32_t txstart;
+ uint32_t lstatus;
+ };
};
/* RX buffer descriptor */
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 79c158513c..c29353a89b 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -45,6 +45,11 @@
#define ENETC_TXBD(BDR, i) (&(((struct enetc_tx_bd *)((BDR).bd_base))[i]))
#define ENETC_RXBD(BDR, i) (&(((union enetc_rx_bd *)((BDR).bd_base))[i]))
+#define ENETC4_MBUF_F_TX_IP_IPV4 (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4)
+#define ENETC4_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_TCP_CKSUM | \
+ RTE_MBUF_F_TX_UDP_CKSUM)
+
struct enetc_swbd {
struct rte_mbuf *buffer_addr;
};
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index ac5e0e5fdb..ee66da742f 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -11,6 +11,18 @@
#include "enetc_logs.h"
#include "enetc.h"
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+
+/* Supported Tx offloads */
+static uint64_t dev_tx_offloads_sup =
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
+
static int
enetc4_dev_start(struct rte_eth_dev *dev)
{
@@ -128,6 +140,8 @@ enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_rx_queues = MAX_RX_RINGS;
dev_info->max_tx_queues = MAX_TX_RINGS;
dev_info->max_rx_pktlen = ENETC4_MAC_MAXFRM_SIZE;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
return 0;
}
@@ -498,6 +512,10 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
{
struct enetc_eth_hw *hw =
ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
+ uint64_t tx_offloads = eth_conf->txmode.offloads;
+ uint32_t checksum = L3_CKSUM | L4_CKSUM;
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t max_len;
uint32_t val;
@@ -511,6 +529,28 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
val = ENETC4_MAC_MAXFRM_SIZE | SDU_TYPE_MPDU;
enetc4_port_wr(enetc_hw, ENETC4_PTCTMSDUR(0), val | SDU_TYPE_MPDU);
+ /* Rx offloads which are enabled by default */
+ if (dev_rx_offloads_sup & ~rx_offloads) {
+ ENETC_PMD_INFO("Some of rx offloads enabled by default"
+ " - requested 0x%" PRIx64 " fixed are 0x%" PRIx64,
+ rx_offloads, dev_rx_offloads_sup);
+ }
+
+ /* Tx offloads which are enabled by default */
+ if (dev_tx_offloads_sup & ~tx_offloads) {
+ ENETC_PMD_INFO("Some of tx offloads enabled by default"
+ " - requested 0x%" PRIx64 " fixed are 0x%" PRIx64,
+ tx_offloads, dev_tx_offloads_sup);
+ }
+
+ if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
+ checksum &= ~L3_CKSUM;
+
+ if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ checksum &= ~L4_CKSUM;
+
+ enetc4_port_wr(enetc_hw, ENETC4_PARCSCR, checksum);
+
return 0;
}
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 9922915cf5..6680b46103 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -124,6 +124,26 @@ enetc_xmit_pkts(void *tx_queue,
return start;
}
+static void
+enetc4_tx_offload_checksum(struct rte_mbuf *mbuf, struct enetc_tx_bd *txbd)
+{
+ if ((mbuf->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4))
+ == ENETC4_MBUF_F_TX_IP_IPV4) {
+ txbd->l3t = ENETC4_TXBD_L3T;
+ txbd->ipcs = ENETC4_TXBD_IPCS;
+ txbd->l3_start = mbuf->l2_len;
+ txbd->l3_hdr_size = mbuf->l3_len / 4;
+ txbd->flags |= rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_L_TX_CKSUM);
+ if ((mbuf->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) == RTE_MBUF_F_TX_UDP_CKSUM) {
+ txbd->l4t = rte_cpu_to_le_16(ENETC4_TXBD_L4T_UDP);
+ txbd->flags |= rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_L4CS);
+ } else if ((mbuf->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) == RTE_MBUF_F_TX_TCP_CKSUM) {
+ txbd->l4t = rte_cpu_to_le_16(ENETC4_TXBD_L4T_TCP);
+ txbd->flags |= rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_L4CS);
+ }
+ }
+}
+
uint16_t
enetc_xmit_pkts_nc(void *tx_queue,
struct rte_mbuf **tx_pkts,
@@ -153,6 +173,8 @@ enetc_xmit_pkts_nc(void *tx_queue,
txbd = ENETC_TXBD(*tx_ring, i);
txbd->flags = rte_cpu_to_le_16(ENETC4_TXBD_FLAGS_F);
+ if (tx_ring->q_swbd[i].buffer_addr->ol_flags & ENETC4_TX_CKSUM_OFFLOAD_MASK)
+ enetc4_tx_offload_checksum(tx_ring->q_swbd[i].buffer_addr, txbd);
tx_swbd = &tx_ring->q_swbd[i];
txbd->frm_len = buflen;
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
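For the offloads in the patch above to take effect, an application has to request them when configuring the port and then flag each outgoing mbuf. A minimal sketch, assuming an IPv4/UDP frame and a single queue pair (the packet layout and port id are illustrative, not part of the patch):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>

/* Ask for RX/TX checksum offload when configuring the port. */
static int
cfg_csum_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};

	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
			       RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
			       RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
	conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			       RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
			       RTE_ETH_TX_OFFLOAD_TCP_CKSUM;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

/* Flag an IPv4/UDP mbuf so the TX path inserts the checksums in hardware. */
static void
request_tx_csum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_UDP_CKSUM;
}

On the receive side, the bits requested in rxmode.offloads feed the ENETC4_PARCSCR programming shown in enetc4_dev_configure() above.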
* [v2 05/12] net/enetc: Add basic statistics
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (3 preceding siblings ...)
2024-10-23 6:24 ` [v2 04/12] net/enetc: Add TX checksum offload and RX checksum validation vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 06/12] net/enetc: Add packet type parsing support vanshika.shukla
` (8 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Apeksha Gupta <apeksha.gupta@nxp.com>
Introduces basic statistics collection for the ENETC4 PMD, including:
- Packet transmit/receive counts
- Byte transmit/receive counts
- Error counters (TX/RX drops, errors)
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc4_hw.h | 7 +++++
drivers/net/enetc/base/enetc_hw.h | 5 +++-
drivers/net/enetc/enetc4_ethdev.c | 42 +++++++++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 24 +++++++++++++++++
5 files changed, 78 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 55b3b95953..e814852d2d 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Basic stats = Y
L3 checksum offload = Y
L4 checksum offload = Y
Queue start/stop = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 114d27f34b..874cdc4775 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -103,6 +103,13 @@
#define IFMODE_SGMII 5
#define PM_IF_MODE_ENA BIT(15)
+/* Station interface statistics */
+#define ENETC4_SIROCT0 0x300
+#define ENETC4_SIRFRM0 0x308
+#define ENETC4_SITOCT0 0x320
+#define ENETC4_SITFRM0 0x328
+#define ENETC4_SITDFCR 0x340
+
/* general register accessors */
#define enetc4_rd_reg(reg) rte_read32((void *)(reg))
#define enetc4_wr_reg(reg, val) rte_write32((val), (void *)(reg))
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 3cdfe23fc0..3208d91bc5 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -278,7 +278,10 @@ union enetc_rx_bd {
union {
struct {
uint16_t flags;
- uint16_t error;
+ uint8_t error;
+ uint8_t resv:6;
+ uint8_t r:1;
+ uint8_t f:1;
};
uint32_t lstatus;
};
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index ee66da742f..6a165f2ff2 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -473,6 +473,46 @@ enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
rte_free(rx_ring);
}
+static
+int enetc4_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+
+ /*
+ * Total received packets, bad + good. To count only good received
+ * packets, use the ENETC4_PM_RFRM and ENETC4_PM_TFRM registers
+ * instead.
+ */
+ stats->ipackets = enetc4_port_rd(enetc_hw, ENETC4_PM_RPKT(0));
+ stats->opackets = enetc4_port_rd(enetc_hw, ENETC4_PM_TPKT(0));
+ stats->ibytes = enetc4_port_rd(enetc_hw, ENETC4_PM_REOCT(0));
+ stats->obytes = enetc4_port_rd(enetc_hw, ENETC4_PM_TEOCT(0));
+ /*
+ * Dropped + truncated packets; use ENETC4_PM_RDRNTP(0) to exclude
+ * truncated packets.
+ */
+ stats->imissed = enetc4_port_rd(enetc_hw, ENETC4_PM_RDRP(0));
+ stats->ierrors = enetc4_port_rd(enetc_hw, ENETC4_PM_RERR(0));
+ stats->oerrors = enetc4_port_rd(enetc_hw, ENETC4_PM_TERR(0));
+
+ return 0;
+}
+
+static int
+enetc4_stats_reset(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+
+ enetc4_port_wr(enetc_hw, ENETC4_PM0_STAT_CONFIG, ENETC4_CLEAR_STATS);
+
+ return 0;
+}
+
int
enetc4_dev_close(struct rte_eth_dev *dev)
{
@@ -657,6 +697,8 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_stop = enetc4_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .stats_get = enetc4_stats_get,
+ .stats_reset = enetc4_stats_reset,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 0c68229a8d..0d35fc2e1c 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -26,6 +26,29 @@ enetc4_vf_dev_start(struct rte_eth_dev *dev __rte_unused)
return 0;
}
+static int
+enetc4_vf_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_bdr *rx_ring;
+ uint8_t i;
+
+ PMD_INIT_FUNC_TRACE();
+ stats->ipackets = enetc4_rd(enetc_hw, ENETC4_SIRFRM0);
+ stats->opackets = enetc4_rd(enetc_hw, ENETC4_SITFRM0);
+ stats->ibytes = enetc4_rd(enetc_hw, ENETC4_SIROCT0);
+ stats->obytes = enetc4_rd(enetc_hw, ENETC4_SITOCT0);
+ stats->oerrors = enetc4_rd(enetc_hw, ENETC4_SITDFCR);
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rx_ring = dev->data->rx_queues[i];
+ stats->ierrors += rx_ring->ierrors;
+ }
+ return 0;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -41,6 +64,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_stop = enetc4_vf_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .stats_get = enetc4_vf_stats_get,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
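The counters added above are read and cleared through the generic ethdev statistics calls, which land in the stats_get/stats_reset ops wired up in the diff. A minimal sketch (the port id is illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print the basic counters of one port, then clear them. */
static void
dump_stats(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) != 0)
		return;

	printf("rx: %" PRIu64 " pkts %" PRIu64 " bytes %" PRIu64 " missed %" PRIu64 " errors\n",
	       st.ipackets, st.ibytes, st.imissed, st.ierrors);
	printf("tx: %" PRIu64 " pkts %" PRIu64 " bytes %" PRIu64 " errors\n",
	       st.opackets, st.obytes, st.oerrors);

	rte_eth_stats_reset(port_id);
}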
* [v2 06/12] net/enetc: Add packet type parsing support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (4 preceding siblings ...)
2024-10-23 6:24 ` [v2 05/12] net/enetc: Add basic statistics vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 07/12] net/enetc: Add support for multiple queues with RSS vanshika.shukla
` (7 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla; +Cc: Apeksha Gupta
From: Apeksha Gupta <apeksha.gupta@nxp.com>
Introduces packet type parsing for the ENETC4 PMD, supporting:
- RTE_PTYPE_L2_ETHER (Ethernet II)
- RTE_PTYPE_L3_IPV4 (IPv4)
- RTE_PTYPE_L3_IPV6 (IPv6)
- RTE_PTYPE_L4_TCP (TCP)
- RTE_PTYPE_L4_UDP (UDP)
- RTE_PTYPE_L4_SCTP (SCTP)
- RTE_PTYPE_L4_ICMP (ICMP)
- RTE_PTYPE_L4_FRAG (IPv4/IPv6 fragmentation)
- RTE_PTYPE_TUNNEL_ESP (ESP tunneling)
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc_hw.h | 5 +++++
drivers/net/enetc/enetc.h | 2 ++
drivers/net/enetc/enetc4_ethdev.c | 23 +++++++++++++++++++++++
drivers/net/enetc/enetc4_vf.c | 1 +
drivers/net/enetc/enetc_rxtx.c | 10 ++++++++++
6 files changed, 42 insertions(+)
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index e814852d2d..3356475317 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Packet type parsing = Y
Basic stats = Y
L3 checksum offload = Y
L4 checksum offload = Y
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 3208d91bc5..10bd3c050c 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -196,6 +196,7 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_PKT_TYPE_ETHER 0x0060
#define ENETC_PKT_TYPE_IPV4 0x0000
#define ENETC_PKT_TYPE_IPV6 0x0020
+#define ENETC_PKT_TYPE_IPV6_EXT 0x0080
#define ENETC_PKT_TYPE_IPV4_TCP \
(0x0010 | ENETC_PKT_TYPE_IPV4)
#define ENETC_PKT_TYPE_IPV6_TCP \
@@ -208,6 +209,10 @@ enum enetc_bdr_type {TX, RX};
(0x0013 | ENETC_PKT_TYPE_IPV4)
#define ENETC_PKT_TYPE_IPV6_SCTP \
(0x0013 | ENETC_PKT_TYPE_IPV6)
+#define ENETC_PKT_TYPE_IPV4_FRAG \
+ (0x0001 | ENETC_PKT_TYPE_IPV4)
+#define ENETC_PKT_TYPE_IPV6_FRAG \
+ (0x0001 | ENETC_PKT_TYPE_IPV6_EXT | ENETC_PKT_TYPE_IPV6)
#define ENETC_PKT_TYPE_IPV4_ICMP \
(0x0003 | ENETC_PKT_TYPE_IPV4)
#define ENETC_PKT_TYPE_IPV6_ICMP \
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index c29353a89b..8d4e432426 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -117,6 +117,8 @@ int enetc4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
int enetc4_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx);
int enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx);
void enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+const uint32_t *enetc4_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused,
+ size_t *no_of_elements);
/*
* enetc4_vf function prototype
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 6a165f2ff2..f920493176 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -682,6 +682,28 @@ enetc4_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
return 0;
}
+const uint32_t *
+enetc4_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused,
+ size_t *no_of_elements)
+{
+ PMD_INIT_FUNC_TRACE();
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4,
+ RTE_PTYPE_L3_IPV6,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_TUNNEL_ESP,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ *no_of_elements = RTE_DIM(ptypes);
+ return ptypes;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -707,6 +729,7 @@ static const struct eth_dev_ops enetc4_ops = {
.tx_queue_start = enetc4_tx_queue_start,
.tx_queue_stop = enetc4_tx_queue_stop,
.tx_queue_release = enetc4_tx_queue_release,
+ .dev_supported_ptypes_get = enetc4_supported_ptypes_get,
};
/*
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 0d35fc2e1c..360bb0c710 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -73,6 +73,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.tx_queue_start = enetc4_tx_queue_start,
.tx_queue_stop = enetc4_tx_queue_stop,
.tx_queue_release = enetc4_tx_queue_release,
+ .dev_supported_ptypes_get = enetc4_supported_ptypes_get,
};
static int
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 6680b46103..a2b8153085 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -389,6 +389,16 @@ enetc_dev_rx_parse(struct rte_mbuf *m, uint16_t parse_results)
RTE_PTYPE_L3_IPV6 |
RTE_PTYPE_L4_ICMP;
return;
+ case ENETC_PKT_TYPE_IPV4_FRAG:
+ m->packet_type = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_FRAG;
+ return;
+ case ENETC_PKT_TYPE_IPV6_FRAG:
+ m->packet_type = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_FRAG;
+ return;
/* More switch cases can be added */
default:
enetc_slow_parsing(m, parse_results);
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
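Applications consume the result of this parsing through mbuf->packet_type, using the generic RTE_PTYPE_* masks. A small RX-side sketch, matching only the exact L3/L4 combinations the patch above reports:

#include <stdbool.h>
#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* True for the plain IPv4/IPv6 + UDP types reported by this PMD. */
static bool
pkt_is_udp(const struct rte_mbuf *m)
{
	uint32_t l3 = m->packet_type & RTE_PTYPE_L3_MASK;
	uint32_t l4 = m->packet_type & RTE_PTYPE_L4_MASK;

	return (l3 == RTE_PTYPE_L3_IPV4 || l3 == RTE_PTYPE_L3_IPV6) &&
	       l4 == RTE_PTYPE_L4_UDP;
}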
* [v2 07/12] net/enetc: Add support for multiple queues with RSS
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (5 preceding siblings ...)
2024-10-23 6:24 ` [v2 06/12] net/enetc: Add packet type parsing support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup vanshika.shukla
` (6 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Gagandeep Singh <g.singh@nxp.com>
Introduces support for multiple transmit and receive queues in the ENETC4
PMD, enabling scalable packet processing, higher throughput and lower
latency. Packet distribution across the queues is handled through Receive
Side Scaling (RSS).
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc4_hw.h | 11 +
drivers/net/enetc/base/enetc_hw.h | 21 +-
drivers/net/enetc/enetc.h | 37 +++-
drivers/net/enetc/enetc4_ethdev.c | 145 +++++++++++--
drivers/net/enetc/enetc4_vf.c | 10 +-
drivers/net/enetc/enetc_cbdr.c | 311 ++++++++++++++++++++++++++++
drivers/net/enetc/meson.build | 5 +-
drivers/net/enetc/ntmp.h | 110 ++++++++++
9 files changed, 617 insertions(+), 34 deletions(-)
create mode 100644 drivers/net/enetc/enetc_cbdr.c
create mode 100644 drivers/net/enetc/ntmp.h
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 3356475317..79430d0018 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+RSS hash = Y
Packet type parsing = Y
Basic stats = Y
L3 checksum offload = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 874cdc4775..49446f2cb4 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -110,6 +110,17 @@
#define ENETC4_SITFRM0 0x328
#define ENETC4_SITDFCR 0x340
+/* Control BDR regs */
+#define ENETC4_SICBDRMR 0x800
+#define ENETC4_SICBDRSR 0x804 /* RO */
+#define ENETC4_SICBDRBAR0 0x810
+#define ENETC4_SICBDRBAR1 0x814
+#define ENETC4_SICBDRPIR 0x818
+#define ENETC4_SICBDRCIR 0x81c
+#define ENETC4_SICBDRLENR 0x820
+#define ENETC4_SICTR0 0x18
+#define ENETC4_SICTR1 0x1c
+
/* general register accessors */
#define enetc4_rd_reg(reg) rte_read32((void *)(reg))
#define enetc4_wr_reg(reg, val) rte_write32((val), (void *)(reg))
diff --git a/drivers/net/enetc/base/enetc_hw.h b/drivers/net/enetc/base/enetc_hw.h
index 10bd3c050c..3cb56cd851 100644
--- a/drivers/net/enetc/base/enetc_hw.h
+++ b/drivers/net/enetc/base/enetc_hw.h
@@ -22,6 +22,10 @@
/* SI regs, offset: 0h */
#define ENETC_SIMR 0x0
#define ENETC_SIMR_EN BIT(31)
+#define ENETC_SIMR_RSSE BIT(0)
+
+/* BDR grouping*/
+#define ENETC_SIRBGCR 0x38
#define ENETC_SICAR0 0x40
#define ENETC_SICAR0_COHERENT 0x2B2B6727
@@ -29,6 +33,7 @@
#define ENETC_SIPMAR1 0x84
#define ENETC_SICAPR0 0x900
+#define ENETC_SICAPR0_BDR_MASK 0xFF
#define ENETC_SICAPR1 0x904
#define ENETC_SIMSITRV(n) (0xB00 + (n) * 0x4)
@@ -36,6 +41,11 @@
#define ENETC_SICCAPR 0x1200
+#define ENETC_SIPCAPR0 0x20
+#define ENETC_SIPCAPR0_RSS BIT(8)
+#define ENETC_SIRSSCAPR 0x1600
+#define ENETC_SIRSSCAPR_GET_NUM_RSS(val) (BIT((val) & 0xf) * 32)
+
/* enum for BD type */
enum enetc_bdr_type {TX, RX};
@@ -44,6 +54,7 @@ enum enetc_bdr_type {TX, RX};
/* RX BDR reg offsets */
#define ENETC_RBMR 0x0 /* RX BDR mode register*/
#define ENETC_RBMR_EN BIT(31)
+#define ENETC_BMR_RESET 0x0 /* BDR reset*/
#define ENETC_RBSR 0x4 /* Rx BDR status register*/
#define ENETC_RBBSR 0x8 /* Rx BDR buffer size register*/
@@ -231,15 +242,6 @@ struct enetc_eth_mac_info {
uint8_t get_link_status;
};
-struct enetc_eth_hw {
- struct rte_eth_dev *ndev;
- struct enetc_hw hw;
- uint16_t device_id;
- uint16_t vendor_id;
- uint8_t revision_id;
- struct enetc_eth_mac_info mac;
-};
-
/* Transmit Descriptor */
struct enetc_tx_desc {
uint64_t addr;
@@ -292,5 +294,4 @@ union enetc_rx_bd {
};
} r;
};
-
#endif
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 8d4e432426..354cd761d7 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -10,7 +10,9 @@
#include "compat.h"
#include "base/enetc_hw.h"
+#include "base/enetc4_hw.h"
#include "enetc_logs.h"
+#include "ntmp.h"
#define PCI_VENDOR_ID_FREESCALE 0x1957
@@ -50,6 +52,18 @@
RTE_MBUF_F_TX_TCP_CKSUM | \
RTE_MBUF_F_TX_UDP_CKSUM)
+#define ENETC_CBD(R, i) (&(((struct enetc_cbd *)((R).bd_base))[i]))
+#define ENETC_CBDR_TIMEOUT 1000 /* In multiple of ENETC_CBDR_DELAY */
+#define ENETC_CBDR_DELAY 100 /* usecs */
+#define ENETC_CBDR_SIZE 64
+#define ENETC_CBDR_ALIGN 128
+
+/* supported RSS */
+#define ENETC_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IP | \
+ RTE_ETH_RSS_UDP | \
+ RTE_ETH_RSS_TCP)
+
struct enetc_swbd {
struct rte_mbuf *buffer_addr;
};
@@ -76,6 +90,19 @@ struct enetc_bdr {
uint64_t ierrors;
};
+struct enetc_eth_hw {
+ struct rte_eth_dev *ndev;
+ struct enetc_hw hw;
+ uint16_t device_id;
+ uint16_t vendor_id;
+ uint8_t revision_id;
+ struct enetc_eth_mac_info mac;
+ struct netc_cbdr cbdr;
+ uint32_t num_rss;
+ uint32_t max_rx_queues;
+ uint32_t max_tx_queues;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -102,7 +129,7 @@ struct enetc_eth_adapter {
int enetc4_pci_remove(struct rte_pci_device *pci_dev);
int enetc4_dev_configure(struct rte_eth_dev *dev);
int enetc4_dev_close(struct rte_eth_dev *dev);
-int enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+int enetc4_dev_infos_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
int enetc4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
@@ -149,4 +176,12 @@ enetc_bd_unused(struct enetc_bdr *bdr)
return bdr->bd_count + bdr->next_to_clean - bdr->next_to_use - 1;
}
+
+/* CBDR prototypes */
+int enetc4_setup_cbdr(struct rte_eth_dev *dev, struct enetc_hw *hw,
+ int bd_count, struct netc_cbdr *cbdr);
+void netc_free_cbdr(struct netc_cbdr *cbdr);
+int ntmp_rsst_query_or_update_entry(struct netc_cbdr *cbdr, uint32_t *table,
+ int count, bool query);
+
#endif /* _ENETC_H_ */
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index f920493176..a09744e277 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -7,7 +7,6 @@
#include <dpaax_iova_table.h>
#include "kpage_ncache_api.h"
-#include "base/enetc4_hw.h"
#include "enetc_logs.h"
#include "enetc.h"
@@ -123,10 +122,14 @@ enetc4_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
}
int
-enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+enetc4_dev_infos_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
PMD_INIT_FUNC_TRACE();
+
dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = MAX_BD_COUNT,
.nb_min = MIN_BD_COUNT,
@@ -137,11 +140,12 @@ enetc4_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
.nb_min = MIN_BD_COUNT,
.nb_align = BD_ALIGN,
};
- dev_info->max_rx_queues = MAX_RX_RINGS;
- dev_info->max_tx_queues = MAX_TX_RINGS;
+ dev_info->max_rx_queues = hw->max_rx_queues;
+ dev_info->max_tx_queues = hw->max_tx_queues;
dev_info->max_rx_pktlen = ENETC4_MAC_MAXFRM_SIZE;
dev_info->rx_offload_capa = dev_rx_offloads_sup;
dev_info->tx_offload_capa = dev_tx_offloads_sup;
+ dev_info->flow_type_rss_offloads = ENETC_RSS_OFFLOAD_ALL;
return 0;
}
@@ -167,6 +171,11 @@ mark_memory_ncache(struct enetc_bdr *bdr, const char *mz_name, unsigned int size
mz->hugepage_sz);
bdr->mz = mz;
+ /* Double check memzone alignment and hugepage size */
+ if (!rte_is_aligned(bdr->bd_base, size))
+ ENETC_PMD_WARN("Memzone is not aligned to %x", size);
+
+ ENETC_PMD_DEBUG("Ring Hugepage start address = %p", bdr->bd_base);
/* Mark memory NON-CACHEABLE */
huge_page =
(uint64_t)RTE_PTR_ALIGN_FLOOR(bdr->bd_base, size);
@@ -186,7 +195,7 @@ enetc4_alloc_txbdr(uint16_t port_id, struct enetc_bdr *txr, uint16_t nb_desc)
if (txr->q_swbd == NULL)
return -ENOMEM;
- snprintf(mz_name, sizeof(mz_name), "bdt_addr_%d", port_id);
+ snprintf(mz_name, sizeof(mz_name), "bdt_addr_%d_%d", port_id, txr->index);
if (mark_memory_ncache(txr, mz_name, SIZE_2MB)) {
ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
rte_free(txr->q_swbd);
@@ -287,17 +296,20 @@ void
enetc4_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
void *txq = dev->data->tx_queues[qid];
+ struct enetc_hw *hw;
+ struct enetc_swbd *tx_swbd;
+ int i;
+ uint32_t val;
+ struct enetc_bdr *tx_ring;
+ struct enetc_eth_hw *eth_hw;
+ PMD_INIT_FUNC_TRACE();
if (txq == NULL)
return;
- struct enetc_bdr *tx_ring = (struct enetc_bdr *)txq;
- struct enetc_eth_hw *eth_hw =
+ tx_ring = (struct enetc_bdr *)txq;
+ eth_hw =
ENETC_DEV_PRIVATE_TO_HW(tx_ring->ndev->data->dev_private);
- struct enetc_hw *hw;
- struct enetc_swbd *tx_swbd;
- int i;
- uint32_t val;
/* Disable the ring */
hw = &eth_hw->hw;
@@ -335,7 +347,7 @@ enetc4_alloc_rxbdr(uint16_t port_id, struct enetc_bdr *rxr,
if (rxr->q_swbd == NULL)
return -ENOMEM;
- snprintf(mz_name, sizeof(mz_name), "bdr_addr_%d", port_id);
+ snprintf(mz_name, sizeof(mz_name), "bdr_addr_%d_%d", port_id, rxr->index);
if (mark_memory_ncache(rxr, mz_name, SIZE_2MB)) {
ENETC_PMD_ERR("Failed to mark BD memory non-cacheable!");
rte_free(rxr->q_swbd);
@@ -437,17 +449,20 @@ void
enetc4_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
void *rxq = dev->data->rx_queues[qid];
+ struct enetc_swbd *q_swbd;
+ struct enetc_hw *hw;
+ uint32_t val;
+ int i;
+ struct enetc_bdr *rx_ring;
+ struct enetc_eth_hw *eth_hw;
+ PMD_INIT_FUNC_TRACE();
if (rxq == NULL)
return;
- struct enetc_bdr *rx_ring = (struct enetc_bdr *)rxq;
- struct enetc_eth_hw *eth_hw =
+ rx_ring = (struct enetc_bdr *)rxq;
+ eth_hw =
ENETC_DEV_PRIVATE_TO_HW(rx_ring->ndev->data->dev_private);
- struct enetc_swbd *q_swbd;
- struct enetc_hw *hw;
- uint32_t val;
- int i;
/* Disable the ring */
hw = &eth_hw->hw;
@@ -513,10 +528,22 @@ enetc4_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+static void
+enetc4_rss_configure(struct enetc_hw *hw, int enable)
+{
+ uint32_t reg;
+
+ reg = enetc4_rd(hw, ENETC_SIMR);
+ reg &= ~ENETC_SIMR_RSSE;
+ reg |= (enable) ? ENETC_SIMR_RSSE : 0;
+ enetc4_wr(hw, ENETC_SIMR, reg);
+}
+
int
enetc4_dev_close(struct rte_eth_dev *dev)
{
struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
uint16_t i;
int ret;
@@ -529,6 +556,13 @@ enetc4_dev_close(struct rte_eth_dev *dev)
else
ret = enetc4_dev_stop(dev);
+ if (dev->data->nb_rx_queues > 1) {
+ /* Disable RSS */
+ enetc4_rss_configure(enetc_hw, false);
+ /* Free CBDR */
+ netc_free_cbdr(&hw->cbdr);
+ }
+
for (i = 0; i < dev->data->nb_rx_queues; i++) {
enetc4_rx_queue_release(dev, i);
dev->data->rx_queues[i] = NULL;
@@ -558,7 +592,9 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
uint32_t checksum = L3_CKSUM | L4_CKSUM;
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t max_len;
- uint32_t val;
+ uint32_t val, num_rss;
+ uint32_t ret = 0, i;
+ uint32_t *rss_table;
PMD_INIT_FUNC_TRACE();
@@ -591,6 +627,69 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
enetc4_port_wr(enetc_hw, ENETC4_PARCSCR, checksum);
+ /* Disable and reset RX and TX rings */
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ enetc4_rxbdr_wr(enetc_hw, i, ENETC_RBMR, ENETC_BMR_RESET);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ enetc4_rxbdr_wr(enetc_hw, i, ENETC_TBMR, ENETC_BMR_RESET);
+
+ if (dev->data->nb_rx_queues <= 1)
+ return 0;
+
+ /* Setup RSS */
+ /* Setup control BDR */
+ ret = enetc4_setup_cbdr(dev, enetc_hw, ENETC_CBDR_SIZE, &hw->cbdr);
+ if (ret) {
+ /* Disable RSS */
+ enetc4_rss_configure(enetc_hw, false);
+ return ret;
+ }
+
+ /* Reset CIR again after enable CBDR*/
+ rte_delay_us(ENETC_CBDR_DELAY);
+ ENETC_PMD_DEBUG("CIR %x after CBDR enable", rte_read32(hw->cbdr.regs.cir));
+ rte_write32(0, hw->cbdr.regs.cir);
+ ENETC_PMD_DEBUG("CIR %x after reset", rte_read32(hw->cbdr.regs.cir));
+
+ val = enetc_rd(enetc_hw, ENETC_SIPCAPR0);
+ if (val & ENETC_SIPCAPR0_RSS) {
+ num_rss = enetc_rd(enetc_hw, ENETC_SIRSSCAPR);
+ hw->num_rss = ENETC_SIRSSCAPR_GET_NUM_RSS(num_rss);
+ ENETC_PMD_DEBUG("num_rss = %d", hw->num_rss);
+
+ /* Add number of BDR groups */
+ enetc4_wr(enetc_hw, ENETC_SIRBGCR, dev->data->nb_rx_queues);
+
+
+ /* Configuring indirection table with default values
+ * Hash algorithm and RSS secret key to be filled by PF
+ */
+ rss_table = rte_malloc(NULL, hw->num_rss * sizeof(*rss_table), ENETC_CBDR_ALIGN);
+ if (!rss_table) {
+ enetc4_rss_configure(enetc_hw, false);
+ netc_free_cbdr(&hw->cbdr);
+ return -ENOMEM;
+ }
+
+ ENETC_PMD_DEBUG("Enabling RSS for port %s with queues = %d", dev->device->name,
+ dev->data->nb_rx_queues);
+ for (i = 0; i < hw->num_rss; i++)
+ rss_table[i] = i % dev->data->nb_rx_queues;
+
+ ret = ntmp_rsst_query_or_update_entry(&hw->cbdr, rss_table, hw->num_rss, false);
+ if (ret) {
+ ENETC_PMD_WARN("RSS indirection table update failed, "
+ "scaling behaviour is undefined");
+ enetc4_rss_configure(enetc_hw, false);
+ netc_free_cbdr(&hw->cbdr);
+ }
+ rte_free(rss_table);
+
+ /* Enable RSS */
+ enetc4_rss_configure(enetc_hw, true);
+ }
+
return 0;
}
@@ -775,11 +874,19 @@ enetc4_dev_init(struct rte_eth_dev *eth_dev)
ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
int error = 0;
+ uint32_t si_cap;
+ struct enetc_hw *enetc_hw = &hw->hw;
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &enetc4_ops;
enetc4_dev_hw_init(eth_dev);
+ si_cap = enetc_rd(enetc_hw, ENETC_SICAPR0);
+ hw->max_tx_queues = si_cap & ENETC_SICAPR0_BDR_MASK;
+ hw->max_rx_queues = (si_cap >> 16) & ENETC_SICAPR0_BDR_MASK;
+
+ ENETC_PMD_DEBUG("Max RX queues = %d Max TX queues = %d",
+ hw->max_rx_queues, hw->max_tx_queues);
error = enetc4_mac_init(hw, eth_dev);
if (error != 0) {
ENETC_PMD_ERR("MAC initialization failed");
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 360bb0c710..a9fb33c432 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -5,8 +5,6 @@
#include <stdbool.h>
#include <rte_random.h>
#include <dpaax_iova_table.h>
-#include "base/enetc4_hw.h"
-#include "base/enetc_hw.h"
#include "enetc_logs.h"
#include "enetc.h"
@@ -137,11 +135,19 @@ enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
int error = 0;
+ uint32_t si_cap;
+ struct enetc_hw *enetc_hw = &hw->hw;
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &enetc4_vf_ops;
enetc4_dev_hw_init(eth_dev);
+ si_cap = enetc_rd(enetc_hw, ENETC_SICAPR0);
+ hw->max_tx_queues = si_cap & ENETC_SICAPR0_BDR_MASK;
+ hw->max_rx_queues = (si_cap >> 16) & ENETC_SICAPR0_BDR_MASK;
+
+ ENETC_PMD_DEBUG("Max RX queues = %d Max TX queues = %d",
+ hw->max_rx_queues, hw->max_tx_queues);
error = enetc4_vf_mac_init(hw, eth_dev);
if (error != 0) {
ENETC_PMD_ERR("MAC initialization failed!!");
diff --git a/drivers/net/enetc/enetc_cbdr.c b/drivers/net/enetc/enetc_cbdr.c
new file mode 100644
index 0000000000..021090775f
--- /dev/null
+++ b/drivers/net/enetc/enetc_cbdr.c
@@ -0,0 +1,311 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#include <ethdev_pci.h>
+
+#include "enetc_logs.h"
+#include "enetc.h"
+
+#define NTMP_RSST_ID 3
+
+/* Define NTMP Access Method */
+#define NTMP_AM_ENTRY_ID 0
+#define NTMP_AM_EXACT_KEY 1
+#define NTMP_AM_SEARCH 2
+#define NTMP_AM_TERNARY_KEY 3
+
+/* Define NTMP Header Version */
+#define NTMP_HEADER_VERSION2 2
+
+#define NTMP_REQ_HDR_NPF BIT(15)
+
+#define NTMP_RESP_LEN_MASK GENMASK(19, 0)
+#define NTMP_REQ_LEN_MASK GENMASK(31, 20)
+
+#define ENETC_NTMP_ENTRY_ID_SIZE 4
+
+#define ENETC_RSS_TABLE_ENTRY_NUM 64
+#define ENETC_RSS_CFGEU BIT(0)
+#define ENETC_RSS_STSEU BIT(1)
+#define ENETC_RSS_STSE_DATA_SIZE(n) ((n) * 8)
+#define ENETC_RSS_CFGE_DATA_SIZE(n) (n)
+
+#define NTMP_REQ_RESP_LEN(req, resp) (((req) << 20 & NTMP_REQ_LEN_MASK) | \
+ ((resp) & NTMP_RESP_LEN_MASK))
+
+static inline uint32_t
+netc_cbdr_read(void *reg)
+{
+ return rte_read32(reg);
+}
+
+static inline void
+netc_cbdr_write(void *reg, uint32_t val)
+{
+ rte_write32(val, reg);
+}
+
+static inline void
+ntmp_fill_request_headr(union netc_cbd *cbd, dma_addr_t dma,
+ int len, int table_id, int cmd,
+ int access_method)
+{
+ dma_addr_t dma_align;
+
+ memset(cbd, 0, sizeof(*cbd));
+ dma_align = dma;
+ cbd->ntmp_req_hdr.addr = dma_align;
+ cbd->ntmp_req_hdr.len = len;
+ cbd->ntmp_req_hdr.cmd = cmd;
+ cbd->ntmp_req_hdr.access_method = access_method;
+ cbd->ntmp_req_hdr.table_id = table_id;
+ cbd->ntmp_req_hdr.hdr_ver = NTMP_HEADER_VERSION2;
+ cbd->ntmp_req_hdr.cci = 0;
+ cbd->ntmp_req_hdr.rr = 0; /* Must be set to 0 by SW. */
+ /* For NTMP version 2.0 or later version */
+ cbd->ntmp_req_hdr.npf = NTMP_REQ_HDR_NPF;
+}
+
+static inline int
+netc_get_free_cbd_num(struct netc_cbdr *cbdr)
+{
+ return (cbdr->next_to_clean - cbdr->next_to_use - 1 + cbdr->bd_num) %
+ cbdr->bd_num;
+}
+
+static inline union
+netc_cbd *netc_get_cbd(struct netc_cbdr *cbdr, int index)
+{
+ return &((union netc_cbd *)(cbdr->addr_base_align))[index];
+}
+
+static void
+netc_clean_cbdr(struct netc_cbdr *cbdr)
+{
+ union netc_cbd *cbd;
+ uint32_t i;
+
+ i = cbdr->next_to_clean;
+ while (netc_cbdr_read(cbdr->regs.cir) != i) {
+ cbd = netc_get_cbd(cbdr, i);
+ memset(cbd, 0, sizeof(*cbd));
+ dcbf(cbd);
+ i = (i + 1) % cbdr->bd_num;
+ }
+
+ cbdr->next_to_clean = i;
+}
+
+static int
+netc_xmit_ntmp_cmd(struct netc_cbdr *cbdr, union netc_cbd *cbd)
+{
+ union netc_cbd *ring_cbd;
+ uint32_t i, err = 0;
+ uint16_t status;
+ uint32_t timeout = cbdr->timeout;
+ uint32_t delay = cbdr->delay;
+
+ if (unlikely(!cbdr->addr_base))
+ return -EFAULT;
+
+ rte_spinlock_lock(&cbdr->ring_lock);
+
+ if (unlikely(!netc_get_free_cbd_num(cbdr)))
+ netc_clean_cbdr(cbdr);
+
+ i = cbdr->next_to_use;
+ ring_cbd = netc_get_cbd(cbdr, i);
+
+ /* Copy command BD to the ring */
+ *ring_cbd = *cbd;
+ /* Update producer index of both software and hardware */
+ i = (i + 1) % cbdr->bd_num;
+ dcbf(ring_cbd);
+ cbdr->next_to_use = i;
+ netc_cbdr_write(cbdr->regs.pir, i);
+ ENETC_PMD_DEBUG("Control msg sent PIR = %d, CIR = %d", netc_cbdr_read(cbdr->regs.pir),
+ netc_cbdr_read(cbdr->regs.cir));
+ do {
+ if (netc_cbdr_read(cbdr->regs.cir) == i) {
+ dccivac(ring_cbd);
+ ENETC_PMD_DEBUG("got response");
+ ENETC_PMD_DEBUG("Matched = %d, status = 0x%x",
+ ring_cbd->ntmp_resp_hdr.num_matched,
+ ring_cbd->ntmp_resp_hdr.error_rr);
+ break;
+ }
+ rte_delay_us(delay);
+ } while (timeout--);
+
+ if (timeout <= 0)
+ ENETC_PMD_ERR("no response of RSS configuration");
+
+ ENETC_PMD_DEBUG("CIR after receive = %d, SICBDRSR = 0x%x",
+ netc_cbdr_read(cbdr->regs.cir),
+ netc_cbdr_read(cbdr->regs.st));
+ /* Check the writeback error status */
+ status = ring_cbd->ntmp_resp_hdr.error_rr & NTMP_RESP_HDR_ERR;
+ if (unlikely(status)) {
+ ENETC_PMD_ERR("Command BD error: 0x%04x", status);
+ err = -EIO;
+ }
+
+ netc_clean_cbdr(cbdr);
+ rte_spinlock_unlock(&cbdr->ring_lock);
+
+ return err;
+}
+
+int
+ntmp_rsst_query_or_update_entry(struct netc_cbdr *cbdr, uint32_t *table,
+ int count, bool query)
+{
+ struct rsst_req_update *requ;
+ struct rsst_req_query *req;
+ union netc_cbd cbd;
+ uint32_t len, data_size;
+ dma_addr_t dma;
+ int err, i;
+ void *tmp;
+
+ if (count != ENETC_RSS_TABLE_ENTRY_NUM)
+ /* HW only takes in a full 64 entry table */
+ return -EINVAL;
+
+ if (query)
+ data_size = ENETC_NTMP_ENTRY_ID_SIZE + ENETC_RSS_STSE_DATA_SIZE(count) +
+ ENETC_RSS_CFGE_DATA_SIZE(count);
+ else
+ data_size = sizeof(*requ) + count * sizeof(uint8_t);
+
+ tmp = rte_malloc(NULL, data_size, ENETC_CBDR_ALIGN);
+ if (!tmp)
+ return -ENOMEM;
+
+ dma = rte_mem_virt2iova(tmp);
+ req = tmp;
+ /* Set the request data buffer */
+ if (query) {
+ len = NTMP_REQ_RESP_LEN(sizeof(*req), data_size);
+ ntmp_fill_request_headr(&cbd, dma, len, NTMP_RSST_ID,
+ NTMP_CMD_QUERY, NTMP_AM_ENTRY_ID);
+ } else {
+ requ = (struct rsst_req_update *)req;
+ requ->crd.update_act = (ENETC_RSS_CFGEU | ENETC_RSS_STSEU);
+ for (i = 0; i < count; i++)
+ requ->groups[i] = (uint8_t)(table[i]);
+
+ len = NTMP_REQ_RESP_LEN(data_size, 0);
+ ntmp_fill_request_headr(&cbd, dma, len, NTMP_RSST_ID,
+ NTMP_CMD_UPDATE, NTMP_AM_ENTRY_ID);
+ dcbf(requ);
+ }
+
+ err = netc_xmit_ntmp_cmd(cbdr, &cbd);
+ if (err) {
+ ENETC_PMD_ERR("%s RSS table entry failed (%d)!",
+ query ? "Query" : "Update", err);
+ goto end;
+ }
+
+ if (query) {
+ uint8_t *group = (uint8_t *)req;
+
+ group += ENETC_NTMP_ENTRY_ID_SIZE + ENETC_RSS_STSE_DATA_SIZE(count);
+ for (i = 0; i < count; i++)
+ table[i] = group[i];
+ }
+end:
+ rte_free(tmp);
+
+ return err;
+}
+
+static int
+netc_setup_cbdr(struct rte_eth_dev *dev, int cbd_num,
+ struct netc_cbdr_regs *regs,
+ struct netc_cbdr *cbdr)
+{
+ int size;
+
+ size = cbd_num * sizeof(union netc_cbd) +
+ NETC_CBDR_BASE_ADDR_ALIGN;
+
+ cbdr->addr_base = rte_malloc(NULL, size, ENETC_CBDR_ALIGN);
+ if (!cbdr->addr_base)
+ return -ENOMEM;
+
+ cbdr->dma_base = rte_mem_virt2iova(cbdr->addr_base);
+ cbdr->dma_size = size;
+ cbdr->bd_num = cbd_num;
+ cbdr->regs = *regs;
+ cbdr->dma_dev = dev;
+ cbdr->timeout = ENETC_CBDR_TIMEOUT;
+ cbdr->delay = ENETC_CBDR_DELAY;
+
+ if (getenv("ENETC4_CBDR_TIMEOUT"))
+ cbdr->timeout = atoi(getenv("ENETC4_CBDR_TIMEOUT"));
+
+ if (getenv("ENETC4_CBDR_DELAY"))
+ cbdr->delay = atoi(getenv("ENETC4_CBDR_DELAY"));
+
+
+ ENETC_PMD_DEBUG("CBDR timeout = %u and delay = %u", cbdr->timeout,
+ cbdr->delay);
+ /* The base address of the Control BD Ring must be 128 bytes aligned */
+ cbdr->dma_base_align = cbdr->dma_base;
+ cbdr->addr_base_align = cbdr->addr_base;
+
+ cbdr->next_to_clean = 0;
+ cbdr->next_to_use = 0;
+ rte_spinlock_init(&cbdr->ring_lock);
+
+ netc_cbdr_write(cbdr->regs.mr, ~((uint32_t)NETC_CBDRMR_EN));
+ /* Step 1: Configure the base address of the Control BD Ring */
+ netc_cbdr_write(cbdr->regs.bar0, lower_32_bits(cbdr->dma_base_align));
+ netc_cbdr_write(cbdr->regs.bar1, upper_32_bits(cbdr->dma_base_align));
+
+ /* Step 2: Configure the producer index register */
+ netc_cbdr_write(cbdr->regs.pir, cbdr->next_to_clean);
+
+ /* Step 3: Configure the consumer index register */
+ netc_cbdr_write(cbdr->regs.cir, cbdr->next_to_use);
+ /* Step4: Configure the number of BDs of the Control BD Ring */
+ netc_cbdr_write(cbdr->regs.lenr, cbdr->bd_num);
+
+ /* Step 5: Enable the Control BD Ring */
+ netc_cbdr_write(cbdr->regs.mr, NETC_CBDRMR_EN);
+
+ return 0;
+}
+
+void
+netc_free_cbdr(struct netc_cbdr *cbdr)
+{
+ /* Disable the Control BD Ring */
+ if (cbdr->regs.mr != NULL) {
+ netc_cbdr_write(cbdr->regs.mr, 0);
+ rte_free(cbdr->addr_base);
+ memset(cbdr, 0, sizeof(*cbdr));
+ }
+}
+
+int
+enetc4_setup_cbdr(struct rte_eth_dev *dev, struct enetc_hw *hw,
+ int bd_count, struct netc_cbdr *cbdr)
+{
+ struct netc_cbdr_regs regs;
+
+ regs.pir = (void *)((size_t)hw->reg + ENETC4_SICBDRPIR);
+ regs.cir = (void *)((size_t)hw->reg + ENETC4_SICBDRCIR);
+ regs.mr = (void *)((size_t)hw->reg + ENETC4_SICBDRMR);
+ regs.st = (void *)((size_t)hw->reg + ENETC4_SICBDRSR);
+ regs.bar0 = (void *)((size_t)hw->reg + ENETC4_SICBDRBAR0);
+ regs.bar1 = (void *)((size_t)hw->reg + ENETC4_SICBDRBAR1);
+ regs.lenr = (void *)((size_t)hw->reg + ENETC4_SICBDRLENR);
+ regs.sictr0 = (void *)((size_t)hw->reg + ENETC4_SICTR0);
+ regs.sictr1 = (void *)((size_t)hw->reg + ENETC4_SICTR1);
+
+ return netc_setup_cbdr(dev, bd_count, &regs, cbdr);
+}
diff --git a/drivers/net/enetc/meson.build b/drivers/net/enetc/meson.build
index 6e00758a36..fe8fdc07f3 100644
--- a/drivers/net/enetc/meson.build
+++ b/drivers/net/enetc/meson.build
@@ -8,10 +8,11 @@ endif
deps += ['common_dpaax']
sources = files(
- 'enetc4_ethdev.c',
- 'enetc4_vf.c',
+ 'enetc4_ethdev.c',
+ 'enetc4_vf.c',
'enetc_ethdev.c',
'enetc_rxtx.c',
+ 'enetc_cbdr.c',
)
includes += include_directories('base')
diff --git a/drivers/net/enetc/ntmp.h b/drivers/net/enetc/ntmp.h
new file mode 100644
index 0000000000..0dbc006f26
--- /dev/null
+++ b/drivers/net/enetc/ntmp.h
@@ -0,0 +1,110 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2024 NXP
+ */
+
+#ifndef ENETC_NTMP_H
+#define ENETC_NTMP_H
+
+#include "compat.h"
+#include <linux/types.h>
+
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
+#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+
+/* define NTMP Operation Commands */
+#define NTMP_CMD_DELETE BIT(0)
+#define NTMP_CMD_UPDATE BIT(1)
+#define NTMP_CMD_QUERY BIT(2)
+
+#define NETC_CBDR_TIMEOUT 1000 /* us */
+#define NETC_CBDR_BD_NUM 256
+#define NETC_CBDR_BASE_ADDR_ALIGN 128
+#define NETC_CBD_DATA_ADDR_ALIGN 16
+#define NETC_CBDRMR_EN BIT(31)
+
+#define NTMP_RESP_HDR_ERR GENMASK(11, 0)
+
+struct common_req_data {
+ uint16_t update_act;
+ uint8_t dbg_opt;
+ uint8_t query_act:4;
+ uint8_t tbl_ver:4;
+};
+
+/* RSS Table Request and Response Data Buffer Format */
+struct rsst_req_query {
+ struct common_req_data crd;
+ uint32_t entry_id;
+};
+
+/* struct for update operation */
+struct rsst_req_update {
+ struct common_req_data crd;
+ uint32_t entry_id;
+ uint8_t groups[];
+};
+
+/* The format of a control buffer descriptor */
+union netc_cbd {
+ struct {
+ uint64_t addr;
+ uint32_t len;
+ uint8_t cmd;
+ uint8_t resv1:4;
+ uint8_t access_method:4;
+ uint8_t table_id;
+ uint8_t hdr_ver:6;
+ uint8_t cci:1;
+ uint8_t rr:1;
+ uint32_t resv2[3];
+ uint32_t npf;
+ } ntmp_req_hdr; /* NTMP Request Message Header Format */
+
+ struct {
+ uint32_t resv1[3];
+ uint16_t num_matched;
+ uint16_t error_rr; /* bit0~11: error, bit12~14: reserved, bit15: rr */
+ uint32_t resv3[4];
+ } ntmp_resp_hdr; /* NTMP Response Message Header Format */
+};
+
+struct netc_cbdr_regs {
+ void *pir;
+ void *cir;
+ void *mr;
+ void *st;
+
+ void *bar0;
+ void *bar1;
+ void *lenr;
+
+ /* station interface current time register */
+ void *sictr0;
+ void *sictr1;
+};
+
+struct netc_cbdr {
+ struct netc_cbdr_regs regs;
+
+ int bd_num;
+ int next_to_use;
+ int next_to_clean;
+
+ int dma_size;
+ void *addr_base;
+ void *addr_base_align;
+ dma_addr_t dma_base;
+ dma_addr_t dma_base_align;
+ struct rte_eth_dev *dma_dev;
+
+ rte_spinlock_t ring_lock; /* Avoid race condition */
+
+ /* bitmap of used words of SGCL table */
+ unsigned long *sgclt_used_words;
+ uint32_t sgclt_words_num;
+ uint32_t timeout;
+ uint32_t delay;
+};
+
+#endif /* ENETC_NTMP_H */
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
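On the application side, using the multi-queue support added above only requires selecting RSS in the port configuration and setting up one RX/TX queue pair per worker; the PMD then programs the indirection table as shown in the diff. A configuration sketch, where the queue count, ring size and NULL (default) RSS key are illustrative assumptions:

#include <rte_ethdev.h>
#include <rte_mempool.h>

#define NB_QUEUES 4
#define RING_SIZE 512

/* Configure NB_QUEUES RX/TX queue pairs with RSS spreading RX traffic. */
static int
setup_rss_port(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = {0};
	uint16_t q;
	int ret;

	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_key = NULL;	/* default key */
	conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
					   RTE_ETH_RSS_UDP |
					   RTE_ETH_RSS_TCP;

	ret = rte_eth_dev_configure(port_id, NB_QUEUES, NB_QUEUES, &conf);
	if (ret != 0)
		return ret;

	for (q = 0; q < NB_QUEUES; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, RING_SIZE,
					     rte_eth_dev_socket_id(port_id),
					     NULL, mb_pool);
		if (ret != 0)
			return ret;

		ret = rte_eth_tx_queue_setup(port_id, q, RING_SIZE,
					     rte_eth_dev_socket_id(port_id),
					     NULL);
		if (ret != 0)
			return ret;
	}

	return rte_eth_dev_start(port_id);
}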
* [v2 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (6 preceding siblings ...)
2024-10-23 6:24 ` [v2 07/12] net/enetc: Add support for multiple queues with RSS vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 09/12] net/enetc: Add multicast and promiscuous mode support vanshika.shukla
` (5 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces Virtual Function (VF) to Physical Function (PF) messaging,
enabling VFs to communicate with the Linux PF driver to have features
enabled on their behalf.
This patch also adds primary MAC address setup, allowing a VF to
configure its own MAC address.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/enetc/base/enetc4_hw.h | 22 +++
drivers/net/enetc/enetc.h | 99 +++++++++++
drivers/net/enetc/enetc4_vf.c | 260 +++++++++++++++++++++++++++++
3 files changed, 381 insertions(+)
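From an application's point of view, the primary MAC setup added here is expected to be reached through the standard ethdev MAC address call, which the VF turns into a VSI-to-PSI mailbox message for the PF to act on. A minimal sketch, assuming the new handler is wired to the .mac_addr_set op (port id and address are illustrative):

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Request a new primary MAC; on a VF this ends up as a mailbox message
 * to the PF rather than a direct register write.
 */
static int
set_vf_mac(uint16_t port_id)
{
	struct rte_ether_addr addr = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
	};

	return rte_eth_dev_default_mac_addr_set(port_id, &addr);
}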
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 49446f2cb4..f0b7563d22 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -14,6 +14,12 @@
#define ENETC4_DEV_ID_VF 0xef00
#define PCI_VENDOR_ID_NXP 0x1131
+struct enetc_msg_swbd {
+ void *vaddr;
+ uint64_t dma;
+ int size;
+};
+
/* enetc4 txbd flags */
#define ENETC4_TXBD_FLAGS_L4CS BIT(0)
#define ENETC4_TXBD_FLAGS_L_TX_CKSUM BIT(3)
@@ -103,6 +109,9 @@
#define IFMODE_SGMII 5
#define PM_IF_MODE_ENA BIT(15)
+#define ENETC4_DEF_VSI_WAIT_TIMEOUT_UPDATE 100
+#define ENETC4_DEF_VSI_WAIT_DELAY_UPDATE 2000 /* us */
+
/* Station interface statistics */
#define ENETC4_SIROCT0 0x300
#define ENETC4_SIRFRM0 0x308
@@ -110,6 +119,19 @@
#define ENETC4_SITFRM0 0x328
#define ENETC4_SITDFCR 0x340
+/* VSI MSG Registers */
+#define ENETC4_VSIMSGSR 0x204 /* RO */
+#define ENETC4_VSIMSGSR_MB BIT(0)
+#define ENETC4_VSIMSGSR_MS BIT(1)
+#define ENETC4_VSIMSGSNDAR0 0x210
+#define ENETC4_VSIMSGSNDAR1 0x214
+
+#define ENETC4_VSIMSGRR 0x208
+#define ENETC4_VSIMSGRR_MR BIT(0)
+
+#define ENETC_SIMSGSR_SET_MC(val) ((val) << 16)
+#define ENETC_SIMSGSR_GET_MC(val) ((val) >> 16)
+
/* Control BDR regs */
#define ENETC4_SICBDRMR 0x800
#define ENETC4_SICBDRSR 0x804 /* RO */
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 354cd761d7..c0fba9d618 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -41,6 +41,11 @@
/* eth name size */
#define ENETC_ETH_NAMESIZE 20
+#define ENETC_DEFAULT_MSG_SIZE 1024 /* max size */
+
+/* Message length is in multiple of 32 bytes */
+#define ENETC_VSI_PSI_MSG_SIZE 32
+
/* size for marking hugepage non-cacheable */
#define SIZE_2MB 0x200000
@@ -123,6 +128,100 @@ struct enetc_eth_adapter {
#define ENETC_DEV_PRIVATE_TO_INTR(adapter) \
(&((struct enetc_eth_adapter *)adapter)->intr)
+/* Class ID for PSI-TO-VSI messages */
+#define ENETC_MSG_CLASS_ID_CMD_SUCCESS 0x1
+#define ENETC_MSG_CLASS_ID_PERMISSION_DENY 0x2
+#define ENETC_MSG_CLASS_ID_CMD_NOT_SUPPORT 0x3
+#define ENETC_MSG_CLASS_ID_PSI_BUSY 0x4
+#define ENETC_MSG_CLASS_ID_CRC_ERROR 0x5
+#define ENETC_MSG_CLASS_ID_PROTO_NOT_SUPPORT 0x6
+#define ENETC_MSG_CLASS_ID_INVALID_MSG_LEN 0x7
+#define ENETC_MSG_CLASS_ID_CMD_TIMEOUT 0x8
+#define ENETC_MSG_CLASS_ID_CMD_DEFERED 0xf
+
+#define ENETC_PROMISC_DISABLE 0x41
+#define ENETC_PROMISC_ENABLE 0x43
+#define ENETC_ALLMULTI_PROMISC_DIS 0x81
+#define ENETC_ALLMULTI_PROMISC_EN 0x83
+
+
+/* Enum for class IDs */
+enum enetc_msg_cmd_class_id {
+ ENETC_CLASS_ID_MAC_FILTER = 0x20,
+};
+
+/* Enum for command IDs */
+enum enetc_msg_cmd_id {
+ ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
+};
+
+enum mac_addr_status {
+ ENETC_INVALID_MAC_ADDR = 0x0,
+ ENETC_DUPLICATE_MAC_ADDR = 0X1,
+ ENETC_MAC_ADDR_NOT_FOUND = 0X2,
+};
+
+/* PSI-VSI command header format */
+struct enetc_msg_cmd_header {
+ uint16_t csum; /* INET_CHECKSUM */
+ uint8_t class_id; /* Command class type */
+ uint8_t cmd_id; /* Denotes the specific required action */
+ uint8_t proto_ver; /* Supported VSI-PSI command protocol version */
+ uint8_t len; /* Extended message body length */
+ uint8_t reserved_1;
+ uint8_t cookie; /* Control command execution asynchronously on PSI side */
+ uint64_t reserved_2;
+};
+
+/* VF-PF set primary MAC address message format */
+struct enetc_msg_cmd_set_primary_mac {
+ struct enetc_msg_cmd_header header;
+ uint8_t count; /* number of MAC addresses */
+ uint8_t reserved_1;
+ uint16_t reserved_2;
+ struct rte_ether_addr addr;
+};
+
+struct enetc_msg_cmd_set_promisc {
+ struct enetc_msg_cmd_header header;
+ uint8_t op_type;
+};
+
+struct enetc_msg_cmd_get_link_status {
+ struct enetc_msg_cmd_header header;
+};
+
+struct enetc_msg_cmd_get_link_speed {
+ struct enetc_msg_cmd_header header;
+};
+
+struct enetc_msg_cmd_set_vlan_promisc {
+ struct enetc_msg_cmd_header header;
+ uint8_t op;
+ uint8_t reserved;
+};
+
+struct enetc_msg_vlan_exact_filter {
+ struct enetc_msg_cmd_header header;
+ uint8_t vlan_count;
+ uint8_t reserved_1;
+ uint16_t reserved_2;
+ uint16_t vlan_id;
+ uint8_t tpid;
+ uint8_t reserved2;
+};
+
+struct enetc_psi_reply_msg {
+ uint8_t class_id;
+ uint8_t status;
+};
+
+/* msg size encoding: default and max msg value of 1024B encoded as 0 */
+static inline uint32_t enetc_vsi_set_msize(uint32_t size)
+{
+ return size < ENETC_DEFAULT_MSG_SIZE ? size >> 5 : 0;
+}
+
/*
* ENETC4 function prototypes
*/
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index a9fb33c432..6bdd476f0a 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -8,6 +8,51 @@
#include "enetc_logs.h"
#include "enetc.h"
+#define ENETC_CRC_TABLE_SIZE 256
+#define ENETC_POLY 0x1021
+#define ENETC_CRC_INIT 0xffff
+#define ENETC_BYTE_SIZE 8
+#define ENETC_MSB_BIT 0x8000
+
+uint16_t enetc_crc_table[ENETC_CRC_TABLE_SIZE];
+bool enetc_crc_gen;
+
+static void
+enetc_gen_crc_table(void)
+{
+ uint16_t crc = 0;
+ uint16_t c;
+
+ for (int i = 0; i < ENETC_CRC_TABLE_SIZE; i++) {
+ crc = 0;
+ c = i << ENETC_BYTE_SIZE;
+ for (int j = 0; j < ENETC_BYTE_SIZE; j++) {
+ if ((crc ^ c) & ENETC_MSB_BIT)
+ crc = (crc << 1) ^ ENETC_POLY;
+ else
+ crc = crc << 1;
+ c = c << 1;
+ }
+
+ enetc_crc_table[i] = crc;
+ }
+
+ enetc_crc_gen = true;
+}
+
+static uint16_t
+enetc_crc_calc(uint16_t crc, const uint8_t *buffer, size_t len)
+{
+ uint8_t data;
+
+ while (len--) {
+ data = *buffer;
+ crc = (crc << 8) ^ enetc_crc_table[((crc >> 8) ^ data) & 0xff];
+ buffer++;
+ }
+ return crc;
+}
+
int
enetc4_vf_dev_stop(struct rte_eth_dev *dev __rte_unused)
{
@@ -47,6 +92,217 @@ enetc4_vf_stats_get(struct rte_eth_dev *dev,
return 0;
}
+
+static void
+enetc_msg_vf_fill_common_hdr(struct enetc_msg_swbd *msg,
+ uint8_t class_id, uint8_t cmd_id, uint8_t proto_ver,
+ uint8_t len, uint8_t cookie)
+{
+ struct enetc_msg_cmd_header *hdr = msg->vaddr;
+
+ hdr->class_id = class_id;
+ hdr->cmd_id = cmd_id;
+ hdr->proto_ver = proto_ver;
+ hdr->len = len;
+ hdr->cookie = cookie;
+ /* Incrementing msg 2 bytes ahead as the first two bytes are for CRC */
+ hdr->csum = rte_cpu_to_be_16(enetc_crc_calc(ENETC_CRC_INIT,
+ (uint8_t *)msg->vaddr + sizeof(uint16_t),
+ msg->size - sizeof(uint16_t)));
+
+ dcbf(hdr);
+}
+
+/* Messaging */
+static void
+enetc4_msg_vsi_write_msg(struct enetc_hw *hw,
+ struct enetc_msg_swbd *msg)
+{
+ uint32_t val;
+
+ val = enetc_vsi_set_msize(msg->size) | lower_32_bits(msg->dma);
+ enetc_wr(hw, ENETC4_VSIMSGSNDAR1, upper_32_bits(msg->dma));
+ enetc_wr(hw, ENETC4_VSIMSGSNDAR0, val);
+}
+
+static void
+enetc4_msg_vsi_reply_msg(struct enetc_hw *enetc_hw, struct enetc_psi_reply_msg *reply_msg)
+{
+ int vsimsgsr;
+ int8_t class_id = 0;
+ uint8_t status = 0;
+
+ vsimsgsr = enetc_rd(enetc_hw, ENETC4_VSIMSGSR);
+
+ /* Extracting 8 bits of message result in class_id */
+ class_id |= ((ENETC_SIMSGSR_GET_MC(vsimsgsr) >> 8) & 0xff);
+
+ /* Extracting 4 bits of message result in status */
+ status |= ((ENETC_SIMSGSR_GET_MC(vsimsgsr) >> 4) & 0xf);
+
+ reply_msg->class_id = class_id;
+ reply_msg->status = status;
+}
+
+static int
+enetc4_msg_vsi_send(struct enetc_hw *enetc_hw, struct enetc_msg_swbd *msg)
+{
+ int timeout = ENETC4_DEF_VSI_WAIT_TIMEOUT_UPDATE;
+ int delay_us = ENETC4_DEF_VSI_WAIT_DELAY_UPDATE;
+ uint8_t class_id = 0;
+ int err = 0;
+ int vsimsgsr;
+
+ enetc4_msg_vsi_write_msg(enetc_hw, msg);
+
+ do {
+ vsimsgsr = enetc_rd(enetc_hw, ENETC4_VSIMSGSR);
+ if (!(vsimsgsr & ENETC4_VSIMSGSR_MB))
+ break;
+ rte_delay_us(delay_us);
+ } while (--timeout);
+
+ if (!timeout) {
+ ENETC_PMD_ERR("Message not processed by PSI");
+ return -ETIMEDOUT;
+ }
+ /* check for message delivery error */
+ if (vsimsgsr & ENETC4_VSIMSGSR_MS) {
+ ENETC_PMD_ERR("Transfer error when copying the data");
+ return -EIO;
+ }
+
+ class_id |= ((ENETC_SIMSGSR_GET_MC(vsimsgsr) >> 8) & 0xff);
+
+ /* Check the user-defined completion status. */
+ if (class_id != ENETC_MSG_CLASS_ID_CMD_SUCCESS) {
+ switch (class_id) {
+ case ENETC_MSG_CLASS_ID_PERMISSION_DENY:
+ ENETC_PMD_ERR("Permission denied");
+ err = -EACCES;
+ break;
+ case ENETC_MSG_CLASS_ID_CMD_NOT_SUPPORT:
+ ENETC_PMD_ERR("Command not supported");
+ err = -EOPNOTSUPP;
+ break;
+ case ENETC_MSG_CLASS_ID_PSI_BUSY:
+ ENETC_PMD_ERR("PSI Busy");
+ err = -EBUSY;
+ break;
+ case ENETC_MSG_CLASS_ID_CMD_TIMEOUT:
+ ENETC_PMD_ERR("Command timeout");
+ err = -ETIME;
+ break;
+ case ENETC_MSG_CLASS_ID_CRC_ERROR:
+ ENETC_PMD_ERR("CRC error");
+ err = -EIO;
+ break;
+ case ENETC_MSG_CLASS_ID_PROTO_NOT_SUPPORT:
+ ENETC_PMD_ERR("Protocol Version not supported");
+ err = -EOPNOTSUPP;
+ break;
+ case ENETC_MSG_CLASS_ID_INVALID_MSG_LEN:
+ ENETC_PMD_ERR("Invalid message length");
+ err = -EINVAL;
+ break;
+ case ENETC_CLASS_ID_MAC_FILTER:
+ break;
+ default:
+ err = -EIO;
+ }
+ }
+
+ return err;
+}
+
+static int
+enetc4_vf_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_primary_mac *cmd;
+ struct enetc_msg_swbd *msg;
+ struct enetc_psi_reply_msg *reply_msg;
+ int msg_size;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ rte_free(reply_msg);
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_primary_mac),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ rte_free(reply_msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_primary_mac *)msg->vaddr;
+
+ cmd->count = 0;
+ memcpy(&cmd->addr.addr_bytes, addr, sizeof(struct rte_ether_addr));
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_CMD_ID_SET_PRIMARY_MAC, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_MAC_FILTER) {
+ switch (reply_msg->status) {
+ case ENETC_INVALID_MAC_ADDR:
+ ENETC_PMD_ERR("Invalid MAC address");
+ err = -EINVAL;
+ break;
+ case ENETC_DUPLICATE_MAC_ADDR:
+ ENETC_PMD_ERR("Duplicate MAC address");
+ err = -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (err) {
+ ENETC_PMD_ERR("VSI command execute error!");
+ goto end;
+ }
+
+ rte_ether_addr_copy((struct rte_ether_addr *)&cmd->addr,
+ &dev->data->mac_addrs[0]);
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(reply_msg);
+ rte_free(msg);
+ return err;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -63,6 +319,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_vf_stats_get,
+ .mac_addr_set = enetc4_vf_set_mac_addr,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
@@ -121,6 +378,9 @@ enetc4_vf_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
+ if (!enetc_crc_gen)
+ enetc_gen_crc_table();
+
/* Copy the permanent MAC address */
rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
ð_dev->data->mac_addrs[0]);
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 09/12] net/enetc: Add multicast and promiscuous mode support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (7 preceding siblings ...)
2024-10-23 6:24 ` [v2 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 10/12] net/enetc: Add link speed and status support vanshika.shukla
` (4 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Enables the ENETC4 PMD to handle multicast and promiscuous modes.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 +
drivers/net/enetc/enetc.h | 5 +
drivers/net/enetc/enetc4_ethdev.c | 40 +++++
drivers/net/enetc/enetc4_vf.c | 265 ++++++++++++++++++++++++++++
4 files changed, 312 insertions(+)
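For reference, a minimal sketch (not part of this patch) of how an application reaches the new dev_ops through the standard ethdev API; port 0 is assumed and error handling is trimmed:

#include <rte_ethdev.h>

static int
enable_promisc_modes(uint16_t port_id)
{
	int ret;

	/* maps to .promiscuous_enable (PSIPMMR on the PF, PSI mailbox message on the VF) */
	ret = rte_eth_promiscuous_enable(port_id);
	if (ret != 0)
		return ret;

	/* maps to .allmulticast_enable */
	return rte_eth_allmulticast_enable(port_id);
}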
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 79430d0018..36d536d1f2 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Allmulticast mode = Y
RSS hash = Y
Packet type parsing = Y
Basic stats = Y
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index c0fba9d618..902912f4fb 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -144,15 +144,20 @@ struct enetc_eth_adapter {
#define ENETC_ALLMULTI_PROMISC_DIS 0x81
#define ENETC_ALLMULTI_PROMISC_EN 0x83
+#define ENETC_PROMISC_VLAN_DISABLE 0x1
+#define ENETC_PROMISC_VLAN_ENABLE 0x3
/* Enum for class IDs */
enum enetc_msg_cmd_class_id {
ENETC_CLASS_ID_MAC_FILTER = 0x20,
+ ENETC_CLASS_ID_VLAN_FILTER = 0x21,
};
/* Enum for command IDs */
enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
+ ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
+ ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
};
enum mac_addr_status {
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index a09744e277..5d8dd2760a 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -581,6 +581,44 @@ enetc4_dev_close(struct rte_eth_dev *dev)
return ret;
}
+static int
+enetc4_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t psipmr = 0;
+
+ psipmr = enetc4_port_rd(enetc_hw, ENETC4_PSIPMMR);
+
+ /* Setting to enable promiscuous mode for all ports */
+ psipmr |= PSIPMMR_SI_MAC_UP | PSIPMMR_SI_MAC_MP;
+
+ enetc4_port_wr(enetc_hw, ENETC4_PSIPMMR, psipmr);
+
+ return 0;
+}
+
+static int
+enetc4_promiscuous_disable(struct rte_eth_dev *dev)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t psipmr = 0;
+
+ /* Setting to disable promiscuous mode for SI0 */
+ psipmr = enetc4_port_rd(enetc_hw, ENETC4_PSIPMMR);
+ psipmr &= (~PSIPMMR_SI_MAC_UP);
+
+ if (dev->data->all_multicast == 0)
+ psipmr &= (~PSIPMMR_SI_MAC_MP);
+
+ enetc4_port_wr(enetc_hw, ENETC4_PSIPMMR, psipmr);
+
+ return 0;
+}
+
int
enetc4_dev_configure(struct rte_eth_dev *dev)
{
@@ -820,6 +858,8 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_stats_get,
.stats_reset = enetc4_stats_reset,
+ .promiscuous_enable = enetc4_promiscuous_enable,
+ .promiscuous_disable = enetc4_promiscuous_disable,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 6bdd476f0a..28cf83077c 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -303,6 +303,266 @@ enetc4_vf_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
return err;
}
+static int
+enetc4_vf_promisc_send_message(struct rte_eth_dev *dev, bool promisc_en)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_promisc *cmd;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_promisc), ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_promisc *)msg->vaddr;
+
+ /* op_type is based on the result of message format
+ *    7      6      1        0
+ *    type          promisc  flush
+ */
+
+ if (promisc_en)
+ cmd->op_type = ENETC_PROMISC_ENABLE;
+ else
+ cmd->op_type = ENETC_PROMISC_DISABLE;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_CMD_ID_SET_MAC_PROMISCUOUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int
+enetc4_vf_allmulti_send_message(struct rte_eth_dev *dev, bool mc_promisc)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_promisc *cmd;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_promisc),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_promisc *)msg->vaddr;
+
+ /* op_type is based on the result of message format
+ *    7      6      1        0
+ *    type          promisc  flush
+ */
+
+ if (mc_promisc)
+ cmd->op_type = ENETC_ALLMULTI_PROMISC_EN;
+ else
+ cmd->op_type = ENETC_ALLMULTI_PROMISC_DIS;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_CMD_ID_SET_MAC_PROMISCUOUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+
+static int
+enetc4_vf_multicast_enable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_allmulti_send_message(dev, true);
+ if (err) {
+ ENETC_PMD_ERR("Failed to enable multicast promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_multicast_disable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_allmulti_send_message(dev, false);
+ if (err) {
+ ENETC_PMD_ERR("Failed to disable multicast promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_promisc_enable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_promisc_send_message(dev, true);
+ if (err) {
+ ENETC_PMD_ERR("Failed to enable promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_promisc_disable(struct rte_eth_dev *dev)
+{
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ err = enetc4_vf_promisc_send_message(dev, false);
+ if (err) {
+ ENETC_PMD_ERR("Failed to disable promiscuous mode");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+enetc4_vf_vlan_promisc(struct rte_eth_dev *dev, bool promisc_en)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_vlan_promisc *cmd;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_vlan_promisc),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ cmd = (struct enetc_msg_cmd_set_vlan_promisc *)msg->vaddr;
+ /* op is based on the result of message format
+ * 1 0
+ * promisc flush
+ */
+
+ if (promisc_en)
+ cmd->op = ENETC_PROMISC_VLAN_ENABLE;
+ else
+ cmd->op = ENETC_PROMISC_VLAN_DISABLE;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_VLAN_FILTER,
+ ENETC_CMD_ID_SET_VLAN_PROMISCUOUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_unused)
+{
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->dev_conf.rxmode.offloads) {
+ ENETC_PMD_DEBUG("VLAN filter table entry inserted:"
+ "Disabling VLAN promisc mode");
+ err = enetc4_vf_vlan_promisc(dev, false);
+ if (err) {
+ ENETC_PMD_ERR("Added VLAN filter table entry:"
+ "Failed to disable promiscuous mode");
+ return err;
+ }
+ } else {
+ ENETC_PMD_DEBUG("Enabling VLAN promisc mode");
+ err = enetc4_vf_vlan_promisc(dev, true);
+ if (err) {
+ ENETC_PMD_ERR("Vlan filter table empty:"
+ "Failed to enable promiscuous mode");
+ return err;
+ }
+ }
+
+ return 0;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -320,6 +580,11 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_vf_stats_get,
.mac_addr_set = enetc4_vf_set_mac_addr,
+ .promiscuous_enable = enetc4_vf_promisc_enable,
+ .promiscuous_disable = enetc4_vf_promisc_disable,
+ .allmulticast_enable = enetc4_vf_multicast_enable,
+ .allmulticast_disable = enetc4_vf_multicast_disable,
+ .vlan_offload_set = enetc4_vf_vlan_offload_set,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
.rx_queue_stop = enetc4_rx_queue_stop,
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 10/12] net/enetc: Add link speed and status support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (8 preceding siblings ...)
2024-10-23 6:24 ` [v2 09/12] net/enetc: Add multicast and promiscuous mode support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 11/12] net/enetc: Add link status notification support vanshika.shukla
` (3 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the link update operation.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 +
drivers/net/enetc/base/enetc4_hw.h | 9 ++
drivers/net/enetc/enetc.h | 25 ++++
drivers/net/enetc/enetc4_ethdev.c | 44 ++++++
drivers/net/enetc/enetc4_vf.c | 216 ++++++++++++++++++++++++++++
5 files changed, 296 insertions(+)
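For reference, a minimal sketch (not part of this patch) showing how the new .link_update hook is exercised from an application; port 0 is assumed:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* invokes the PMD's .link_update (PM_IF_STATUS on the PF, PSI mailbox on the VF) */
	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	printf("link %s, %u Mbps, %s duplex\n",
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? "full" : "half");
}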
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 36d536d1f2..78b06e9841 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Link status = Y
Promiscuous mode = Y
Allmulticast mode = Y
RSS hash = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index f0b7563d22..d899b82b9c 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -109,6 +109,15 @@ struct enetc_msg_swbd {
#define IFMODE_SGMII 5
#define PM_IF_MODE_ENA BIT(15)
+/* Port MAC 0 Interface Status Register */
+#define ENETC4_PM_IF_STATUS(mac) (0x5304 + (mac) * 0x400)
+#define ENETC4_LINK_MODE 0x0000000000080000ULL
+#define ENETC4_LINK_STATUS 0x0000000000010000ULL
+#define ENETC4_LINK_SPEED_MASK 0x0000000000060000ULL
+#define ENETC4_LINK_SPEED_10M 0x0ULL
+#define ENETC4_LINK_SPEED_100M 0x0000000000020000ULL
+#define ENETC4_LINK_SPEED_1G 0x0000000000040000ULL
+
#define ENETC4_DEF_VSI_WAIT_TIMEOUT_UPDATE 100
#define ENETC4_DEF_VSI_WAIT_DELAY_UPDATE 2000 /* us */
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 902912f4fb..7f5329de33 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -151,6 +151,8 @@ struct enetc_eth_adapter {
enum enetc_msg_cmd_class_id {
ENETC_CLASS_ID_MAC_FILTER = 0x20,
ENETC_CLASS_ID_VLAN_FILTER = 0x21,
+ ENETC_CLASS_ID_LINK_STATUS = 0x80,
+ ENETC_CLASS_ID_LINK_SPEED = 0x81
};
/* Enum for command IDs */
@@ -158,6 +160,8 @@ enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
+ ENETC_CMD_ID_GET_LINK_STATUS = 0,
+ ENETC_CMD_ID_GET_LINK_SPEED = 0
};
enum mac_addr_status {
@@ -166,6 +170,27 @@ enum mac_addr_status {
ENETC_MAC_ADDR_NOT_FOUND = 0X2,
};
+enum link_status {
+ ENETC_LINK_UP = 0x0,
+ ENETC_LINK_DOWN = 0x1
+};
+
+enum speed {
+ ENETC_SPEED_UNKNOWN = 0x0,
+ ENETC_SPEED_10_HALF_DUPLEX = 0x1,
+ ENETC_SPEED_10_FULL_DUPLEX = 0x2,
+ ENETC_SPEED_100_HALF_DUPLEX = 0x3,
+ ENETC_SPEED_100_FULL_DUPLEX = 0x4,
+ ENETC_SPEED_1000 = 0x5,
+ ENETC_SPEED_2500 = 0x6,
+ ENETC_SPEED_5000 = 0x7,
+ ENETC_SPEED_10G = 0x8,
+ ENETC_SPEED_25G = 0x9,
+ ENETC_SPEED_50G = 0xA,
+ ENETC_SPEED_100G = 0xB,
+ ENETC_SPEED_NOT_SUPPORTED = 0xF
+};
+
/* PSI-VSI command header format */
struct enetc_msg_cmd_header {
uint16_t csum; /* INET_CHECKSUM */
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 5d8dd2760a..08580420bf 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -75,6 +75,49 @@ enetc4_dev_stop(struct rte_eth_dev *dev)
return 0;
}
+/* return 0 means link status changed, -1 means not changed */
+static int
+enetc4_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct rte_eth_link link;
+ uint32_t status;
+
+ PMD_INIT_FUNC_TRACE();
+
+ memset(&link, 0, sizeof(link));
+
+ status = enetc4_port_rd(enetc_hw, ENETC4_PM_IF_STATUS(0));
+
+ if (status & ENETC4_LINK_MODE)
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ else
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+
+ if (status & ENETC4_LINK_STATUS)
+ link.link_status = RTE_ETH_LINK_UP;
+ else
+ link.link_status = RTE_ETH_LINK_DOWN;
+
+ switch (status & ENETC4_LINK_SPEED_MASK) {
+ case ENETC4_LINK_SPEED_1G:
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
+ break;
+
+ case ENETC4_LINK_SPEED_100M:
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ break;
+
+ default:
+ case ENETC4_LINK_SPEED_10M:
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
+ }
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
static int
enetc4_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
{
@@ -856,6 +899,7 @@ static const struct eth_dev_ops enetc4_ops = {
.dev_stop = enetc4_dev_stop,
.dev_close = enetc4_dev_close,
.dev_infos_get = enetc4_dev_infos_get,
+ .link_update = enetc4_link_update,
.stats_get = enetc4_stats_get,
.stats_reset = enetc4_stats_reset,
.promiscuous_enable = enetc4_promiscuous_enable,
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 28cf83077c..307fabf2c6 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -206,6 +206,8 @@ enetc4_msg_vsi_send(struct enetc_hw *enetc_hw, struct enetc_msg_swbd *msg)
err = -EINVAL;
break;
case ENETC_CLASS_ID_MAC_FILTER:
+ case ENETC_CLASS_ID_LINK_STATUS:
+ case ENETC_CLASS_ID_LINK_SPEED:
break;
default:
err = -EIO;
@@ -479,6 +481,216 @@ enetc4_vf_promisc_disable(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetc4_vf_get_link_status(struct rte_eth_dev *dev, struct enetc_psi_reply_msg *reply_msg)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_get_link_status),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_LINK_STATUS,
+ ENETC_CMD_ID_GET_LINK_STATUS, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int
+enetc4_vf_get_link_speed(struct rte_eth_dev *dev, struct enetc_psi_reply_msg *reply_msg)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_swbd *msg;
+ int msg_size;
+ int err = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_get_link_speed),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_LINK_SPEED,
+ ENETC_CMD_ID_GET_LINK_SPEED, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+ return err;
+}
+
+static int
+enetc4_vf_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+{
+ struct enetc_psi_reply_msg *reply_msg;
+ struct rte_eth_link link;
+ int err;
+
+ PMD_INIT_FUNC_TRACE();
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ err = enetc4_vf_get_link_status(dev, reply_msg);
+ if (err) {
+ ENETC_PMD_ERR("Failed to get link status");
+ rte_free(reply_msg);
+ return err;
+ }
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_LINK_STATUS) {
+ switch (reply_msg->status) {
+ case ENETC_LINK_UP:
+ link.link_status = RTE_ETH_LINK_UP;
+ break;
+ case ENETC_LINK_DOWN:
+ link.link_status = RTE_ETH_LINK_DOWN;
+ break;
+ default:
+ ENETC_PMD_ERR("Unknown link status");
+ break;
+ }
+ } else {
+ ENETC_PMD_ERR("Wrong reply message");
+ rte_free(reply_msg);
+ return -1;
+ }
+
+ err = enetc4_vf_get_link_speed(dev, reply_msg);
+ if (err) {
+ ENETC_PMD_ERR("Failed to get link speed");
+ rte_free(reply_msg);
+ return err;
+ }
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_LINK_SPEED) {
+ switch (reply_msg->status) {
+ case ENETC_SPEED_UNKNOWN:
+ ENETC_PMD_DEBUG("Speed unknown");
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ break;
+ case ENETC_SPEED_10_HALF_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ break;
+ case ENETC_SPEED_10_FULL_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_10M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_100_HALF_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ break;
+ case ENETC_SPEED_100_FULL_DUPLEX:
+ link.link_speed = RTE_ETH_SPEED_NUM_100M;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_1000:
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_2500:
+ link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_5000:
+ link.link_speed = RTE_ETH_SPEED_NUM_5G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_10G:
+ link.link_speed = RTE_ETH_SPEED_NUM_10G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_25G:
+ link.link_speed = RTE_ETH_SPEED_NUM_25G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_50G:
+ link.link_speed = RTE_ETH_SPEED_NUM_50G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_100G:
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ break;
+ case ENETC_SPEED_NOT_SUPPORTED:
+ ENETC_PMD_DEBUG("Speed not supported");
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ break;
+ default:
+ ENETC_PMD_ERR("Unknown speed status");
+ break;
+ }
+ } else {
+ ENETC_PMD_ERR("Wrong reply message");
+ rte_free(reply_msg);
+ return -1;
+ }
+
+ link.link_autoneg = 1;
+
+ rte_eth_linkstatus_set(dev, &link);
+
+ rte_free(reply_msg);
+ return 0;
+}
+
static int
enetc4_vf_vlan_promisc(struct rte_eth_dev *dev, bool promisc_en)
{
@@ -584,6 +796,7 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.promiscuous_disable = enetc4_vf_promisc_disable,
.allmulticast_enable = enetc4_vf_multicast_enable,
.allmulticast_disable = enetc4_vf_multicast_disable,
+ .link_update = enetc4_vf_link_update,
.vlan_offload_set = enetc4_vf_vlan_offload_set,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
@@ -685,6 +898,9 @@ enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
ENETC_PMD_DEBUG("port_id %d vendorID=0x%x deviceID=0x%x",
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
+ /* update link */
+ enetc4_vf_link_update(eth_dev, 0);
+
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 11/12] net/enetc: Add link status notification support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (9 preceding siblings ...)
2024-10-23 6:24 ` [v2 10/12] net/enetc: Add link speed and status support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-10-23 6:24 ` [v2 12/12] net/enetc: Add MAC and VLAN filter support vanshika.shukla
` (2 subsequent siblings)
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds link event notification support to the ENETC4 PMD, enabling:
- Link up/down event notifications
- Notification of link speed changes
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 1 +
drivers/net/enetc/base/enetc4_hw.h | 9 +-
drivers/net/enetc/enetc.h | 3 +
drivers/net/enetc/enetc4_ethdev.c | 16 ++-
drivers/net/enetc/enetc4_vf.c | 215 +++++++++++++++++++++++++++-
5 files changed, 239 insertions(+), 5 deletions(-)
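For reference, a minimal sketch (not part of this patch) of how an application consumes these events; the callback body is illustrative only:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;

	if (type == RTE_ETH_EVENT_INTR_LSC &&
	    rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u link is %s\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");

	return 0;
}

/* The application must set dev_conf.intr_conf.lsc = 1 before
 * rte_eth_dev_configure() so the VF registers for PSI notifications, then:
 *   rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *                                 lsc_event_cb, NULL);
 */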
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 78b06e9841..31a1955215 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Link status event = Y
Speed capabilities = Y
Link status = Y
Promiscuous mode = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index d899b82b9c..2da779e351 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -128,7 +128,14 @@ struct enetc_msg_swbd {
#define ENETC4_SITFRM0 0x328
#define ENETC4_SITDFCR 0x340
-/* VSI MSG Registers */
+/* Station interface interrupts */
+#define ENETC4_SIMSIVR 0xA30
+#define ENETC4_VSIIER 0xA00
+#define ENETC4_VSIIDR 0xA08
+#define ENETC4_VSIIER_MRIE BIT(9)
+#define ENETC4_SI_INT_IDX 0
+
+/* VSI Registers */
#define ENETC4_VSIMSGSR 0x204 /* RO */
#define ENETC4_VSIMSGSR_MB BIT(0)
#define ENETC4_VSIMSGSR_MS BIT(1)
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 7f5329de33..6b37cd95dd 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -161,6 +161,8 @@ enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
ENETC_CMD_ID_GET_LINK_STATUS = 0,
+ ENETC_CMD_ID_REGISTER_LINK_NOTIF = 1,
+ ENETC_CMD_ID_UNREGISTER_LINK_NOTIF = 2,
ENETC_CMD_ID_GET_LINK_SPEED = 0
};
@@ -280,6 +282,7 @@ const uint32_t *enetc4_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused
* enetc4_vf function prototype
*/
int enetc4_vf_dev_stop(struct rte_eth_dev *dev);
+int enetc4_vf_dev_intr(struct rte_eth_dev *eth_dev, bool enable);
/*
* RX/TX ENETC function prototypes
diff --git a/drivers/net/enetc/enetc4_ethdev.c b/drivers/net/enetc/enetc4_ethdev.c
index 08580420bf..69e105a8f8 100644
--- a/drivers/net/enetc/enetc4_ethdev.c
+++ b/drivers/net/enetc/enetc4_ethdev.c
@@ -594,10 +594,13 @@ enetc4_dev_close(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- if (hw->device_id == ENETC4_DEV_ID_VF)
+ if (hw->device_id == ENETC4_DEV_ID_VF) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0)
+ enetc4_vf_dev_intr(dev, false);
ret = enetc4_vf_dev_stop(dev);
- else
+ } else {
ret = enetc4_dev_stop(dev);
+ }
if (dev->data->nb_rx_queues > 1) {
/* Disable RSS */
@@ -708,6 +711,15 @@ enetc4_dev_configure(struct rte_eth_dev *dev)
enetc4_port_wr(enetc_hw, ENETC4_PARCSCR, checksum);
+ /* Enable interrupts */
+ if (hw->device_id == ENETC4_DEV_ID_VF) {
+ if (dev->data->dev_conf.intr_conf.lsc != 0) {
+ ret = enetc4_vf_dev_intr(dev, true);
+ if (ret)
+ ENETC_PMD_WARN("Failed to setup link interrupts");
+ }
+ }
+
/* Disable and reset RX and TX rings */
for (i = 0; i < dev->data->nb_rx_queues; i++)
enetc4_rxbdr_wr(enetc_hw, i, ENETC_RBMR, ENETC_BMR_RESET);
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 307fabf2c6..22266188ee 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -144,6 +144,69 @@ enetc4_msg_vsi_reply_msg(struct enetc_hw *enetc_hw, struct enetc_psi_reply_msg *
reply_msg->status = status;
}
+static void
+enetc4_msg_get_psi_msg(struct enetc_hw *enetc_hw, struct enetc_psi_reply_msg *reply_msg)
+{
+ int vsimsgrr;
+ int8_t class_id = 0;
+ uint8_t status = 0;
+
+ vsimsgrr = enetc_rd(enetc_hw, ENETC4_VSIMSGRR);
+
+ /* Extracting 8 bits of message result in class_id */
+ class_id |= ((ENETC_SIMSGSR_GET_MC(vsimsgrr) >> 8) & 0xff);
+
+ /* Extracting 4 bits of message result in status */
+ status |= ((ENETC_SIMSGSR_GET_MC(vsimsgrr) >> 4) & 0xf);
+
+ reply_msg->class_id = class_id;
+ reply_msg->status = status;
+}
+
+static void
+enetc4_process_psi_msg(struct rte_eth_dev *eth_dev, struct enetc_hw *enetc_hw)
+{
+ struct enetc_psi_reply_msg *msg;
+ struct rte_eth_link link;
+ int ret = 0;
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ return;
+ }
+
+ rte_eth_linkstatus_get(eth_dev, &link);
+ enetc4_msg_get_psi_msg(enetc_hw, msg);
+
+ if (msg->class_id == ENETC_CLASS_ID_LINK_STATUS) {
+ switch (msg->status) {
+ case ENETC_LINK_UP:
+ ENETC_PMD_DEBUG("Link is up");
+ link.link_status = RTE_ETH_LINK_UP;
+ break;
+ case ENETC_LINK_DOWN:
+ ENETC_PMD_DEBUG("Link is down");
+ link.link_status = RTE_ETH_LINK_DOWN;
+ break;
+ default:
+ ENETC_PMD_ERR("Unknown link status 0x%x", msg->status);
+ break;
+ }
+ ret = rte_eth_linkstatus_set(eth_dev, &link);
+ if (!ret)
+ ENETC_PMD_DEBUG("Link status has been changed");
+
+ /* Process user registered callback */
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_INTR_LSC, NULL);
+ } else {
+ ENETC_PMD_ERR("Wrong message 0x%x", msg->class_id);
+ }
+
+ rte_free(msg);
+}
+
static int
enetc4_msg_vsi_send(struct enetc_hw *enetc_hw, struct enetc_msg_swbd *msg)
{
@@ -775,6 +838,55 @@ static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_un
return 0;
}
+static int
+enetc4_vf_link_register_notif(struct rte_eth_dev *dev, bool enable)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_swbd *msg;
+ struct rte_eth_link link;
+ int msg_size;
+ int err = 0;
+ uint8_t cmd;
+
+ PMD_INIT_FUNC_TRACE();
+ memset(&link, 0, sizeof(struct rte_eth_link));
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_get_link_status), ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ return -ENOMEM;
+ }
+
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+ if (enable)
+ cmd = ENETC_CMD_ID_REGISTER_LINK_NOTIF;
+ else
+ cmd = ENETC_CMD_ID_UNREGISTER_LINK_NOTIF;
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_LINK_STATUS,
+ cmd, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err)
+ ENETC_PMD_ERR("VSI msg error for link status notification");
+
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(msg);
+
+ return err;
+}
+
/*
* The set of PCI devices this driver supports
*/
@@ -866,6 +978,45 @@ enetc4_vf_mac_init(struct enetc_eth_hw *hw, struct rte_eth_dev *eth_dev)
return 0;
}
+static void
+enetc_vf_enable_mr_int(struct enetc_hw *hw, bool en)
+{
+ uint32_t val;
+
+ val = enetc_rd(hw, ENETC4_VSIIER);
+ val &= ~ENETC4_VSIIER_MRIE;
+ val |= (en) ? ENETC4_VSIIER_MRIE : 0;
+ enetc_wr(hw, ENETC4_VSIIER, val);
+ ENETC_PMD_DEBUG("Interrupt enable status (VSIIER) = 0x%x", val);
+}
+
+static void
+enetc4_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ uint32_t status;
+
+ /* Disable interrupts before process */
+ enetc_vf_enable_mr_int(enetc_hw, false);
+
+ status = enetc_rd(enetc_hw, ENETC4_VSIIDR);
+ ENETC_PMD_DEBUG("Got INTR VSIIDR status = 0x%0x", status);
+ /* Check for PSI to VSI message interrupt */
+ if (!(status & ENETC4_VSIIER_MRIE)) {
+ ENETC_PMD_ERR("Interrupt is not PSI to VSI");
+ goto intr_clear;
+ }
+
+ enetc4_process_psi_msg(eth_dev, enetc_hw);
+intr_clear:
+ /* Clear Interrupts */
+ enetc_wr(enetc_hw, ENETC4_VSIIDR, 0xffffffff);
+ enetc_vf_enable_mr_int(enetc_hw, true);
+}
+
static int
enetc4_vf_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -913,14 +1064,74 @@ enetc4_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
enetc4_vf_dev_init);
}
+int
+enetc4_vf_dev_intr(struct rte_eth_dev *eth_dev, bool enable)
+{
+ struct enetc_eth_hw *hw =
+ ENETC_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ if (!(intr_handle && rte_intr_fd_get(intr_handle))) {
+ ENETC_PMD_ERR("No INTR handle");
+ return -1;
+ }
+ if (enable) {
+ /* if the interrupts were configured on this device */
+ ret = rte_intr_callback_register(intr_handle,
+ enetc4_dev_interrupt_handler, eth_dev);
+ if (ret) {
+ ENETC_PMD_ERR("Failed to register INTR callback %d", ret);
+ return ret;
+ }
+ /* set one IRQ entry for PSI-to-VSI messaging */
+ /* Vector index 0 */
+ enetc_wr(enetc_hw, ENETC4_SIMSIVR, ENETC4_SI_INT_IDX);
+
+ /* enable uio/vfio intr/eventfd mapping */
+ ret = rte_intr_enable(intr_handle);
+ if (ret) {
+ ENETC_PMD_ERR("Failed to enable INTR %d", ret);
+ goto intr_enable_fail;
+ }
+
+ /* Enable message received interrupt */
+ enetc_vf_enable_mr_int(enetc_hw, true);
+ ret = enetc4_vf_link_register_notif(eth_dev, true);
+ if (ret) {
+ ENETC_PMD_ERR("Failed to register link notifications %d", ret);
+ goto disable;
+ }
+
+ return ret;
+ }
+
+ ret = enetc4_vf_link_register_notif(eth_dev, false);
+ if (ret)
+ ENETC_PMD_WARN("Failed to un-register link notification %d", ret);
+disable:
+ enetc_vf_enable_mr_int(enetc_hw, false);
+ ret = rte_intr_disable(intr_handle);
+ if (ret)
+ ENETC_PMD_WARN("Failed to disable INTR %d", ret);
+intr_enable_fail:
+ rte_intr_callback_unregister(intr_handle,
+ enetc4_dev_interrupt_handler, eth_dev);
+
+ return ret;
+}
+
static struct rte_pci_driver rte_enetc4_vf_pmd = {
.id_table = pci_vf_id_enetc4_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
.probe = enetc4_vf_pci_probe,
.remove = enetc4_pci_remove,
};
RTE_PMD_REGISTER_PCI(net_enetc4_vf, rte_enetc4_vf_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_enetc4_vf, pci_vf_id_enetc4_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_enetc4_vf, "* uio_pci_generic");
+RTE_PMD_REGISTER_KMOD_DEP(net_enetc4_vf, "* igb_uio | uio_pci_generic");
RTE_LOG_REGISTER_DEFAULT(enetc4_vf_logtype_pmd, NOTICE);
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [v2 12/12] net/enetc: Add MAC and VLAN filter support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (10 preceding siblings ...)
2024-10-23 6:24 ` [v2 11/12] net/enetc: Add link status notification support vanshika.shukla
@ 2024-10-23 6:24 ` vanshika.shukla
2024-11-07 10:24 ` [v2 00/12] ENETC4 PMD support Ferruh Yigit
2024-11-07 19:29 ` Stephen Hemminger
13 siblings, 0 replies; 36+ messages in thread
From: vanshika.shukla @ 2024-10-23 6:24 UTC (permalink / raw)
To: dev, Gagandeep Singh, Sachin Saxena, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Introduces support for:
- Filtering of up to 4 MAC addresses
- Up to 4 VLAN filters
This enhances the packet filtering capabilities of the ENETC4 PMD.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
doc/guides/nics/features/enetc4.ini | 2 +
drivers/net/enetc/base/enetc4_hw.h | 3 +
drivers/net/enetc/enetc.h | 11 ++
drivers/net/enetc/enetc4_vf.c | 229 +++++++++++++++++++++++++++-
4 files changed, 244 insertions(+), 1 deletion(-)
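For reference, a minimal sketch (not part of this patch) of driving the new filter hooks through the ethdev API; the MAC address and VLAN ID are placeholders, and RTE_ETH_RX_OFFLOAD_VLAN_FILTER is assumed to be enabled in rxmode.offloads:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
add_filters(uint16_t port_id)
{
	struct rte_ether_addr mac = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
	};
	int ret;

	/* exact-match MAC entry; the VF accepts up to 4 (ENETC4_MAC_ENTRIES) */
	ret = rte_eth_dev_mac_addr_add(port_id, &mac, 0);
	if (ret != 0)
		return ret;

	/* exact-match VLAN entry (standard C-VLAN); up to 4 entries */
	return rte_eth_dev_vlan_filter(port_id, 100, 1);
}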
diff --git a/doc/guides/nics/features/enetc4.ini b/doc/guides/nics/features/enetc4.ini
index 31a1955215..87425f45c9 100644
--- a/doc/guides/nics/features/enetc4.ini
+++ b/doc/guides/nics/features/enetc4.ini
@@ -9,6 +9,8 @@ Speed capabilities = Y
Link status = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
+VLAN filter = Y
RSS hash = Y
Packet type parsing = Y
Basic stats = Y
diff --git a/drivers/net/enetc/base/enetc4_hw.h b/drivers/net/enetc/base/enetc4_hw.h
index 2da779e351..e3eef6fe19 100644
--- a/drivers/net/enetc/base/enetc4_hw.h
+++ b/drivers/net/enetc/base/enetc4_hw.h
@@ -71,6 +71,9 @@ struct enetc_msg_swbd {
*/
#define ENETC4_MAC_MAXFRM_SIZE 2000
+/* Number of MAC Address Filter table entries */
+#define ENETC4_MAC_ENTRIES 4
+
/* Port MAC 0/1 Maximum Frame Length Register */
#define ENETC4_PM_MAXFRM(mac) (0x5014 + (mac) * 0x400)
diff --git a/drivers/net/enetc/enetc.h b/drivers/net/enetc/enetc.h
index 6b37cd95dd..e79a0bf0a9 100644
--- a/drivers/net/enetc/enetc.h
+++ b/drivers/net/enetc/enetc.h
@@ -158,7 +158,10 @@ enum enetc_msg_cmd_class_id {
/* Enum for command IDs */
enum enetc_msg_cmd_id {
ENETC_CMD_ID_SET_PRIMARY_MAC = 0,
+ ENETC_MSG_ADD_EXACT_MAC_ENTRIES = 1,
ENETC_CMD_ID_SET_MAC_PROMISCUOUS = 5,
+ ENETC_MSG_ADD_EXACT_VLAN_ENTRIES = 0,
+ ENETC_MSG_REMOVE_EXACT_VLAN_ENTRIES = 1,
ENETC_CMD_ID_SET_VLAN_PROMISCUOUS = 4,
ENETC_CMD_ID_GET_LINK_STATUS = 0,
ENETC_CMD_ID_REGISTER_LINK_NOTIF = 1,
@@ -170,6 +173,14 @@ enum mac_addr_status {
ENETC_INVALID_MAC_ADDR = 0x0,
ENETC_DUPLICATE_MAC_ADDR = 0X1,
ENETC_MAC_ADDR_NOT_FOUND = 0X2,
+ ENETC_MAC_FILTER_NO_RESOURCE = 0x3
+};
+
+enum vlan_status {
+ ENETC_INVALID_VLAN_ENTRY = 0x0,
+ ENETC_DUPLICATE_VLAN_ENTRY = 0X1,
+ ENETC_VLAN_ENTRY_NOT_FOUND = 0x2,
+ ENETC_VLAN_NO_RESOURCE = 0x3
};
enum link_status {
diff --git a/drivers/net/enetc/enetc4_vf.c b/drivers/net/enetc/enetc4_vf.c
index 22266188ee..fb27557378 100644
--- a/drivers/net/enetc/enetc4_vf.c
+++ b/drivers/net/enetc/enetc4_vf.c
@@ -17,6 +17,10 @@
uint16_t enetc_crc_table[ENETC_CRC_TABLE_SIZE];
bool enetc_crc_gen;
+/* Supported Rx offloads */
+static uint64_t dev_vf_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
static void
enetc_gen_crc_table(void)
{
@@ -53,6 +57,25 @@ enetc_crc_calc(uint16_t crc, const uint8_t *buffer, size_t len)
return crc;
}
+static int
+enetc4_vf_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = enetc4_dev_infos_get(dev, dev_info);
+ if (ret)
+ return ret;
+
+ dev_info->max_mtu = dev_info->max_rx_pktlen - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ dev_info->max_mac_addrs = ENETC4_MAC_ENTRIES;
+ dev_info->rx_offload_capa |= dev_vf_rx_offloads_sup;
+
+ return 0;
+}
+
int
enetc4_vf_dev_stop(struct rte_eth_dev *dev __rte_unused)
{
@@ -810,6 +833,201 @@ enetc4_vf_vlan_promisc(struct rte_eth_dev *dev, bool promisc_en)
return err;
}
+static int
+enetc4_vf_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *addr,
+ uint32_t index __rte_unused, uint32_t pool __rte_unused)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_cmd_set_primary_mac *cmd;
+ struct enetc_msg_swbd *msg;
+ struct enetc_psi_reply_msg *reply_msg;
+ int msg_size;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (!rte_is_valid_assigned_ether_addr(addr))
+ return -EINVAL;
+
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ rte_free(reply_msg);
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_cmd_set_primary_mac),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ rte_free(reply_msg);
+ return -ENOMEM;
+ }
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+ cmd = (struct enetc_msg_cmd_set_primary_mac *)msg->vaddr;
+ memcpy(&cmd->addr.addr_bytes, addr, sizeof(struct rte_ether_addr));
+ cmd->count = 1;
+
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_MAC_FILTER,
+ ENETC_MSG_ADD_EXACT_MAC_ENTRIES, 0, 0, 0);
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_MAC_FILTER) {
+ switch (reply_msg->status) {
+ case ENETC_INVALID_MAC_ADDR:
+ ENETC_PMD_ERR("Invalid MAC address");
+ err = -EINVAL;
+ break;
+ case ENETC_DUPLICATE_MAC_ADDR:
+ ENETC_PMD_ERR("Duplicate MAC address");
+ err = -EINVAL;
+ break;
+ case ENETC_MAC_FILTER_NO_RESOURCE:
+ ENETC_PMD_ERR("Not enough exact-match entries available");
+ err = -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (err) {
+ ENETC_PMD_ERR("VSI command execute error!");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(reply_msg);
+ rte_free(msg);
+ return err;
+}
+
+static int enetc4_vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct enetc_eth_hw *hw = ENETC_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct enetc_hw *enetc_hw = &hw->hw;
+ struct enetc_msg_vlan_exact_filter *cmd;
+ struct enetc_msg_swbd *msg;
+ struct enetc_psi_reply_msg *reply_msg;
+ int msg_size;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ reply_msg = rte_zmalloc(NULL, sizeof(*reply_msg), RTE_CACHE_LINE_SIZE);
+ if (!reply_msg) {
+ ENETC_PMD_ERR("Failed to alloc memory for reply_msg");
+ return -ENOMEM;
+ }
+
+ msg = rte_zmalloc(NULL, sizeof(*msg), RTE_CACHE_LINE_SIZE);
+ if (!msg) {
+ ENETC_PMD_ERR("Failed to alloc msg");
+ err = -ENOMEM;
+ rte_free(reply_msg);
+ return err;
+ }
+
+ msg_size = RTE_ALIGN(sizeof(struct enetc_msg_vlan_exact_filter),
+ ENETC_VSI_PSI_MSG_SIZE);
+ msg->vaddr = rte_zmalloc(NULL, msg_size, 0);
+ if (!msg->vaddr) {
+ ENETC_PMD_ERR("Failed to alloc memory for msg");
+ rte_free(msg);
+ rte_free(reply_msg);
+ return -ENOMEM;
+ }
+ msg->dma = rte_mem_virt2iova((const void *)msg->vaddr);
+ msg->size = msg_size;
+ cmd = (struct enetc_msg_vlan_exact_filter *)msg->vaddr;
+ cmd->vlan_count = 1;
+ cmd->vlan_id = vlan_id;
+
+ /* TPID 2-bit encoding value is taken from the H/W block guide:
+ * 00b Standard C-VLAN 0x8100
+ * 01b Standard S-VLAN 0x88A8
+ * 10b Custom VLAN as defined by CVLANR1[ETYPE]
+ * 11b Custom VLAN as defined by CVLANR2[ETYPE]
+ * Currently only standard C-VLAN is supported; others may be added in the future.
+ */
+ cmd->tpid = 0;
+
+ if (on) {
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_VLAN_FILTER,
+ ENETC_MSG_ADD_EXACT_VLAN_ENTRIES, 0, 0, 0);
+ } else {
+ enetc_msg_vf_fill_common_hdr(msg, ENETC_CLASS_ID_VLAN_FILTER,
+ ENETC_MSG_REMOVE_EXACT_VLAN_ENTRIES, 0, 0, 0);
+ }
+
+ /* send the command and wait */
+ err = enetc4_msg_vsi_send(enetc_hw, msg);
+ if (err) {
+ ENETC_PMD_ERR("VSI message send error");
+ goto end;
+ }
+
+ enetc4_msg_vsi_reply_msg(enetc_hw, reply_msg);
+
+ if (reply_msg->class_id == ENETC_CLASS_ID_VLAN_FILTER) {
+ switch (reply_msg->status) {
+ case ENETC_INVALID_VLAN_ENTRY:
+ ENETC_PMD_ERR("VLAN entry not valid");
+ err = -EINVAL;
+ break;
+ case ENETC_DUPLICATE_VLAN_ENTRY:
+ ENETC_PMD_ERR("Duplicated VLAN entry");
+ err = -EINVAL;
+ break;
+ case ENETC_VLAN_ENTRY_NOT_FOUND:
+ ENETC_PMD_ERR("VLAN entry not found");
+ err = -EINVAL;
+ break;
+ case ENETC_VLAN_NO_RESOURCE:
+ ENETC_PMD_ERR("Not enough exact-match entries available");
+ err = -EINVAL;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (err) {
+ ENETC_PMD_ERR("VSI command execute error!");
+ goto end;
+ }
+
+end:
+ /* free memory no longer required */
+ rte_free(msg->vaddr);
+ rte_free(reply_msg);
+ rte_free(msg);
+ return err;
+}
+
static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_unused)
{
int err = 0;
@@ -838,6 +1056,12 @@ static int enetc4_vf_vlan_offload_set(struct rte_eth_dev *dev, int mask __rte_un
return 0;
}
+static int
+enetc4_vf_mtu_set(struct rte_eth_dev *dev __rte_unused, uint16_t mtu __rte_unused)
+{
+ return 0;
+}
+
static int
enetc4_vf_link_register_notif(struct rte_eth_dev *dev, bool enable)
{
@@ -901,14 +1125,17 @@ static const struct eth_dev_ops enetc4_vf_ops = {
.dev_start = enetc4_vf_dev_start,
.dev_stop = enetc4_vf_dev_stop,
.dev_close = enetc4_dev_close,
- .dev_infos_get = enetc4_dev_infos_get,
.stats_get = enetc4_vf_stats_get,
+ .dev_infos_get = enetc4_vf_dev_infos_get,
+ .mtu_set = enetc4_vf_mtu_set,
.mac_addr_set = enetc4_vf_set_mac_addr,
+ .mac_addr_add = enetc4_vf_mac_addr_add,
.promiscuous_enable = enetc4_vf_promisc_enable,
.promiscuous_disable = enetc4_vf_promisc_disable,
.allmulticast_enable = enetc4_vf_multicast_enable,
.allmulticast_disable = enetc4_vf_multicast_disable,
.link_update = enetc4_vf_link_update,
+ .vlan_filter_set = enetc4_vf_vlan_filter_set,
.vlan_offload_set = enetc4_vf_vlan_offload_set,
.rx_queue_setup = enetc4_rx_queue_setup,
.rx_queue_start = enetc4_rx_queue_start,
--
2.25.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v2 00/12] ENETC4 PMD support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (11 preceding siblings ...)
2024-10-23 6:24 ` [v2 12/12] net/enetc: Add MAC and VLAN filter support vanshika.shukla
@ 2024-11-07 10:24 ` Ferruh Yigit
2024-11-07 19:29 ` Stephen Hemminger
13 siblings, 0 replies; 36+ messages in thread
From: Ferruh Yigit @ 2024-11-07 10:24 UTC (permalink / raw)
To: vanshika.shukla, Gagandeep Singh, Sachin Saxena
Cc: dev, Hemant Agrawal, Stephen Hemminger
On 10/23/2024 7:24 AM, vanshika.shukla@nxp.com wrote:
> From: Vanshika Shukla <vanshika.shukla@nxp.com>
>
> This series introduces a new ENETC4 PMD driver for NXP's i.MX95
> SoC, enabling basic network operations.
>
> V2 changes:
> Handled code comments by the reviewer in:
> "net/enetc: Add initial ENETC4 PMD driver support"
> "net/enetc: Optimize ENETC4 data path"
>
> Apeksha Gupta (6):
> net/enetc: Add initial ENETC4 PMD driver support
> net/enetc: Add RX and TX queue APIs for ENETC4 PMD
> net/enetc: Optimize ENETC4 data path
> net/enetc: Add TX checksum offload and RX checksum validation
> net/enetc: Add basic statistics
> net/enetc: Add packet type parsing support
>
> Gagandeep Singh (1):
> net/enetc: Add support for multiple queues with RSS
>
> Vanshika Shukla (5):
> net/enetc: Add VF to PF messaging support and primary MAC setup
> net/enetc: Add multicast and promiscuous mode support
> net/enetc: Add link speed and status support
> net/enetc: Add link status notification support
> net/enetc: Add MAC and VLAN filter support
>
Hi Vanshika,
The driver was sent late, even after -rc1, and I did not have time to
review the set, so it won't be able to make this release; postponing to
the next release.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v2 00/12] ENETC4 PMD support
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
` (12 preceding siblings ...)
2024-11-07 10:24 ` [v2 00/12] ENETC4 PMD support Ferruh Yigit
@ 2024-11-07 19:29 ` Stephen Hemminger
13 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-11-07 19:29 UTC (permalink / raw)
To: vanshika.shukla; +Cc: dev
On Wed, 23 Oct 2024 11:54:21 +0530
vanshika.shukla@nxp.com wrote:
> From: Vanshika Shukla <vanshika.shukla@nxp.com>
>
> This series introduces a new ENETC4 PMD driver for NXP's i.MX95
> SoC, enabling basic network operations.
>
> V2 changes:
> Handled code comments by the reviewer in:
> "net/enetc: Add initial ENETC4 PMD driver support"
> "net/enetc: Optimize ENETC4 data path"
>
> Apeksha Gupta (6):
> net/enetc: Add initial ENETC4 PMD driver support
> net/enetc: Add RX and TX queue APIs for ENETC4 PMD
> net/enetc: Optimize ENETC4 data path
> net/enetc: Add TX checksum offload and RX checksum validation
> net/enetc: Add basic statistics
> net/enetc: Add packet type parsing support
>
> Gagandeep Singh (1):
> net/enetc: Add support for multiple queues with RSS
>
> Vanshika Shukla (5):
> net/enetc: Add VF to PF messaging support and primary MAC setup
> net/enetc: Add multicast and promiscuous mode support
> net/enetc: Add link speed and status support
> net/enetc: Add link status notification support
> net/enetc: Add MAC and VLAN filter support
>
> MAINTAINERS | 3 +
> config/arm/arm64_imx_linux_gcc | 17 +
> config/arm/meson.build | 14 +
> doc/guides/nics/enetc4.rst | 99 ++
> doc/guides/nics/features/enetc4.ini | 22 +
> doc/guides/nics/index.rst | 1 +
> doc/guides/rel_notes/release_24_11.rst | 4 +
> drivers/net/enetc/base/enetc4_hw.h | 186 ++++
> drivers/net/enetc/base/enetc_hw.h | 52 +-
> drivers/net/enetc/enetc.h | 246 ++++-
> drivers/net/enetc/enetc4_ethdev.c | 1040 ++++++++++++++++++
> drivers/net/enetc/enetc4_vf.c | 1364 ++++++++++++++++++++++++
> drivers/net/enetc/enetc_cbdr.c | 311 ++++++
> drivers/net/enetc/enetc_ethdev.c | 5 +-
> drivers/net/enetc/enetc_rxtx.c | 165 ++-
> drivers/net/enetc/kpage_ncache_api.h | 70 ++
> drivers/net/enetc/meson.build | 5 +-
> drivers/net/enetc/ntmp.h | 110 ++
> 18 files changed, 3673 insertions(+), 41 deletions(-)
> create mode 100644 config/arm/arm64_imx_linux_gcc
> create mode 100644 doc/guides/nics/enetc4.rst
> create mode 100644 doc/guides/nics/features/enetc4.ini
> create mode 100644 drivers/net/enetc/base/enetc4_hw.h
> create mode 100644 drivers/net/enetc/enetc4_ethdev.c
> create mode 100644 drivers/net/enetc/enetc4_vf.c
> create mode 100644 drivers/net/enetc/enetc_cbdr.c
> create mode 100644 drivers/net/enetc/kpage_ncache_api.h
> create mode 100644 drivers/net/enetc/ntmp.h
>
The files using rte_malloc are not the ones including rte_malloc.h:
$ git grep rte_malloc
enetc4_ethdev.c: txr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
enetc4_ethdev.c: rxr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
enetc4_ethdev.c: rss_table = rte_malloc(NULL, hw->num_rss * sizeof(*rss_table), ENETC_CBDR_ALIGN);
enetc_cbdr.c: tmp = rte_malloc(NULL, data_size, ENETC_CBDR_ALIGN);
enetc_cbdr.c: cbdr->addr_base = rte_malloc(NULL, size, ENETC_CBDR_ALIGN);
enetc_ethdev.c: txr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
enetc_ethdev.c: txr->bd_base = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
enetc_ethdev.c: rxr->q_swbd = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
enetc_ethdev.c: rxr->bd_base = rte_malloc(NULL, size, ENETC_BD_RING_ALIGN);
enetc_rxtx.c:#include "rte_malloc.h"
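The straightforward fix is to include the header directly in each file that uses it, e.g. in enetc4_ethdev.c:

#include <rte_malloc.h>	/* rte_malloc, rte_zmalloc, rte_free */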
The tool iwyu gives hints as to the recommended includes. Don't trust it blindly,
but for example:
$ iwyu -I drivers/bus/pci -I drivers/common/dpaax -I lib/ethdev -I lib/net -I lib/eal/include -I drivers/bus/vdev -I lib/kvargs drivers/net/enetc/enetc_ethdev.c
drivers/net/enetc/enetc_ethdev.c should add these lines:
#include <errno.h> // for ENOMEM, EINVAL
#include <rte_build_config.h> // for RTE_PKTMBUF_HEADROOM
#include <rte_log.h> // for RTE_LOG_DEBUG, RTE_LOG_ERR, RTE_L...
#include <rte_mbuf.h> // for rte_pktmbuf_free, rte_pktmbuf_dat...
#include <rte_mbuf_ptype.h> // for RTE_PTYPE_L2_ETHER, RTE_PTYPE_L3_...
#include <rte_pci.h> // for rte_pci_id
#include <stdint.h> // for uint32_t, uint16_t, uint64_t, uin...
#include <string.h> // for NULL, size_t, memset
#include "base/enetc4_hw.h" // for L3_CKSUM, L4_CKSUM
#include "base/enetc_hw.h" // for enetc_port_wr, enetc_port_rd, ene...
#include "bus_pci_driver.h" // for rte_pci_device, RTE_PCI_DEVICE
#include "compat.h" // for lower_32_bits, upper_32_bits
#include "ethdev_driver.h" // for rte_eth_dev, rte_eth_dev_data
#include "ethdev_pci.h" // for rte_eth_dev_pci_generic_probe
#include "rte_branch_prediction.h" // for unlikely
#include "rte_common.h" // for __rte_unused, phys_addr_t, RTE_PR...
#include "rte_dev.h" // for rte_mem_resource, RTE_PMD_REGISTE...
#include "rte_eal.h" // for rte_eal_iova_mode, rte_eal_proces...
#include "rte_ethdev.h" // for rte_eth_link, RTE_ETH_QUEUE_STATE...
#include "rte_ether.h" // for RTE_ETHER_CRC_LEN, RTE_ETHER_HDR_LEN
#include "rte_malloc.h" // for rte_free, rte_malloc, rte_zmalloc
#include "rte_memory.h" // for rte_mem_virt2iova
struct rte_mempool;
drivers/net/enetc/enetc_ethdev.c should remove these lines:
- #include <stdbool.h> // lines 5-5
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-23 6:24 ` [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
@ 2024-11-07 19:39 ` Stephen Hemminger
0 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-11-07 19:39 UTC (permalink / raw)
To: vanshika.shukla
Cc: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Anatoly Burakov, Apeksha Gupta
On Wed, 23 Oct 2024 11:54:22 +0530
vanshika.shukla@nxp.com wrote:
> +/* IOCTL */
> +#define KPG_NC_MAGIC_NUM 0xf0f0
> +#define KPG_NC_IOCTL_UPDATE _IOWR(KPG_NC_MAGIC_NUM, 1, size_t)
> +
> +
> +#define KNRM "\x1B[0m"
> +#define KRED "\x1B[31m"
> +#define KGRN "\x1B[32m"
> +#define KYEL "\x1B[33m"
> +#define KBLU "\x1B[34m"
> +#define KMAG "\x1B[35m"
> +#define KCYN "\x1B[36m"
> +#define KWHT "\x1B[37m"
> +
> +#if defined(RTE_ARCH_ARM) && defined(RTE_ARCH_64)
> +static inline void flush_tlb(void *p)
> +{
> + asm volatile("dc civac, %0" ::"r"(p));
> + asm volatile("dsb ish");
> + asm volatile("isb");
> +}
> +#endif
> +
> +static inline void mark_kpage_ncache(uint64_t huge_page)
> +{
> + int fd, ret;
> +
> + fd = open(KPG_NC_DEVICE_PATH, O_RDONLY);
> + if (fd < 0) {
> + ENETC_PMD_ERR(KYEL "Error: " KNRM "Could not open: %s",
> + KPG_NC_DEVICE_PATH);
> +
Do not add your own color codes to log messages!
They will mess up the output when the log goes to syslog or the journal.
There is a better, more complete set of patches to add generic
color support to logging (still under review).
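For reference, a minimal sketch of the same error path without escape codes;
ENETC_PMD_ERR and KPG_NC_DEVICE_PATH come from the quoted hunk, while the
fallback definitions and the strerror() detail are assumptions added so the
sketch stands alone:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

#ifndef KPG_NC_DEVICE_PATH
#define KPG_NC_DEVICE_PATH "/dev/kpg_nc"  /* placeholder path for this sketch */
#endif
#ifndef ENETC_PMD_ERR
#define ENETC_PMD_ERR(fmt, ...) \
	fprintf(stderr, "enetc: " fmt "\n", ##__VA_ARGS__) /* stand-in for the PMD log macro */
#endif

static int
open_kpg_nc_device(void)
{
	int fd = open(KPG_NC_DEVICE_PATH, O_RDONLY);

	if (fd < 0) {
		int err = errno;

		/* plain text only: syslog/journald record exactly this string */
		ENETC_PMD_ERR("Could not open %s: %s",
			      KPG_NC_DEVICE_PATH, strerror(err));
		return -err;
	}
	return fd;
}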
Also, directly manipulating the kernel page cache via non-upstream
ioctls is a bad idea from a security and portability point of view.
Do you really want to make Christoph Hellwig and Al Viro come
after you?
If you have to do this, it should be wrapped in some API in EAL, not
in the driver.
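To make that concrete, a purely hypothetical sketch of where such a hook
could live; rte_mem_mark_noncacheable() is NOT a real DPDK API, the name and
signature are invented here only to show the driver calling into EAL instead
of opening device nodes itself:

#include <errno.h>
#include <stddef.h>

/* Hypothetical EAL-level prototype (would sit next to the other memory
 * helpers), shareable by any non-coherent platform.
 */
int rte_mem_mark_noncacheable(void *addr, size_t len);

/* Stub so the sketch compiles; a real version would use whatever upstream
 * kernel interface the mm maintainers accept.
 */
int
rte_mem_mark_noncacheable(void *addr, size_t len)
{
	(void)addr;
	(void)len;
	return -ENOTSUP;
}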
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
2024-10-20 23:39 ` Stephen Hemminger
2024-10-20 23:52 ` Stephen Hemminger
@ 2024-12-02 22:26 ` Stephen Hemminger
2024-12-02 22:30 ` Stephen Hemminger
2024-12-03 22:57 ` Stephen Hemminger
4 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-12-02 22:26 UTC (permalink / raw)
To: vanshika.shukla
Cc: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Anatoly Burakov, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:33 +0530
vanshika.shukla@nxp.com wrote:
> +#. **Linux Kernel**
> +
> + It can be obtained from `NXP's Github hosting <https://github.com/nxp-imx/linux-imx>`_.
> +
If the driver only exists for DPDK, then shouldn't it be in the DPDK kmods repo?
I looked at the GitHub version and the driver does not look very polished.
Mostly things that would be flagged during a code review:
Lots of printk()s, as if the developer was not sure.
Lots of casts which hide potential bugs.
Unsafe copy from user space.
Module license and SPDX license mismatch.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
` (2 preceding siblings ...)
2024-12-02 22:26 ` Stephen Hemminger
@ 2024-12-02 22:30 ` Stephen Hemminger
2024-12-03 22:57 ` Stephen Hemminger
4 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-12-02 22:30 UTC (permalink / raw)
To: vanshika.shukla
Cc: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Anatoly Burakov, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:33 +0530
vanshika.shukla@nxp.com wrote:
> if ((high_mac | low_mac) == 0) {
> + char *first_byte;
> +
> + ENETC_PMD_NOTICE("MAC is not available for this SI, "
> + "set random MAC");
> + mac = (uint32_t *)hw->mac.addr;
> + *mac = (uint32_t)rte_rand();
> + first_byte = (char *)mac;
> + *first_byte &= 0xfe; /* clear multicast bit */
> + *first_byte |= 0x02; /* set local assignment bit (IEEE802) */
Why do you need to reinvent rte_eth_random_addr()?
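For reference, a minimal sketch with the existing helper; it just fills a
plain six-byte array, and mapping that onto hw->mac.addr is left out since
that layout belongs to the patch:

#include <stdint.h>

#include <rte_ether.h>

/* rte_eth_random_addr() already produces a unicast, locally administered
 * address, so the open-coded cast/rand/bit-twiddling above is unnecessary.
 */
static void
set_random_mac(uint8_t mac[RTE_ETHER_ADDR_LEN])
{
	rte_eth_random_addr(mac);
}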
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
` (3 preceding siblings ...)
2024-12-02 22:30 ` Stephen Hemminger
@ 2024-12-03 22:57 ` Stephen Hemminger
4 siblings, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2024-12-03 22:57 UTC (permalink / raw)
To: vanshika.shukla
Cc: dev, Thomas Monjalon, Wathsala Vithanage, Bruce Richardson,
Gagandeep Singh, Sachin Saxena, Anatoly Burakov, Apeksha Gupta
On Fri, 18 Oct 2024 12:56:33 +0530
vanshika.shukla@nxp.com wrote:
> +
> +- **kpage_ncache Kernel Module**
> +
> + The i.MX95 platform is not IO cache coherent, and the driver depends on
> + a kernel module, kpage_ncache.ko, to mark the hugepage memory as non-cacheable.
> +
> + The module can be obtained from: `kpage_ncache <https://github.com/nxp-qoriq/dpdk-extras/tree/main/linux/kpage_ncache>`_
> +
Rather than an out-of-tree driver kludge, have you considered working with
the kernel memory developers to support this through a conventional API such as madvise or mmap?
There are lots of madvise flags, it fits with the various security modules, and it could also
work on other future non-coherent platforms.
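Purely as an illustration of that direction (not a working interface), a
sketch with a made-up advice value; MADV_NONCACHEABLE does not exist in
mainline Linux today, so this compiles to the fallback branch everywhere:

#include <stddef.h>
#include <sys/mman.h>

static int
mark_hugepage_noncacheable(void *addr, size_t len)
{
#ifdef MADV_NONCACHEABLE        /* hypothetical flag, see note above */
	return madvise(addr, len, MADV_NONCACHEABLE);
#else
	(void)addr;
	(void)len;
	return -1;              /* no upstream interface yet */
#endif
}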
^ permalink raw reply [flat|nested] 36+ messages in thread
end of thread, other threads:[~2024-12-03 22:58 UTC | newest]
Thread overview: 36+ messages
2024-10-18 7:26 [v1 00/12] ENETC4 PMD support vanshika.shukla
2024-10-18 7:26 ` [v1 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
2024-10-20 23:39 ` Stephen Hemminger
2024-10-20 23:52 ` Stephen Hemminger
2024-12-02 22:26 ` Stephen Hemminger
2024-12-02 22:30 ` Stephen Hemminger
2024-12-03 22:57 ` Stephen Hemminger
2024-10-18 7:26 ` [v1 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
2024-10-20 23:40 ` Stephen Hemminger
2024-10-18 7:26 ` [v1 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
2024-10-21 0:06 ` Stephen Hemminger
2024-10-18 7:26 ` [v1 04/12] net/enetc: Add TX checksum offload and RX checksum validation vanshika.shukla
2024-10-18 7:26 ` [v1 05/12] net/enetc: Add basic statistics vanshika.shukla
2024-10-18 7:26 ` [v1 06/12] net/enetc: Add packet type parsing support vanshika.shukla
2024-10-18 7:26 ` [v1 07/12] net/enetc: Add support for multiple queues with RSS vanshika.shukla
2024-10-18 7:26 ` [v1 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup vanshika.shukla
2024-10-18 7:26 ` [v1 09/12] net/enetc: Add multicast and promiscuous mode support vanshika.shukla
2024-10-18 7:26 ` [v1 10/12] net/enetc: Add link speed and status support vanshika.shukla
2024-10-18 7:26 ` [v1 11/12] net/enetc: Add link status notification support vanshika.shukla
2024-10-18 7:26 ` [v1 12/12] net/enetc: Add MAC and VLAN filter support vanshika.shukla
2024-10-23 6:24 ` [v2 00/12] ENETC4 PMD support vanshika.shukla
2024-10-23 6:24 ` [v2 01/12] net/enetc: Add initial ENETC4 PMD driver support vanshika.shukla
2024-11-07 19:39 ` Stephen Hemminger
2024-10-23 6:24 ` [v2 02/12] net/enetc: Add RX and TX queue APIs for ENETC4 PMD vanshika.shukla
2024-10-23 6:24 ` [v2 03/12] net/enetc: Optimize ENETC4 data path vanshika.shukla
2024-10-23 6:24 ` [v2 04/12] net/enetc: Add TX checksum offload and RX checksum validation vanshika.shukla
2024-10-23 6:24 ` [v2 05/12] net/enetc: Add basic statistics vanshika.shukla
2024-10-23 6:24 ` [v2 06/12] net/enetc: Add packet type parsing support vanshika.shukla
2024-10-23 6:24 ` [v2 07/12] net/enetc: Add support for multiple queues with RSS vanshika.shukla
2024-10-23 6:24 ` [v2 08/12] net/enetc: Add VF to PF messaging support and primary MAC setup vanshika.shukla
2024-10-23 6:24 ` [v2 09/12] net/enetc: Add multicast and promiscuous mode support vanshika.shukla
2024-10-23 6:24 ` [v2 10/12] net/enetc: Add link speed and status support vanshika.shukla
2024-10-23 6:24 ` [v2 11/12] net/enetc: Add link status notification support vanshika.shukla
2024-10-23 6:24 ` [v2 12/12] net/enetc: Add MAC and VLAN filter support vanshika.shukla
2024-11-07 10:24 ` [v2 00/12] ENETC4 PMD support Ferruh Yigit
2024-11-07 19:29 ` Stephen Hemminger