* [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver
@ 2021-10-01 11:42 Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 1/5] net/enetfec: introduce " Apeksha Gupta
` (5 more replies)
0 siblings, 6 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-01 11:42 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC
(Fast Ethernet Controller) is a network poll mode driver for
the built-in NIC found in the NXP i.MX 8M Mini SoC.
Patch 1 gives an overview of the enetfec driver and adds the probe and
remove functions.
Patch 2 adds the UIO interface so that user space can communicate directly
with the UIO-based hardware device. The UIO interface mmaps the Control and
Status Registers (CSR) and the buffer descriptor (BD) memory, allocated in
the kernel, into DPDK; this gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support, along with basic features such as
promiscuous mode and basic statistics.
Patch 5 adds checksum and VLAN features.
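For orientation, below is a minimal sketch (not taken from this series) of
how an application is expected to instantiate the driver through the vdev
bus once the series is applied; the port name lookup, single queue pair and
error handling are illustrative assumptions.

#include <string.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	/* launched e.g. as: ./app --vdev=net_enetfec */
	struct rte_eth_conf conf;
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* the vdev bus probes net_enetfec during rte_eal_init() */
	if (rte_eth_dev_get_port_by_name("net_enetfec", &port_id) != 0)
		return -1;

	memset(&conf, 0, sizeof(conf));
	/* one Rx/Tx queue pair; the queue and burst paths come in patches 3-4 */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}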
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add enqueue and dequeue support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 133 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 4 +
drivers/net/enetfec/enet_ethdev.c | 761 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 167 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 +
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 496 ++++++++++++++++
drivers/net/enetfec/enet_uio.c | 273 +++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 11 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 2082 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
* [dpdk-dev] [PATCH v4 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
@ 2021-10-01 11:42 ` Apeksha Gupta
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 2/5] net/enetfec: add UIO support Apeksha Gupta
` (4 subsequent siblings)
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-01 11:42 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton of the enetfec driver with the probe and
remove functions.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
MAINTAINERS | 7 ++
doc/guides/nics/enetfec.rst | 129 +++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 4 +
drivers/net/enetfec/enet_ethdev.c | 85 +++++++++++++
drivers/net/enetfec/enet_ethdev.h | 165 +++++++++++++++++++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 +++++
drivers/net/enetfec/meson.build | 11 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 446 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 30bf77b79a..9cb85d0d00 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -869,6 +869,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..36b6e34c6a
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,129 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the built-in NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at the NXP official website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into the DPDK.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. ENETFEC is a hardware
+programmable packet forwarding engine that provides a high-performance
+Ethernet interface. It has a single 1 Gbps Ethernet interface with an
+RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ ENETFEC | |
+ PMD | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ =====================================================
+ Hardware
+ +----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in user
+space. 'fec-uio' is the kernel driver. The MAC and PHY are the hardware
+blocks. The ENETFEC PMD uses the standard UIO interface to access the
+kernel for PHY initialisation and for mapping the allocated register and
+buffer descriptor memory into DPDK, which gives access to non-cacheable
+memory for the buffer descriptors. net_enetfec is the logical Ethernet
+interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device with the virtual device (vdev) bus.
+- The RTE framework scans the bus and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then used for
+  packet processing.
+- Tx is done first, followed by Rx, via the logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Below are the main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Code Aurora hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+.. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Root file system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC depends on
+   the **rte_bus_vdev** library and it is mandatory to use ``--vdev`` with the value
+   ``net_enetfec`` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ad7c1afec0..e41d033a95 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -91,6 +91,10 @@ New Features
Added command-line options to specify total number of processes and
current process ID. Each process owns subset of Rx and Tx queues.
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
Removed Items
-------------
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..8a74fb5bf2
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+#define ENETFEC_CDEV_INVALID_FD -1
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Closing sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..2d2e43eb26
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+/*
+ * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
+ */
+#define ENETFEC_MAX_Q 3
+
+#define ETHER_ADDR_LEN 6
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX 0x00
+#define FULL_DUPLEX 0x01
+#define UNKNOWN_DUPLEX 0xff
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ETH_ALEN RTE_ETHER_ADDR_LEN
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
+
+#define __iomem
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+/* Required types */
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+/* Buffer descriptors of FEC are used to track the ring buffers. The buffer
+ * descriptor base is x_bd_base and the currently available buffer is x_cur,
+ * where x is rx or tx. The buffer that is being sent by the controller is
+ * tracked by dirty_tx.
+ * tx_cur and dirty_tx are equal under both the completely full and the
+ * completely empty conditions; the actual condition is determined by the
+ * empty and ready bits.
+ */
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+ bool bufdesc_ex;
+ unsigned int tx_align;
+ unsigned int rx_align;
+ int full_duplex;
+ unsigned int phy_speed;
+ uint32_t quirks;
+ int flag_csum;
+ int flag_pause;
+ int flag_wol;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ int hw_ts_rx_en;
+ int hw_ts_tx_en;
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+ struct bufdesc_prop *bd);
+int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
+ struct rte_mbuf *mbuf);
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..79dca58dea
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c',
+ 'enet_uio.c',
+ 'enet_rxtx.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..04be346509 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -19,6 +19,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
* [dpdk-dev] [PATCH v4 2/5] net/enetfec: add UIO support
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-10-01 11:42 ` Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (3 subsequent siblings)
5 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-01 11:42 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
The fec-uio driver is implemented in the kernel. The enetfec PMD uses
the UIO interface to interact with the "fec-uio" kernel driver for PHY
initialisation and for mapping the allocated register and BD memory from
the kernel into DPDK, which gives access to non-cacheable memory for
the BDs.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
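For background, here is a condensed sketch of the standard Linux UIO
convention that the PMD relies on (not code from this patch): map sizes are
read from sysfs, and map N of /dev/uioX is selected by passing
N * page_size as the mmap offset. The uio0 index, fixed paths and minimal
error handling are illustrative assumptions.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>

static void *
uio_map(int uio_fd, int map_id, size_t *len)
{
	char path[128];
	FILE *f;
	unsigned long size;

	/* map size is exported at /sys/class/uio/uio0/maps/mapN/size (hex) */
	snprintf(path, sizeof(path),
		 "/sys/class/uio/uio0/maps/map%d/size", map_id);
	f = fopen(path, "r");
	if (f == NULL)
		return NULL;
	if (fscanf(f, "%lx", &size) != 1) {
		fclose(f);
		return NULL;
	}
	fclose(f);

	*len = size;
	/* the mmap offset selects which map is exposed: 0 -> registers, 1 -> BDs */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    uio_fd, (off_t)map_id * sysconf(_SC_PAGESIZE));
}

int
main(void)
{
	size_t reg_len, bd_len;
	int fd = open("/dev/uio0", O_RDWR);

	if (fd < 0)
		return 1;
	void *regs = uio_map(fd, 0, &reg_len);	/* CSR space */
	void *bds = uio_map(fd, 1, &bd_len);	/* buffer descriptor memory */
	printf("regs=%p (%zu bytes), bds=%p (%zu bytes)\n",
	       regs, reg_len, bds, bd_len);
	return 0;
}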
drivers/net/enetfec/enet_ethdev.c | 236 ++++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 2 +
drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
drivers/net/enetfec/enet_uio.c | 273 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
5 files changed, 681 insertions(+)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8a74fb5bf2..406a8db7f3 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,16 +13,221 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
#define ENETFEC_CDEV_INVALID_FD -1
+#define BIT(nr) (1u << (nr))
+
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS BIT(1)
+#define ENETFEC_RACC_PRODIS BIT(2)
+#define ENETFEC_RACC_SHIFT16 BIT(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE BIT(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_QUEUES 6
+
+uint32_t e_cntl;
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t temp_mac[2];
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+
+ /* default mac address */
+ struct rte_ether_addr addr = {
+ .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
+ uint32_t val;
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+ * so need to reconfigure it.
+ */
+ memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
+ rte_write32(rte_cpu_to_be_32(temp_mac[0]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ rte_write32(rte_cpu_to_be_32(temp_mac[1]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= BIT(6);
+ ecntl |= BIT(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+/* ENETFEC enable function.
+ * @param[in] base ENETFEC base address
+ */
+void
+enetfec_enable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) | e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+void
+enetfec_disable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep->hw_baseaddr_v);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
@@ -33,6 +238,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -46,6 +253,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 2d2e43eb26..e9f58d1c83 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -161,5 +161,7 @@ struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
struct bufdesc_prop *bd);
int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
struct rte_mbuf *mbuf);
+void enetfec_enable(void *base);
+void enetfec_disable(void *base);
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* The i.MX 8 ENETFEC IP version added the feature to generate delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized clocks (250 MHz) to
+ * generate a delay of 2 ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..074fe0ae63
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,273 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd < 0) {
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+ return fd;
+ }
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows a maximum of 5 different mappings for
+ * each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address expressed in hexadecimal (base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract (dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ sscanf(dir->d_name + strlen("uio"), "%d", &uio_minor_number);
+ /*
+ * Open file uioX/name and read first line which contains
+ * the name for the device. Based on the name check if this
+ * UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number = uio_minor_number;
+ ENETFEC_PMD_INFO("Detected enetfec device uio name: %s", uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..4a031d3f46
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(void);
--
2.17.1
* [dpdk-dev] [PATCH v4 3/5] net/enetfec: support queue configuration
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-10-01 11:42 ` Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
` (2 subsequent siblings)
5 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-01 11:42 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception, the respective BD ring status bit is set,
which is then used for packet processing.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
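For context, here is a rough application-side sketch of driving the queue
setup added here (not code from this patch); port/queue index 0, the
512-entry rings and the default NULL queue configs are illustrative
assumptions.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

int
setup_queues(uint16_t port_id, struct rte_mempool *mb_pool)
{
	int ret;

	/* one Rx queue backed by the mbuf pool; the PMD fills its BD ring
	 * from this pool and programs ENETFEC_RD_START/ENETFEC_MRB_SIZE
	 */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;

	/* one Tx queue; deferred start is rejected by the PMD */
	return rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
}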
drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++++++-
1 file changed, 229 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 406a8db7f3..9637aa60dd 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -45,6 +45,19 @@
uint32_t e_cntl;
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_CHECKSUM;
+
+static uint64_t dev_tx_offloads_sup =
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -213,10 +226,225 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed\n");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return -ENOMEM;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
--
2.17.1
* [dpdk-dev] [PATCH v4 4/5] net/enetfec: add enqueue and dequeue support
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (2 preceding siblings ...)
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-10-01 11:42 ` Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 5/5] net/enetfec: add features Apeksha Gupta
2021-10-17 10:49 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
5 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-01 11:42 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. Loopback mode is also added; the compile-time flag 'ENETFEC_LOOPBACK'
is used to enable this feature and it is disabled by default. Basic
features such as promiscuous mode and basic statistics are also added.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
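For context, a minimal forwarding-loop sketch exercising the burst,
promiscuous and stats paths added here (not code from this patch); the
port id, queue 0, burst size and iteration count are illustrative
assumptions.

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

void
fwd_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	struct rte_eth_stats stats;
	uint16_t nb_rx, nb_tx, i;
	int iter;

	/* sets promiscuous mode in the ENETFEC receive control register */
	rte_eth_promiscuous_enable(port_id);

	for (iter = 0; iter < 1000000; iter++) {
		nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
		/* send the received packets back out of the same port */
		nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
		for (i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);	/* drop what Tx did not accept */
	}

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("rx %" PRIu64 " tx %" PRIu64 " rx-errors %" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.ierrors);
}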
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 197 ++++++++++++
drivers/net/enetfec/enet_rxtx.c | 445 +++++++++++++++++++++++++++
4 files changed, 646 insertions(+)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 36b6e34c6a..6c4e23379f 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -80,6 +80,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..7e0fb148ac 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Basic stats = Y
+Promiscuous mode = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 9637aa60dd..6311f1b114 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_QUEUES 6
uint32_t e_cntl;
@@ -179,6 +181,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
{
@@ -192,6 +228,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -226,6 +264,110 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(__rte_unused struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ if (dev == NULL) {
+ ENETFEC_PMD_ERR("Invalid device in link_update.\n");
+ return 0;
+ }
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ if (stats == NULL)
+ return -1;
+
+ memset(stats, 0, sizeof(struct rte_eth_stats));
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -237,6 +379,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->tx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -442,6 +596,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -468,6 +628,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -510,6 +671,27 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ macaddr.addr_bytes[0] = 1;
+ macaddr.addr_bytes[1] = 1;
+ macaddr.addr_bytes[2] = 1;
+ macaddr.addr_bytes[3] = 1;
+ macaddr.addr_bytes[4] = 1;
+ macaddr.addr_bytes[5] = 1;
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -518,6 +700,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -525,6 +709,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -532,11 +718,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Closing sw device");
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..445fa97e77
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK 0
+#define ENETFEC_DUMP 0
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("TX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = txq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == txq->bd.cur ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ txq->tx_mbuf[index]);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ index++;
+ } while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("RX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = rxq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == rxq->bd.cur ? 'S' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ rxq->rx_mbuf[index]);
+ rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+ rxq->rx_mbuf[index]->pkt_len);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ index++;
+ } while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static volatile bool lb_quit;
+
+static void fec_signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+ printf("\n\n %s: Signal %d received, preparing to exit...\n",
+ __func__, signum);
+ lb_quit = true;
+ }
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+ struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len = 0;
+ int index_r = 0, index_t = 0;
+ u8 *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ unsigned int i;
+ struct enetfec_private *fep;
+ struct enetfec_priv_tx_q *txq;
+ fep = rxq->fep->dev->data->dev_private;
+ txq = fep->tx_queues[0];
+
+ pool = rxq->pool;
+ rx_bdp = rxq->bd.cur;
+ tx_bdp = txq->bd.cur;
+
+ signal(SIGTSTP, fec_signal_handler);
+ while (!lb_quit) {
+chk_again:
+ status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+ if (status & RX_BD_EMPTY) {
+ if (!lb_quit)
+ goto chk_again;
+ rxq->bd.cur = rx_bdp;
+ txq->bd.cur = tx_bdp;
+ return;
+ }
+
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ ENETFEC_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENETFEC_PMD_ERR("rcv is not +last\n");
+ }
+ /* CRC Error */
+ if (status & RX_BD_CR)
+ ENETFEC_PMD_ERR("rx_crc_errors\n");
+
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_PMD_ERR("rx_frame_error\n");
+ mbuf = NULL;
+ goto rx_processing_done;
+ }
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(!new_mbuf)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Process the incoming frame. */
+ pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+ /* shows data with respect to the data_off field. */
+ index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index_r];
+
+ /* adjust pkt_len */
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+ if (rxq->fep->quirks & QUIRK_RACC)
+ rte_pktmbuf_adj(mbuf, 2);
+
+ /* Replace Buffer in BD */
+ rxq->rx_mbuf[index_r] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+
+ /* TX begins: First clean the ring then process packet */
+ index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index_t]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+ txq->tx_mbuf[index_t] = NULL;
+ }
+
+ if (mbuf == NULL)
+ continue;
+
+ /* Fill in a Tx ring entry */
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ pkt_len = rte_pktmbuf_pkt_len(mbuf);
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+
+ for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &tx_bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+
+ /* Save mbuf pointer to clean later */
+ txq->tx_mbuf[index_t] = mbuf;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+ }
+}
+#endif
+
+/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
+ * When update through the ring, just set the empty indicator.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+ enetfec_lb_rxtx(rxq1);
+#endif
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENETFEC_PMD_ERR("rcv is not +last\n");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_PMD_ERR("rx_crc_errors\n");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_PMD_ERR("rx_frame_error\n");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ u8 *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_PMD_DEBUG("SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return pkt_transmitted;
+}
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v4 5/5] net/enetfec: add features
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (3 preceding siblings ...)
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
@ 2021-10-01 11:42 ` Apeksha Gupta
2021-10-17 10:49 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
5 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-01 11:42 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offloads to the enetfec network
poll mode driver.
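For reference, a small sketch (assumptions: standard DPDK 21.11-era mbuf flag
names, single-segment packets) of how an application could consume these
offloads on the receive side:

#include <rte_mbuf.h>

/* Inspect checksum and VLAN information reported by the PMD on one mbuf. */
static void
handle_rx_offloads(struct rte_mbuf *m)
{
	if (m->ol_flags & PKT_RX_IP_CKSUM_BAD) {
		/* Hardware flagged an IP checksum error; drop the packet. */
		rte_pktmbuf_free(m);
		return;
	}

	if (m->ol_flags & PKT_RX_VLAN_STRIPPED) {
		/* The tag was stripped from the frame; the TCI is kept in
		 * the mbuf metadata by the driver.
		 */
		uint16_t vlan_id = m->vlan_tci & 0x0fff;
		(void)vlan_id; /* application-specific handling */
	}
}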
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 ++
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 17 ++++++++-
drivers/net/enetfec/enet_regs.h | 10 ++++++
drivers/net/enetfec/enet_rxtx.c | 53 +++++++++++++++++++++++++++-
5 files changed, 83 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 6c4e23379f..7f4560e5ce 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -82,6 +82,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 7e0fb148ac..3e9cc90b9f 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -6,6 +6,9 @@
[Features]
Basic stats = Y
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 6311f1b114..35bb4e0a98 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -106,7 +106,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -611,9 +615,20 @@ static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
+ if (fep->quirks & QUIRK_VLAN)
+ /* enable hw VLAN support */
+ rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+ if (fep->quirks & QUIRK_CSUM) {
+ /* enable hw accelerator */
+ rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+ }
rte_eth_dev_probing_finish(dev);
return 0;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 445fa97e77..fdd3343589 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -245,9 +245,14 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
#if ENETFEC_LOOPBACK
@@ -302,6 +307,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -311,6 +317,47 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, ETH_ALEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* checksum already verified by hardware */
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+ } else {
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -411,6 +458,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (4 preceding siblings ...)
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-10-17 10:49 ` Apeksha Gupta
5 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-17 10:49 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
Hi Ferruh,
Do you have any further comments on this series?
Regards,
Apeksha
> -----Original Message-----
> From: Apeksha Gupta <apeksha.gupta@nxp.com>
> Sent: Friday, October 1, 2021 5:12 PM
> To: david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru;
> ferruh.yigit@intel.com
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; Apeksha Gupta <apeksha.gupta@nxp.com>
> Subject: [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver
>
> This patch series introduce the enetfec driver, ENETFEC
> (Fast Ethernet Controller) is a network poll mode driver for
> the inbuilt NIC found in the NXP i.MX 8M Mini SoC.
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO interface so that user space directly communicate with
> a UIO based hardware device. UIO interface mmap the Control and Status
> Registers (CSR) & BD memory in DPDK which is allocated in kernel and this
> gives access to non-cacheble memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
> Patch 5 adds checksum and VLAN features.
>
> Apeksha Gupta (5):
> net/enetfec: introduce NXP ENETFEC driver
> net/enetfec: add UIO support
> net/enetfec: support queue configuration
> net/enetfec: add enqueue and dequeue support
> net/enetfec: add features
>
> MAINTAINERS | 7 +
> doc/guides/nics/enetfec.rst | 133 +++++
> doc/guides/nics/features/enetfec.ini | 14 +
> doc/guides/nics/index.rst | 1 +
> doc/guides/rel_notes/release_21_11.rst | 4 +
> drivers/net/enetfec/enet_ethdev.c | 761 +++++++++++++++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 167 ++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 +
> drivers/net/enetfec/enet_regs.h | 116 ++++
> drivers/net/enetfec/enet_rxtx.c | 496 ++++++++++++++++
> drivers/net/enetfec/enet_uio.c | 273 +++++++++
> drivers/net/enetfec/enet_uio.h | 64 +++
> drivers/net/enetfec/meson.build | 11 +
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 1 +
> 15 files changed, 2082 insertions(+)
> create mode 100644 doc/guides/nics/enetfec.rst
> create mode 100644 doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c
> create mode 100644 drivers/net/enetfec/enet_ethdev.h
> create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/enet_regs.h
> create mode 100644 drivers/net/enetfec/enet_rxtx.c
> create mode 100644 drivers/net/enetfec/enet_uio.c
> create mode 100644 drivers/net/enetfec/enet_uio.h
> create mode 100644 drivers/net/enetfec/meson.build
> create mode 100644 drivers/net/enetfec/version.map
>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v5 0/5] drivers/net: add NXP ENETFEC driver
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-10-19 18:39 ` Apeksha Gupta
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 1/5] net/enetfec: introduce " Apeksha Gupta
` (4 more replies)
0 siblings, 5 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-19 18:39 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC
(Fast Ethernet Controller) is a network poll mode driver for
the inbuilt NIC found in the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 designs the UIO interface so that user space can communicate
directly with a UIO based hardware device. The UIO interface mmaps the
Control and Status Registers (CSR) and BD memory, allocated in the kernel,
into DPDK, which gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support, along with some basic features
such as promiscuous mode and basic stats.
Patch 5 adds checksum and VLAN features.
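As a quick usage sketch (not part of this series; the application name and
error handling are illustrative), the device is pulled in by passing the vdev
argument to the EAL:

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	/* Typically launched as: ./app --vdev=net_enetfec ... */
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	if (rte_eth_dev_count_avail() == 0)
		rte_exit(EXIT_FAILURE,
			 "No port found; was --vdev=net_enetfec passed?\n");

	/* Usual ethdev configure/queue setup/start calls follow here. */
	return 0;
}

dpdk-testpmd works the same way, e.g. by adding --vdev=net_enetfec to its
EAL arguments.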
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add enqueue and dequeue support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 133 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 4 +
drivers/net/enetfec/enet_ethdev.c | 761 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 181 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 +
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 496 ++++++++++++++++
drivers/net/enetfec/enet_uio.c | 278 +++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 11 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 2101 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v5 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
@ 2021-10-19 18:39 ` Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 2/5] net/enetfec: add UIO support Apeksha Gupta
` (3 subsequent siblings)
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-19 18:39 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton of the enetfec driver with the probe function.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 129 ++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 4 +
drivers/net/enetfec/enet_ethdev.c | 85 ++++++++++++
drivers/net/enetfec/enet_ethdev.h | 179 +++++++++++++++++++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 +++++
drivers/net/enetfec/meson.build | 11 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 460 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 8dceb6c0e0..db2df484d0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -876,6 +876,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..36b6e34c6a
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,129 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into the DPDK.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC is a hardware
+programmable packet forwarding engine that provides a high performance
+Ethernet interface. It has a single 1 Gbps Ethernet interface with an
+RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ ENETFEC | |
+ PMD | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ =====================================================
+ Hardware
+ +----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the
+hardware blocks. The ENETFEC PMD uses the standard UIO interface to
+access the kernel for PHY initialisation and for mapping the allocated
+register and buffer descriptor memory into DPDK, which gives access to
+non-cacheable memory for the buffer descriptors. net_enetfec is the
+logical Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device with the virtual device driver.
+- The RTE framework scans and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then used for
+ packet processing.
+- Tx is then done first, followed by Rx, via the logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+The following are the main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+.. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Root file system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC depends on
+ the **rte_bus_vdev** library and it is mandatory to use `--vdev` with the value
+ `net_enetfec` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multi-queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3362c52a73..e964838967 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,6 +20,10 @@ DPDK Release 21.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_21_11.html
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
New Features
------------
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..8a74fb5bf2
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+#define ENETFEC_CDEV_INVALID_FD -1
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Closing sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..c674dfc782
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+/*
+ * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
+ */
+#define ENETFEC_MAX_Q 3
+
+#define ETHER_ADDR_LEN 6
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX 0x00
+#define FULL_DUPLEX 0x01
+#define UNKNOWN_DUPLEX 0xff
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ETH_ALEN RTE_ETHER_ADDR_LEN
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
+
+#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+/* Required types */
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+/* Buffer descriptors of FEC are used to track the ring buffers. The
+ * buffer descriptor base is x_bd_base; the currently available buffer is
+ * x_cur, where x is rx or tx. The buffer currently being sent by the
+ * controller is tracked by dirty_tx.
+ * tx_cur and dirty_tx are equal in both the completely full and the
+ * completely empty conditions; the actual state is determined by the
+ * empty and ready bits.
+ */
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+ bool bufdesc_ex;
+ unsigned int tx_align;
+ unsigned int rx_align;
+ int full_duplex;
+ unsigned int phy_speed;
+ uint32_t quirks;
+ int flag_csum;
+ int flag_pause;
+ int flag_wol;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ int hw_ts_rx_en;
+ int hw_ts_tx_en;
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+ struct bufdesc_prop *bd);
+int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
+ struct rte_mbuf *mbuf);
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..79dca58dea
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c',
+ 'enet_uio.c',
+ 'enet_rxtx.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 24ad121fe4..ac294d8507 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -18,6 +18,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v5 2/5] net/enetfec: add UIO support
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-10-19 18:40 ` Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (2 subsequent siblings)
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-19 18:40 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
The fec-uio driver is implemented in the kernel. The enetfec PMD uses
the UIO interface to interact with the "fec-uio" kernel driver for PHY
initialisation and for mapping the register and BD memory allocated in
the kernel into DPDK, which gives access to non-cacheable memory for
the BDs.
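For context, the userspace side of a UIO mapping boils down to opening the
UIO device node and mmap()ing its regions. A simplified sketch follows; the
device node name, map index and size are placeholders here, while the driver
itself discovers them through the fec-uio sysfs entries:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map one region of a UIO device into the process; NULL on failure. */
static void *
map_uio_region(const char *uio_dev, size_t map_size, int map_index)
{
	void *addr;
	int fd = open(uio_dev, O_RDWR);

	if (fd < 0) {
		perror("open uio device");
		return NULL;
	}

	/* Map N of a UIO device is selected via an offset of N pages. */
	addr = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, map_index * sysconf(_SC_PAGESIZE));
	close(fd);

	return addr == MAP_FAILED ? NULL : addr;
}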
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 236 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 2 +
drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
drivers/net/enetfec/enet_uio.c | 278 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
5 files changed, 686 insertions(+)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8a74fb5bf2..406a8db7f3 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,16 +13,221 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
#define ENETFEC_CDEV_INVALID_FD -1
+#define BIT(nr) (1u << (nr))
+
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS BIT(1)
+#define ENETFEC_RACC_PRODIS BIT(2)
+#define ENETFEC_RACC_SHIFT16 BIT(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE BIT(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_QUEUES 6
+
+uint32_t e_cntl;
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t temp_mac[2];
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+
+ /* default mac address */
+ struct rte_ether_addr addr = {
+ .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
+ uint32_t val;
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+ * so need to reconfigure it.
+ */
+ memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
+ rte_write32(rte_cpu_to_be_32(temp_mac[0]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ rte_write32(rte_cpu_to_be_32(temp_mac[1]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= BIT(6);
+ ecntl |= BIT(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+/* ENETFEC enable function.
+ * @param[in] base ENETFEC base address
+ */
+void
+enetfec_enable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) | e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+void
+enetfec_disable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep->hw_baseaddr_v);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
@@ -33,6 +238,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -46,6 +253,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index c674dfc782..312e0424e5 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -175,5 +175,7 @@ struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
struct bufdesc_prop *bd);
int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
struct rte_mbuf *mbuf);
+void enetfec_enable(void *base);
+void enetfec_disable(void *base);
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* The i.MX8 ENETFEC IP version adds support for generating a delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized 250MHz clocks to
+ * generate a delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..c939b4736b
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd <= 0) {
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+ return -1;
+ }
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ close(fd);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows a maximum of 5 different mappings for
+ * each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address expressed in hex (base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret <= 0)
+ ENETFEC_PMD_ERR("Error: could not find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..4a031d3f46
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of a UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for a UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(void);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v5 3/5] net/enetfec: support queue configuration
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-10-19 18:40 ` Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-19 18:40 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception, the corresponding BD ring status bit is set,
which is then used for packet processing.
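For illustration, the sketch below shows how an application might exercise
these queue setup hooks through the generic ethdev API. It is only a usage
sketch, not part of this patch: the helper name, port id, ring size of 256
and the mempool argument are all assumptions.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical helper: configure 'port_id' with one Rx and one Tx queue.
 * The PMD clamps the descriptor counts to MAX_RX/TX_BD_RING_SIZE internally.
 */
static int
setup_single_queue(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf = { 0 };
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 256,
			rte_eth_dev_socket_id(port_id), NULL, mb_pool);
	if (ret < 0)
		return ret;

	return rte_eth_tx_queue_setup(port_id, 0, 256,
			rte_eth_dev_socket_id(port_id), NULL);
}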
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++++++-
1 file changed, 229 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 406a8db7f3..0ff93363c7 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -45,6 +45,19 @@
uint32_t e_cntl;
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_CHECKSUM;
+
+static uint64_t dev_tx_offloads_sup =
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -213,10 +226,225 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed\n");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return -ENOMEM;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v5 4/5] net/enetfec: add enqueue and dequeue support
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
` (2 preceding siblings ...)
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-10-19 18:40 ` Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-19 18:40 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. Loopback mode is also added; the compile-time flag 'ENETFEC_LOOPBACK'
is used to enable this feature. By default loopback mode is disabled.
Basic features such as promiscuous mode and basic stats are also added.
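For illustration, a minimal sketch of the burst path these Rx/Tx hooks plug
into, assuming a port that is already configured and started; the port id,
queue id 0 and BURST_SIZE are illustrative values, not part of this patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* illustrative burst size */

/* Hypothetical forwarding loop; on this PMD rte_eth_rx_burst() and
 * rte_eth_tx_burst() resolve to enetfec_recv_pkts() and enetfec_xmit_pkts().
 */
static void
forward_loop(uint16_t port_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
		if (nb_rx == 0)
			continue;

		nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

		/* free any mbufs the Tx ring could not accept */
		for (i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);
	}
}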
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 197 ++++++++++++
drivers/net/enetfec/enet_rxtx.c | 445 +++++++++++++++++++++++++++
4 files changed, 646 insertions(+)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 36b6e34c6a..6c4e23379f 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -80,6 +80,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..7e0fb148ac 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Basic stats = Y
+Promiscuous mode = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 0ff93363c7..4419952443 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_QUEUES 6
uint32_t e_cntl;
@@ -179,6 +181,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
{
@@ -192,6 +228,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -226,6 +264,110 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ if (dev == NULL) {
+ ENETFEC_PMD_ERR("Invalid device in link_update.\n");
+ return 0;
+ }
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ if (stats == NULL)
+ return -1;
+
+ memset(stats, 0, sizeof(struct rte_eth_stats));
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -237,6 +379,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->tx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -442,6 +596,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -468,6 +628,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -510,6 +671,27 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ macaddr.addr_bytes[0] = 1;
+ macaddr.addr_bytes[1] = 1;
+ macaddr.addr_bytes[2] = 1;
+ macaddr.addr_bytes[3] = 1;
+ macaddr.addr_bytes[4] = 1;
+ macaddr.addr_bytes[5] = 1;
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -518,6 +700,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -525,6 +709,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -532,11 +718,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Closing sw device");
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..445fa97e77
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK 0
+#define ENETFEC_DUMP 0
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("TX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = txq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == txq->bd.cur ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ txq->tx_mbuf[index]);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ index++;
+ } while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("RX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = rxq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == rxq->bd.cur ? 'S' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ rxq->rx_mbuf[index]);
+ rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+ rxq->rx_mbuf[index]->pkt_len);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ index++;
+ } while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static volatile bool lb_quit;
+
+static void fec_signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+ printf("\n\n %s: Signal %d received, preparing to exit...\n",
+ __func__, signum);
+ lb_quit = true;
+ }
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+ struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len = 0;
+ int index_r = 0, index_t = 0;
+ u8 *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ unsigned int i;
+ struct enetfec_private *fep;
+ struct enetfec_priv_tx_q *txq;
+ fep = rxq->fep->dev->data->dev_private;
+ txq = fep->tx_queues[0];
+
+ pool = rxq->pool;
+ rx_bdp = rxq->bd.cur;
+ tx_bdp = txq->bd.cur;
+
+ signal(SIGTSTP, fec_signal_handler);
+ while (!lb_quit) {
+chk_again:
+ status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+ if (status & RX_BD_EMPTY) {
+ if (!lb_quit)
+ goto chk_again;
+ rxq->bd.cur = rx_bdp;
+ txq->bd.cur = tx_bdp;
+ return;
+ }
+
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ ENETFEC_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENETFEC_PMD_ERR("rcv is not +last\n");
+ }
+ /* CRC Error */
+ if (status & RX_BD_CR)
+ ENETFEC_PMD_ERR("rx_crc_errors\n");
+
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_PMD_ERR("rx_frame_error\n");
+ mbuf = NULL;
+ goto rx_processing_done;
+ }
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(!new_mbuf)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Process the incoming frame. */
+ pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+ /* shows data with respect to the data_off field. */
+ index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index_r];
+
+ /* adjust pkt_len */
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+ if (rxq->fep->quirks & QUIRK_RACC)
+ rte_pktmbuf_adj(mbuf, 2);
+
+ /* Replace Buffer in BD */
+ rxq->rx_mbuf[index_r] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+
+ /* TX begins: First clean the ring then process packet */
+ index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index_t]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+ txq->tx_mbuf[index_t] = NULL;
+ }
+
+ if (mbuf == NULL)
+ continue;
+
+ /* Fill in a Tx ring entry */
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ pkt_len = rte_pktmbuf_pkt_len(mbuf);
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+
+ for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &tx_bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+
+ /* Save mbuf pointer to clean later */
+ txq->tx_mbuf[index_t] = mbuf;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+ }
+}
+#endif
+
+/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
+ * When update through the ring, just set the empty indicator.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+ enetfec_lb_rxtx(rxq1);
+#endif
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENETFEC_PMD_ERR("rcv is not +last\n");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_PMD_ERR("rx_crc_errors\n");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_PMD_ERR("rx_frame_error\n");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ u8 *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_PMD_DEBUG("SG not supported");
+ return pkt_transmitted;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the updates to bdp and tx_mbuf are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return pkt_transmitted;
+}
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v5 5/5] net/enetfec: add features
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
` (3 preceding siblings ...)
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
@ 2021-10-19 18:40 ` Apeksha Gupta
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-19 18:40 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offload support to the enetfec
network poll mode driver.
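For illustration, a minimal sketch of how an application could request these
offloads at configure time; the helper name and the single-queue configuration
are assumptions, and the offload macros are the pre-21.11 names already used
in this series:

#include <rte_ethdev.h>

/* Hypothetical configuration requesting Rx checksum and VLAN strip offloads;
 * anything the PMD does not report in dev_info is masked out first.
 */
static int
configure_with_offloads(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	conf.rxmode.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
			DEV_RX_OFFLOAD_VLAN_STRIP) & dev_info.rx_offload_capa;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}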
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 ++
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 17 ++++++++-
drivers/net/enetfec/enet_regs.h | 10 ++++++
drivers/net/enetfec/enet_rxtx.c | 53 +++++++++++++++++++++++++++-
5 files changed, 83 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 6c4e23379f..7f4560e5ce 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -82,6 +82,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 7e0fb148ac..3e9cc90b9f 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -6,6 +6,9 @@
[Features]
Basic stats = Y
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 4419952443..c6957e16e5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -106,7 +106,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -611,9 +615,20 @@ static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
+ if (fep->quirks & QUIRK_VLAN)
+ /* enable hw VLAN support */
+ rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+ if (fep->quirks & QUIRK_CSUM) {
+ /* enable hw accelerator */
+ rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+ }
rte_eth_dev_probing_finish(dev);
return 0;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 445fa97e77..fdd3343589 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -245,9 +245,14 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
#if ENETFEC_LOOPBACK
@@ -302,6 +307,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -311,6 +317,47 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, ETH_ALEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* checksum already verified, don't check it again */
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+ } else {
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -411,6 +458,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v6 0/5] drivers/net: add NXP ENETFEC driver
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-10-21 4:46 ` Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
` (5 more replies)
0 siblings, 6 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-21 4:46 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC
(Fast Ethernet Controller) is a network poll mode driver for
the inbuilt NIC found in the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 designs the UIO interface so that user space can communicate directly
with a UIO based hardware device. The UIO interface mmaps the Control and
Status Registers (CSR) & BD memory, which are allocated in the kernel, into
DPDK; this gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support. It also adds some basic features
like promiscuous mode and basic stats.
Patch 5 adds checksum and VLAN offload features.
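For reference, a minimal user-space sketch of the UIO mapping pattern that
patch 2 relies on: map 0 exposes the CSR space and map 1 the BD memory,
selected by an offset of N * page size. The device node, sizes and helper
name below are illustrative only; the driver reads the real sizes from sysfs.

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define UIO_DEV "/dev/uio0"	/* illustrative device node */
#define UIO_PAGE_SIZE 4096

/* map 0: CSR space, map 1: buffer descriptor (BD) memory */
static int
map_uio_regions(void **csr, void **bd, size_t csr_sz, size_t bd_sz)
{
	int fd = open(UIO_DEV, O_RDWR);

	if (fd < 0)
		return -1;

	/* mapping N of a UIO device is selected by offset N * page size */
	*csr = mmap(NULL, csr_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
			fd, 0 * UIO_PAGE_SIZE);
	*bd = mmap(NULL, bd_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
			fd, 1 * UIO_PAGE_SIZE);
	close(fd);

	return (*csr == MAP_FAILED || *bd == MAP_FAILED) ? -1 : 0;
}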
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add enqueue and dequeue support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 135 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 4 +
drivers/net/enetfec/enet_ethdev.c | 761 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 181 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 +
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 496 ++++++++++++++++
drivers/net/enetfec/enet_uio.c | 278 +++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 11 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 2103 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
@ 2021-10-21 4:46 ` Apeksha Gupta
2021-10-21 5:24 ` Hemant Agrawal
` (2 more replies)
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support Apeksha Gupta
` (4 subsequent siblings)
5 siblings, 3 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-21 4:46 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton of the enetfec driver with the probe function.
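As an illustration of the shape such a skeleton takes, a minimal vdev PMD
probe/remove pair is sketched below. All names other than the rte_vdev/ethdev
API calls are placeholders, not the driver's actual symbols, and the includes
assume the DPDK 21.11 driver headers:

#include <errno.h>
#include <rte_bus_vdev.h>
#include <ethdev_driver.h>
#include <ethdev_vdev.h>

/* placeholder per-device private data */
struct example_private {
	int unused;
};

static int
pmd_example_probe(struct rte_vdev_device *vdev)
{
	struct rte_eth_dev *dev;

	dev = rte_eth_vdev_allocate(vdev, sizeof(struct example_private));
	if (dev == NULL)
		return -ENOMEM;

	/* fill dev->dev_ops, MAC address, burst functions, ... here */
	rte_eth_dev_probing_finish(dev);
	return 0;
}

static int
pmd_example_remove(struct rte_vdev_device *vdev)
{
	struct rte_eth_dev *dev;

	dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
	if (dev == NULL)
		return -ENODEV;

	return rte_eth_dev_release_port(dev);
}

static struct rte_vdev_driver pmd_example_drv = {
	.probe = pmd_example_probe,
	.remove = pmd_example_remove,
};

RTE_PMD_REGISTER_VDEV(net_example, pmd_example_drv);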
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
v6:
- Fix document build errors
---
---
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 131 ++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 4 +
drivers/net/enetfec/enet_ethdev.c | 85 ++++++++++++
drivers/net/enetfec/enet_ethdev.h | 179 +++++++++++++++++++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 +++++
drivers/net/enetfec/meson.build | 11 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 462 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 8dceb6c0e0..db2df484d0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -876,6 +876,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..dfcd032098
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,131 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at the `NXP official website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>`_.
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into the DPDK.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC (Fast Ethernet
+Controller) is a hardware-programmable packet forwarding engine that
+provides a high-performance Ethernet interface. It exposes a single
+1 Gbps Ethernet interface with an RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ .. code-block:: console
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ | |
+ | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ ====================+=========+======================
+ Hardware
+ +-----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+-------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace, while 'fec-uio' is its counterpart kernel driver. The MAC and
+PHY are the hardware blocks. The ENETFEC PMD uses the standard UIO
+interface to reach the kernel for PHY initialisation and to map the
+register and buffer descriptor memory allocated in the kernel into DPDK,
+which gives access to non-cacheable memory for the buffer descriptors.
+net_enetfec is the logical Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device on the virtual device (vdev) bus.
+- The RTE framework scans the vdev bus and invokes the ENETFEC probe function.
+- The probe function configures the basic device registers and sets up BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then used
+  for packet processing.
+- Tx is then done first, followed by Rx, via the logical interface.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+The following are the pre-requisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+ .. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Rootfile system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so the ENETFEC
+   PMD depends on the **rte_bus_vdev** library and it is mandatory to pass
+   `--vdev` with value `net_enetfec` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multi-queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3362c52a73..e964838967 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,6 +20,10 @@ DPDK Release 21.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_21_11.html
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
New Features
------------
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..8a74fb5bf2
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+#define ENETFEC_CDEV_INVALID_FD -1
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Closing sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..c674dfc782
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+#include <rte_ethdev.h>
+
+/*
+ * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
+ */
+#define ENETFEC_MAX_Q 3
+
+#define ETHER_ADDR_LEN 6
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX 0x00
+#define FULL_DUPLEX 0x01
+#define UNKNOWN_DUPLEX 0xff
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ETH_ALEN RTE_ETHER_ADDR_LEN
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
+
+#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+/* Required types */
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+/* Buffer descriptors (BDs) of the FEC are used to track the ring buffers.
+ * The BD base is x_bd_base and the next available BD is x_cur, where x is
+ * rx or tx. The last BD processed by the controller is tracked by dirty_tx.
+ * tx_cur and dirty_tx are equal both when the ring is completely full and
+ * when it is completely empty; the actual condition is determined by the
+ * empty and ready bits.
+ */
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+ bool bufdesc_ex;
+ unsigned int tx_align;
+ unsigned int rx_align;
+ int full_duplex;
+ unsigned int phy_speed;
+ uint32_t quirks;
+ int flag_csum;
+ int flag_pause;
+ int flag_wol;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ int hw_ts_rx_en;
+ int hw_ts_tx_en;
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
+static inline struct bufdesc *
+enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline struct bufdesc *
+enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+ struct bufdesc_prop *bd);
+int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
+ struct rte_mbuf *mbuf);
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..79dca58dea
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c',
+ 'enet_uio.c',
+ 'enet_rxtx.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 24ad121fe4..ac294d8507 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -18,6 +18,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-10-21 4:46 ` Apeksha Gupta
2021-10-27 14:21 ` Ferruh Yigit
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (3 subsequent siblings)
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-21 4:46 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
Implement the 'fec-uio' driver in the kernel. The enetfec PMD uses the
UIO interface to interact with the 'fec-uio' kernel driver for PHY
initialisation and for mapping the register and buffer descriptor (BD)
memory allocated in the kernel into DPDK, which gives access to
non-cacheable memory for the BDs.
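For reference, a minimal sketch of the generic UIO mapping pattern this relies
on (illustrative only, not part of the patch; the layout of map 0 for registers
and map 1 for BD memory, selected via the mmap offset, follows the fec-uio
convention described above):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void *
map_uio_region(const char *dev_path, int map_id, size_t size)
{
	void *addr;
	int fd = open(dev_path, O_RDWR);	/* e.g. "/dev/uio0" */

	if (fd < 0)
		return NULL;
	/* UIO exposes map N of a device at offset N * page size */
	addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, (off_t)map_id * sysconf(_SC_PAGESIZE));
	close(fd);	/* the mapping stays valid after close */
	return addr == MAP_FAILED ? NULL : addr;
}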
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 236 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 2 +
drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
drivers/net/enetfec/enet_uio.c | 278 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
5 files changed, 686 insertions(+)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8a74fb5bf2..406a8db7f3 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,16 +13,221 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
#define ENETFEC_CDEV_INVALID_FD -1
+#define BIT(nr) (1u << (nr))
+
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS BIT(1)
+#define ENETFEC_RACC_PRODIS BIT(2)
+#define ENETFEC_RACC_SHIFT16 BIT(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE BIT(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_QUEUES 6
+
+uint32_t e_cntl;
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t temp_mac[2];
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+
+ /* default mac address */
+ struct rte_ether_addr addr = {
+ .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
+ uint32_t val;
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+ * so need to reconfigure it.
+ */
+ memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
+ rte_write32(rte_cpu_to_be_32(temp_mac[0]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ rte_write32(rte_cpu_to_be_32(temp_mac[1]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= BIT(6);
+ ecntl |= BIT(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+/* ENETFEC enable function.
+ * @param[in] base ENETFEC base address
+ */
+void
+enetfec_enable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) | e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+void
+enetfec_disable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep->hw_baseaddr_v);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
@@ -33,6 +238,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -46,6 +253,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index c674dfc782..312e0424e5 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -175,5 +175,7 @@ struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
struct bufdesc_prop *bd);
int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
struct rte_mbuf *mbuf);
+void enetfec_enable(void *base);
+void enetfec_disable(void *base);
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized clocks (250MHz) for
+ * generating delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..c939b4736b
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd < 0) {
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+ return fd;
+ }
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows maximum 5 different mapping for
+ each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ uint64_t uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address, expressed in hexadecimal (base 16) */
+ uio_map_size = strtoul(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtoul(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ " uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret != 1)
+ ENETFEC_PMD_ERR("Error: cannot find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..4a031d3f46
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(void);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-10-21 4:46 ` Apeksha Gupta
2021-10-27 14:23 ` Ferruh Yigit
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
` (2 subsequent siblings)
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-21 4:46 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception, the respective BD ring status bit is set,
which is then used for packet processing.
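From the application side, the new ops are reached through the standard ethdev
queue setup calls; a minimal sketch (illustrative only, assuming port_id is a
probed enetfec port and mbuf_pool an existing mempool):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
setup_one_queue_pair(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
	static const struct rte_eth_conf conf;	/* all defaults */
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	/* nb_desc larger than MAX_RX/TX_BD_RING_SIZE is clamped by the PMD */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL, mbuf_pool);
	if (ret < 0)
		return ret;
	return rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
}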
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++++++-
1 file changed, 229 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 406a8db7f3..0ff93363c7 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -45,6 +45,19 @@
uint32_t e_cntl;
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_CHECKSUM;
+
+static uint64_t dev_tx_offloads_sup =
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -213,10 +226,225 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf allocation failed\n");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return -ENOMEM;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
` (2 preceding siblings ...)
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-10-21 4:46 ` Apeksha Gupta
2021-10-27 14:25 ` Ferruh Yigit
2021-10-21 4:47 ` [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features Apeksha Gupta
2021-10-27 14:15 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-21 4:46 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. Loopback mode is also added; the compile-time flag 'ENETFEC_LOOPBACK'
enables this feature and it is disabled by default. Basic features such as
promiscuous mode enable and basic statistics are also added.
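A minimal forwarding-loop sketch using the new burst ops (illustrative only;
port_id and the run flag are assumed to be managed by the application):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
fwd_loop(uint16_t port_id, volatile int *run)
{
	struct rte_mbuf *pkts[BURST_SIZE];

	while (*run) {
		uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
		uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

		/* free whatever the Tx ring could not accept */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}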
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 197 ++++++++++++
drivers/net/enetfec/enet_rxtx.c | 445 +++++++++++++++++++++++++++
4 files changed, 646 insertions(+)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index dfcd032098..47836630b6 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -82,6 +82,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..7e0fb148ac 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Basic stats = Y
+Promiscuous mode = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 0ff93363c7..4419952443 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_QUEUES 6
uint32_t e_cntl;
@@ -179,6 +181,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
{
@@ -192,6 +228,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -226,6 +264,110 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ if (dev == NULL) {
+ ENETFEC_PMD_ERR("Invalid device in link_update.\n");
+ return 0;
+ }
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ if (stats == NULL)
+ return -1;
+
+ memset(stats, 0, sizeof(struct rte_eth_stats));
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -237,6 +379,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->tx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -442,6 +596,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -468,6 +628,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -510,6 +671,27 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ macaddr.addr_bytes[0] = 1;
+ macaddr.addr_bytes[1] = 1;
+ macaddr.addr_bytes[2] = 1;
+ macaddr.addr_bytes[3] = 1;
+ macaddr.addr_bytes[4] = 1;
+ macaddr.addr_bytes[5] = 1;
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -518,6 +700,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -525,6 +709,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -532,11 +718,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Closing sw device");
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..445fa97e77
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK 0
+#define ENETFEC_DUMP 0
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("TX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = txq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == txq->bd.cur ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
+ rte_le_to_cpu_16(rte_read16(&bdp->bd_sc)),
+ rte_le_to_cpu_32(rte_read32(&bdp->bd_bufaddr)),
+ rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen)),
+ txq->tx_mbuf[index]);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ index++;
+ } while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("RX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = rxq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == rxq->bd.cur ? 'S' : ' ',
+ rte_le_to_cpu_16(rte_read16(&bdp->bd_sc)),
+ rte_le_to_cpu_32(rte_read32(&bdp->bd_bufaddr)),
+ rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen)),
+ rxq->rx_mbuf[index]);
+ rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+ rxq->rx_mbuf[index]->pkt_len);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ index++;
+ } while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static volatile bool lb_quit;
+
+static void fec_signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+ printf("\n\n %s: Signal %d received, preparing to exit...\n",
+ __func__, signum);
+ lb_quit = true;
+ }
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+ struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len = 0;
+ int index_r = 0, index_t = 0;
+ u8 *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ unsigned int i;
+ struct enetfec_private *fep;
+ struct enetfec_priv_tx_q *txq;
+ fep = rxq->fep->dev->data->dev_private;
+ txq = fep->tx_queues[0];
+
+ pool = rxq->pool;
+ rx_bdp = rxq->bd.cur;
+ tx_bdp = txq->bd.cur;
+
+ signal(SIGTSTP, fec_signal_handler);
+ while (!lb_quit) {
+chk_again:
+ status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+ if (status & RX_BD_EMPTY) {
+ if (!lb_quit)
+ goto chk_again;
+ rxq->bd.cur = rx_bdp;
+ txq->bd.cur = tx_bdp;
+ return;
+ }
+
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ ENETFEC_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENETFEC_PMD_ERR("rcv is not +last\n");
+ }
+ /* CRC Error */
+ if (status & RX_BD_CR)
+ ENETFEC_PMD_ERR("rx_crc_errors\n");
+
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_PMD_ERR("rx_frame_error\n");
+ mbuf = NULL;
+ goto rx_processing_done;
+ }
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(!new_mbuf)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Process the incoming frame. */
+ pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+ /* shows data with respect to the data_off field. */
+ index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index_r];
+
+ /* adjust pkt_len */
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+ if (rxq->fep->quirks & QUIRK_RACC)
+ rte_pktmbuf_adj(mbuf, 2);
+
+ /* Replace Buffer in BD */
+ rxq->rx_mbuf[index_r] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+
+ /* TX begins: First clean the ring then process packet */
+ index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index_t]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+ txq->tx_mbuf[index_t] = NULL;
+ }
+
+ if (mbuf == NULL)
+ continue;
+
+ /* Fill in a Tx ring entry */
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ pkt_len = rte_pktmbuf_pkt_len(mbuf);
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+
+ for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &tx_bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+
+ /* Save mbuf pointer to clean later */
+ txq->tx_mbuf[index_t] = mbuf;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+ }
+}
+#endif
+
+/* This function performs the enetfec Rx queue processing: packets are dequeued
+ * from the Rx queue and, while walking the ring, the empty indicator is set on
+ * each consumed descriptor so the hardware can reuse it.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+ enetfec_lb_rxtx(rxq1);
+#endif
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENETFEC_PMD_ERR("rcv is not +last\n");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_PMD_ERR("rx_crc_errors\n");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_PMD_ERR("rx_frame_error\n");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ u8 *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_PMD_DEBUG("SG not supported");
+ return pkt_transmitted;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return pkt_transmitted;
+}
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
` (3 preceding siblings ...)
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
@ 2021-10-21 4:47 ` Apeksha Gupta
2021-10-27 14:26 ` Ferruh Yigit
2021-10-27 14:15 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-10-21 4:47 UTC (permalink / raw)
To: david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offloads in enetfec network
poll mode driver.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 ++
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 17 ++++++++-
drivers/net/enetfec/enet_regs.h | 10 ++++++
drivers/net/enetfec/enet_rxtx.c | 53 +++++++++++++++++++++++++++-
5 files changed, 83 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 47836630b6..132f0209e5 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 7e0fb148ac..3e9cc90b9f 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -6,6 +6,9 @@
[Features]
Basic stats = Y
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 4419952443..c6957e16e5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -106,7 +106,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -611,9 +615,20 @@ static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
+ if (fep->quirks & QUIRK_VLAN)
+ /* enable hw VLAN support */
+ rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+ if (fep->quirks & QUIRK_CSUM) {
+ /* enable hw accelerator */
+ rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+ }
rte_eth_dev_probing_finish(dev);
return 0;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 445fa97e77..fdd3343589 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -245,9 +245,14 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
#if ENETFEC_LOOPBACK
@@ -302,6 +307,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -311,6 +317,47 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, ETH_ALEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* don't check it */
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+ } else {
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -411,6 +458,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-10-21 5:24 ` Hemant Agrawal
2021-10-27 14:18 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
2 siblings, 0 replies; 91+ messages in thread
From: Hemant Agrawal @ 2021-10-21 5:24 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, Sachin Saxena, Apeksha Gupta
Series-Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> -----Original Message-----
> From: Apeksha Gupta <apeksha.gupta@nxp.com>
> Sent: Thursday, October 21, 2021 10:17 AM
> To: david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru;
> ferruh.yigit@intel.com
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant
> Agrawal <hemant.agrawal@nxp.com>; Apeksha Gupta
> <apeksha.gupta@nxp.com>
> Subject: [PATCH v6 1/5] net/enetfec: introduce NXP ENETFEC driver
>
> ENETFEC (Fast Ethernet Controller) is a network poll mode driver for NXP SoC
> i.MX 8M Mini.
>
> This patch adds skeleton for enetfec driver with probe function.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
>
> v6:
> - Fix document build errors
> ---
> ---
> MAINTAINERS | 7 +
> doc/guides/nics/enetfec.rst | 131 ++++++++++++++++++
> doc/guides/nics/features/enetfec.ini | 9 ++
> doc/guides/nics/index.rst | 1 +
> doc/guides/rel_notes/release_21_11.rst | 4 +
> drivers/net/enetfec/enet_ethdev.c | 85 ++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 179 +++++++++++++++++++++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 +++++
> drivers/net/enetfec/meson.build | 11 ++
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 1 +
> 11 files changed, 462 insertions(+)
> create mode 100644 doc/guides/nics/enetfec.rst create mode 100644
> doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c create mode
> 100644 drivers/net/enetfec/enet_ethdev.h create mode 100644
> drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/meson.build create mode 100644
> drivers/net/enetfec/version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8dceb6c0e0..db2df484d0 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -876,6 +876,13 @@ F: drivers/net/enetc/
> F: doc/guides/nics/enetc.rst
> F: doc/guides/nics/features/enetc.ini
>
> +NXP enetfec
> +M: Apeksha Gupta <apeksha.gupta@nxp.com>
> +M: Sachin Saxena <sachin.saxena@nxp.com>
> +F: drivers/net/enetfec/
> +F: doc/guides/nics/enetfec.rst
> +F: doc/guides/nics/features/enetfec.ini
> +
> NXP pfe
> M: Gagandeep Singh <g.singh@nxp.com>
> F: doc/guides/nics/pfe.rst
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst new
> file mode 100644 index 0000000000..dfcd032098
> --- /dev/null
> +++ b/doc/guides/nics/enetfec.rst
> @@ -0,0 +1,131 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 NXP
> +
> +ENETFEC Poll Mode Driver
> +========================
> +
> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
> +
> +More information can be found at NXP Official Website
> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> +
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK.
> +
> +Contents summary
> +
> +- ENETFEC overview
> +- ENETFEC features
> +- Supported ENETFEC SoCs
> +- Prerequisites
> +- Driver compilation and testing
> +- Limitations
> +
> +ENETFEC Overview
> +~~~~~~~~~~~~~~~~
> +The i.MX 8M Mini Media Applications Processor is built to achieve both
> +high performance and low power consumption. The ENETFEC PMD is a
> +hardware-programmable packet forwarding engine that provides a high
> +performance Ethernet interface. It has a single 1 Gbps Ethernet
> +interface with an RJ45 connector.
> +
> +The diagram below shows a system level overview of ENETFEC:
> +
> + .. code-block:: console
> +
> + =====================================================
> + Userspace
> + +-----------------------------------------+
> + | ENETFEC Driver |
> + | +-------------------------+ |
> + | | virtual ethernet device | |
> + +-----------------------------------------+
> + ^ |
> + | |
> + | |
> + RXQ | | TXQ
> + | |
> + | v
> + =====================================================
> + Kernel Space
> + +---------+
> + | fec-uio |
> + ====================+=========+======================
> + Hardware
> + +-----------------------------------------+
> + | i.MX 8M MINI EVK |
> + | +-----+ |
> + | | MAC | |
> + +---------------+-----+-------------------+
> + | PHY |
> + +-----+
> +
> +The ENETFEC Ethernet driver is a traditional DPDK PMD running in
> +userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the
> +hardware blocks. The ENETFEC PMD uses the standard UIO interface to
> +access the kernel for PHY initialisation and for mapping the allocated
> +register and buffer descriptor memory into DPDK, which gives access to
> +non-cacheable memory for the buffer descriptors. net_enetfec is the
> +logical Ethernet interface created by the ENETFEC driver.
> +
> +- ENETFEC driver registers the device in virtual device driver.
> +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> +- The probe function will set the basic device registers and also setups BD
> rings.
> +- On packet Rx the respective BD Ring status bit is set which is then
> +used for
> + packet processing.
> +- Then Tx is done first followed by Rx via logical interfaces.
> +
> +ENETFEC Features
> +~~~~~~~~~~~~~~~~~
> +
> +- Linux
> +- ARMv8
> +
> +Supported ENETFEC SoCs
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- i.MX 8M Mini
> +
> +Prerequisites
> +~~~~~~~~~~~~~
> +
> +There are three main pre-requisites for executing ENETFEC PMD on a i.MX
> +8M Mini compatible board:
> +
> +1. **ARM 64 Tool Chain**
> +
> + For example, the `*aarch64* Linaro Toolchain
> <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> +
> +2. **Linux Kernel**
> +
> + It can be obtained from `NXP's Github hosting
> <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
> +
> + .. note::
> +
> + Branch is 'lf-5.10.y'
> +
> +3. **Rootfile system**
> +
> + Any *aarch64* supporting filesystem can be used. For example,
> + Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be
> obtained
> + from `here
> <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> +
> +4. The Ethernet device will be registered as virtual device, so ENETFEC has
> dependency on
> + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value
> `net_enetfec` to
> + run DPDK application.
> +
> +Driver compilation and testing
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Follow instructions available in the document :ref:`compiling and
> +testing a PMD for a NIC <pmd_build_and_test>` to launch
> +**dpdk-testpmd**
> +
> +Limitations
> +~~~~~~~~~~~
> +
> +- Multi queue is not supported.
> diff --git a/doc/guides/nics/features/enetfec.ini
> b/doc/guides/nics/features/enetfec.ini
> new file mode 100644
> index 0000000000..bdfbdbd9d4
> --- /dev/null
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -0,0 +1,9 @@
> +;
> +; Supported features of the 'enetfec' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux = Y
> +ARMv8 = Y
> +Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index
> 784d5d39f6..777fdab4a0 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -26,6 +26,7 @@ Network Interface Controller Drivers
> e1000em
> ena
> enetc
> + enetfec
> enic
> fm10k
> hinic
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 3362c52a73..e964838967 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -20,6 +20,10 @@ DPDK Release 21.11
> ninja -C build doc
> xdg-open build/doc/guides/html/rel_notes/release_21_11.html
>
> +* **Added NXP ENETFEC PMD.**
> +
> + Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See
> the
> + :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
>
> New Features
> ------------
> diff --git a/drivers/net/enetfec/enet_ethdev.c
> b/drivers/net/enetfec/enet_ethdev.c
> new file mode 100644
> index 0000000000..8a74fb5bf2
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -0,0 +1,85 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <stdio.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <sys/mman.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_vdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_dev.h>
> +#include <rte_ether.h>
> +#include "enet_ethdev.h"
> +#include "enet_pmd_logs.h"
> +
> +#define ENETFEC_NAME_PMD net_enetfec
> +#define ENETFEC_CDEV_INVALID_FD -1
> +
> +static int
> +enetfec_eth_init(struct rte_eth_dev *dev) {
> + rte_eth_dev_probing_finish(dev);
> + return 0;
> +}
> +
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev) {
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc;
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
> + ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> +
> + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> + if (dev == NULL)
> + return -ENOMEM;
> +
> + /* setup board info structure */
> + fep = dev->data->dev_private;
> + fep->dev = dev;
> + rc = enetfec_eth_init(dev);
> + if (rc)
> + goto failed_init;
> +
> + return 0;
> +
> +failed_init:
> + ENETFEC_PMD_ERR("Failed to init");
> + return rc;
> +}
> +
> +static int
> +pmd_enetfec_remove(struct rte_vdev_device *vdev) {
> + struct rte_eth_dev *eth_dev = NULL;
> + int ret;
> +
> + /* find the ethdev entry */
> + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> + if (eth_dev == NULL)
> + return -ENODEV;
> +
> + ret = rte_eth_dev_release_port(eth_dev);
> + if (ret != 0)
> + return -EINVAL;
> +
> + ENETFEC_PMD_INFO("Closing sw device");
> + return 0;
> +}
> +
> +static struct rte_vdev_driver pmd_enetfec_drv = {
> + .probe = pmd_enetfec_probe,
> + .remove = pmd_enetfec_remove,
> +};
> +
> +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> +RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
> diff --git a/drivers/net/enetfec/enet_ethdev.h
> b/drivers/net/enetfec/enet_ethdev.h
> new file mode 100644
> index 0000000000..c674dfc782
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.h
> @@ -0,0 +1,179 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef __ENETFEC_ETHDEV_H__
> +#define __ENETFEC_ETHDEV_H__
> +
> +#include <rte_ethdev.h>
> +
> +/*
> + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> + */
> +#define ENETFEC_MAX_Q 3
> +
> +#define ETHER_ADDR_LEN 6
> +#define BD_LEN 49152
> +#define ENETFEC_TX_FR_SIZE 2048
> +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> +#define MAX_RX_BD_RING_SIZE 512
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
> +
> +#define PKT_MAX_BUF_SIZE 1984
> +#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
> +#define ETH_ALEN RTE_ETHER_ADDR_LEN
> +#define ETH_HLEN RTE_ETHER_HDR_LEN
> +#define VLAN_HLEN 4
> +
> +#define __iomem
> +#if defined(RTE_ARCH_ARM)
> +#if defined(RTE_ARCH_64)
> +#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
> +#define dcbf_64(p) dcbf(p)
> +
> +#else /* RTE_ARCH_32 */
> +#define dcbf(p) RTE_SET_USED(p)
> +#define dcbf_64(p) dcbf(p)
> +#endif
> +
> +#else
> +#define dcbf(p) RTE_SET_USED(p)
> +#define dcbf_64(p) dcbf(p)
> +#endif
> +
> +/* Required types */
> +typedef uint8_t u8;
> +typedef uint16_t u16;
> +typedef uint32_t u32;
> +typedef uint64_t u64;
> +
> +struct bufdesc {
> + uint16_t bd_datlen; /* buffer data length */
> + uint16_t bd_sc; /* buffer control & status */
> + uint32_t bd_bufaddr; /* buffer address */
> +};
> +
> +struct bufdesc_ex {
> + struct bufdesc desc;
> + uint32_t bd_esc;
> + uint32_t bd_prot;
> + uint32_t bd_bdu;
> + uint32_t ts;
> + uint16_t res0[4];
> +};
> +
> +struct bufdesc_prop {
> + int queue_id;
> + /* Addresses of Tx and Rx buffers */
> + struct bufdesc *base;
> + struct bufdesc *last;
> + struct bufdesc *cur;
> + void __iomem *active_reg_desc;
> + uint64_t descr_baseaddr_p;
> + unsigned short ring_size;
> + unsigned char d_size;
> + unsigned char d_size_log2;
> +};
> +
> +struct enetfec_priv_tx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
> + struct bufdesc *dirty_tx;
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +struct enetfec_priv_rx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +/* Buffer descriptors of FEC are used to track the ring buffers. The buffer
> + * descriptor base is x_bd_base and the currently available buffer is x_cur,
> + * where x is rx or tx. The buffer currently being sent by the controller is
> + * tracked by dirty_tx.
> + * tx_cur and dirty_tx are equal when the ring is completely full and when it
> + * is empty; the actual condition is determined by the empty and ready bits.
> + */
> +struct enetfec_private {
> + struct rte_eth_dev *dev;
> + struct rte_eth_stats stats;
> + struct rte_mempool *pool;
> + uint16_t max_rx_queues;
> + uint16_t max_tx_queues;
> + unsigned int total_tx_ring_size;
> + unsigned int total_rx_ring_size;
> + bool bufdesc_ex;
> + unsigned int tx_align;
> + unsigned int rx_align;
> + int full_duplex;
> + unsigned int phy_speed;
> + uint32_t quirks;
> + int flag_csum;
> + int flag_pause;
> + int flag_wol;
> + bool rgmii_txc_delay;
> + bool rgmii_rxc_delay;
> + int link;
> + void *hw_baseaddr_v;
> + uint64_t hw_baseaddr_p;
> + void *bd_addr_v;
> + uint64_t bd_addr_p;
> + uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
> + uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
> + void *dma_baseaddr_r[ENETFEC_MAX_Q];
> + void *dma_baseaddr_t[ENETFEC_MAX_Q];
> + uint64_t cbus_size;
> + unsigned int reg_size;
> + unsigned int bd_size;
> + int hw_ts_rx_en;
> + int hw_ts_tx_en;
> + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
> + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q]; };
> +
> +#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); }) #define
> +readl(p) rte_read32(p)
> +
> +static inline struct
> +bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop
> +*bd) {
> + return (bdp >= bd->last) ? bd->base
> + : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size); }
> +
> +static inline struct
> +bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop
> +*bd) {
> + return (bdp <= bd->base) ? bd->last
> + : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size); }
> +
> +static inline int
> +enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd) {
> + return ((const char *)bdp - (const char *)bd->base) >>
> +bd->d_size_log2; }
> +
> +static inline int
> +fls64(unsigned long word)
> +{
> + return (64 - __builtin_clzl(word)) - 1; }
> +
> +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf
> **rx_pkts,
> + uint16_t nb_pkts);
> +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> + uint16_t nb_pkts);
> +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> + struct bufdesc_prop *bd);
> +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> + struct rte_mbuf *mbuf);
> +
> +#endif /*__ENETFEC_ETHDEV_H__*/
> diff --git a/drivers/net/enetfec/enet_pmd_logs.h
> b/drivers/net/enetfec/enet_pmd_logs.h
> new file mode 100644
> index 0000000000..e7b3964a0e
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_pmd_logs.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _ENETFEC_LOGS_H_
> +#define _ENETFEC_LOGS_H_
> +
> +extern int enetfec_logtype_pmd;
> +
> +/* PMD related logs */
> +#define ENETFEC_PMD_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()"
> \
> + fmt "\n", __func__, ##args)
> +
> +#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
> +
> +#define ENETFEC_PMD_DEBUG(fmt, args...) \
> + ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
> +#define ENETFEC_PMD_ERR(fmt, args...) \
> + ENETFEC_PMD_LOG(ERR, fmt, ## args)
> +#define ENETFEC_PMD_INFO(fmt, args...) \
> + ENETFEC_PMD_LOG(INFO, fmt, ## args)
> +
> +#define ENETFEC_PMD_WARN(fmt, args...) \
> + ENETFEC_PMD_LOG(WARNING, fmt, ## args)
> +
> +/* DP Logs, toggled out at compile time if level lower than current
> +level */ #define ENETFEC_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, fmt, ## args)
> +
> +#endif /* _ENETFEC_LOGS_H_ */
> diff --git a/drivers/net/enetfec/meson.build
> b/drivers/net/enetfec/meson.build new file mode 100644 index
> 0000000000..79dca58dea
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: BSD-3-Clause # Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
> +endif
> +
> +sources = files('enet_ethdev.c',
> + 'enet_uio.c',
> + 'enet_rxtx.c')
> diff --git a/drivers/net/enetfec/version.map
> b/drivers/net/enetfec/version.map new file mode 100644 index
> 0000000000..b66517b171
> --- /dev/null
> +++ b/drivers/net/enetfec/version.map
> @@ -0,0 +1,3 @@
> +DPDK_22 {
> + local: *;
> +};
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build index
> 24ad121fe4..ac294d8507 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -18,6 +18,7 @@ drivers = [
> 'e1000',
> 'ena',
> 'enetc',
> + 'enetfec',
> 'enic',
> 'failsafe',
> 'fm10k',
> --
> 2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/5] drivers/net: add NXP ENETFEC driver
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
` (4 preceding siblings ...)
2021-10-21 4:47 ` [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-10-27 14:15 ` Ferruh Yigit
5 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-27 14:15 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> This patch series introduce the enetfec driver, ENETFEC
> (Fast Ethernet Controller) is a network poll mode driver for
> the inbuilt NIC found in the NXP i.MX 8M Mini SoC.
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO interface so that user space directly communicate with
> a UIO based hardware device. UIO interface mmap the Control and Status
> Registers (CSR) & BD memory in DPDK which is allocated in kernel and this
> gives access to non-cacheble memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
> Patch 5 adds checksum and VLAN features.
>
> Apeksha Gupta (5):
> net/enetfec: introduce NXP ENETFEC driver
> net/enetfec: add UIO support
> net/enetfec: support queue configuration
> net/enetfec: add enqueue and dequeue support
> net/enetfec: add features
>
Hi Apeksha,
I have requested techboard approval for the UIO approach, as compared
to adding a new bus implementation.
Meanwhile I have put some comments on the driver, but first of all
it doesn't compile with the latest DPDK.
Can you please send a new version that fixes the build error first,
so that more checks can be done; you can then address the remaining
comments in a later version.
Thanks,
ferruh
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-21 5:24 ` Hemant Agrawal
@ 2021-10-27 14:18 ` Ferruh Yigit
2021-11-08 18:42 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
2 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-27 14:18 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC i.MX 8M Mini.
>
> This patch adds skeleton for enetfec driver with probe function.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> +Follow instructions available in the document
> +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +to launch **dpdk-testpmd**
> +
> +Limitations
> +~~~~~~~~~~~
> +
> +- Multi queue is not supported.
In 'enetfec_eth_info()',
max_rx_queues/max_tx_queues are returned as 3 (ENETFEC_MAX_Q).
If multi queue is not supported, why is it not one?
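(For illustration only, a minimal sketch of advertising a single queue pair
until multi-queue support lands; this is not from the submitted patch:)
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
	struct rte_eth_dev_info *dev_info)
{
	/* advertise a single queue pair until multi queue is supported */
	dev_info->max_rx_queues = 1;
	dev_info->max_tx_queues = 1;
	return 0;
}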
<...>
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -20,6 +20,10 @@ DPDK Release 21.11
> ninja -C build doc
> xdg-open build/doc/guides/html/rel_notes/release_21_11.html
>
> +* **Added NXP ENETFEC PMD.**
> +
> + Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
> + :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
>
The update is in the doc comment; can you please move it down, into the
ethdev driver group, keeping the alphabetical order?
<...>
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc;
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
Can name be 'NULL'? Not sure if we need this check, can you please check?
<...>
> +static int
> +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *eth_dev = NULL;
> + int ret;
> +
> + /* find the ethdev entry */
> + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> + if (eth_dev == NULL)
> + return -ENODEV;
> +
> + ret = rte_eth_dev_release_port(eth_dev);
> + if (ret != 0)
> + return -EINVAL;
> +
> + ENETFEC_PMD_INFO("Closing sw device");
The log can be misleading; there is a separate dev_ops callback to close the device.
<...>
> @@ -0,0 +1,179 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef __ENETFEC_ETHDEV_H__
> +#define __ENETFEC_ETHDEV_H__
> +
> +#include <rte_ethdev.h>
> +
> +/*
> + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> + */
> +#define ENETFEC_MAX_Q 3
> +
> +#define ETHER_ADDR_LEN 6
> +#define BD_LEN 49152
> +#define ENETFEC_TX_FR_SIZE 2048
> +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> +#define MAX_RX_BD_RING_SIZE 512
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
> +
Some of the defines in this header are not used at all. What about
only adding the structs/defines that are used, and adding the rest as
they are needed?
This guarantees there is no unused clutter in the code.
<...>
> +/* Required types */
> +typedef uint8_t u8;
> +typedef uint16_t u16;
> +typedef uint32_t u32;
> +typedef uint64_t u64;
> +
Do we need these type definitions? As far as I can see they are used in only
a few places; why not just use the uint##_t types?
<...>
> +static inline struct
> +bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +{
> + return (bdp >= bd->last) ? bd->base
> + : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
> +}
> +
> +static inline struct
> +bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +{
> + return (bdp <= bd->base) ? bd->last
> + : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
> +}
> +
> +static inline int
> +enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +{
> + return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> +}
> +
> +static inline int
> +fls64(unsigned long word)
> +{
> + return (64 - __builtin_clzl(word)) - 1;
> +}
> +
Same for these static inline functions, can you please add them when they
are needed?
> +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts);
> +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> + uint16_t nb_pkts);
These functions are declared here but not defined, at least not in this patch.
> +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> + struct bufdesc_prop *bd);
This is already a static inline function; do we need a separate declaration for it?
> +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> + struct rte_mbuf *mbuf);
> +
Ditto, there is no function definition.
<...>
> +
> +/* DP Logs, toggled out at compile time if level lower than current level */
> +#define ENETFEC_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, fmt, ## args)
> +
Not used at all.
> +#endif /* _ENETFEC_LOGS_H_ */
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> new file mode 100644
> index 0000000000..79dca58dea
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
> +endif
> +
> +sources = files('enet_ethdev.c',
> + 'enet_uio.c',
> + 'enet_rxtx.c')
This should cause a build error for this patch, shouldn't it? These files
don't exist yet at this point in the series.
Each patch should build successfully on its own.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-10-27 14:21 ` Ferruh Yigit
2021-11-08 18:44 ` [dpdk-dev] [EXT] " Apeksha Gupta
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-27 14:21 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal
On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> Implemented the fec-uio driver in kernel. enetfec PMD uses
> UIO interface to interact with "fec-uio" driver implemented in
> kernel for PHY initialisation and for mapping the allocated memory
> of register & BD from kernel to DPDK which gives access to
> non-cacheable memory for BD.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> +
> +#define NUM_OF_QUEUES 6
I guess this is the number of BD queues; it may be good to reflect that in the macro name.
> +
> +uint32_t e_cntl;
> +
Is this global variable really needed? Most of the time what you need is a
per-port variable.
For example, I can see this variable is updated on port start/stop;
what if you have multiple ports in different start/stop states,
will the value of the variable still be correct?
And if it stays global, can you please make it 'static' and prefix it with
the driver namespace?
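(A rough sketch of the per-port alternative, assuming a new 'enetfec_e_cntl'
field is added to struct enetfec_private; the field name is hypothetical and
this is illustration only:)
/* enet_ethdev.h: add to struct enetfec_private */
	uint32_t enetfec_e_cntl;	/* cached ECR bits for this port */

/* enable/disable then operate on the port's own state */
static void
enetfec_enable(struct enetfec_private *fep)
{
	rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR) |
		fep->enetfec_e_cntl,
		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
}

static void
enetfec_disable(struct enetfec_private *fep)
{
	rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR) &
		~fep->enetfec_e_cntl,
		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
}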
> +/*
> + * This function is called to start or restart the ENETFEC during a link
> + * change, transmit timeout, or to reconfigure the ENETFEC. The network
> + * packet processing for this device must be stopped before this call.
> + */
> +static void
> +enetfec_restart(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + uint32_t temp_mac[2];
> + uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
> + uint32_t ecntl = ENETFEC_ETHEREN;
> +
> + /* default mac address */
> + struct rte_ether_addr addr = {
> + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
> + uint32_t val;
> +
> + /*
> + * enet-mac reset will reset mac address registers too,
> + * so need to reconfigure it.
> + */
> + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
> + rte_write32(rte_cpu_to_be_32(temp_mac[0]),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
> + rte_write32(rte_cpu_to_be_32(temp_mac[1]),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
> +
Is the same MAC address used for all ports?
Also, probe sets a different MAC address; which one is valid?
<...>
> +
> +static int
> +enetfec_eth_start(struct rte_eth_dev *dev)
> +{
> + enetfec_restart(dev);
> +
> + return 0;
> +}
Empty line is missing between two functions.
> +/* ENETFEC enable function.
> + * @param[in] base ENETFEC base address
> + */
> +void
> +enetfec_enable(void *base)
> +{
> + rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) | e_cntl,
> + (uint8_t *)base + ENETFEC_ECR);
> +}
> +
> +/* ENETFEC disable function.
> + * @param[in] base ENETFEC base address
> + */
> +void
> +enetfec_disable(void *base)
> +{
> + rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~e_cntl,
> + (uint8_t *)base + ENETFEC_ECR);
> +}
> +
Are these 'enetfec_enable()'/'enetfec_disable()' functions used outside of this file?
If not, why not make them static and remove the declarations from the header file?
> +static int
> +enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
'dev' is used, so can drop '__rte_unused'.
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> +
> + dev->data->dev_started = 0;
> + enetfec_disable(fep->hw_baseaddr_v);
> +
> + return 0;
> +}
<...>
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-10-27 14:23 ` Ferruh Yigit
2021-11-08 18:45 ` [dpdk-dev] [EXT] " Apeksha Gupta
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-27 14:23 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal
On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> This patch adds Rx/Tx queue configuration setup operations.
> On packet reception the respective BD Ring status bit is set
> which is then used for packet processing.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
>
> +/* Supported Rx offloads */
> +static uint64_t dev_rx_offloads_sup =
> + DEV_RX_OFFLOAD_IPV4_CKSUM |
> + DEV_RX_OFFLOAD_UDP_CKSUM |
> + DEV_RX_OFFLOAD_TCP_CKSUM |
> + DEV_RX_OFFLOAD_VLAN_STRIP |
> + DEV_RX_OFFLOAD_CHECKSUM;
> +
> +static uint64_t dev_tx_offloads_sup =
> + DEV_TX_OFFLOAD_IPV4_CKSUM |
> + DEV_TX_OFFLOAD_UDP_CKSUM |
> + DEV_TX_OFFLOAD_TCP_CKSUM;
> +
The macro names have been updated in ethdev, can you please update them?
Also, these offloads are advertised, but some of them are not
checked anywhere in the driver, like 'DEV_TX_OFFLOAD_*_CKSUM'.
Are they really supported?
If they are not supported in the datapath, please don't advertise
them.
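(For reference, a sketch using the 21.11 macro spellings and advertising only
what the current datapath acts on; the final offload set is for the author to
decide, illustration only:)
/* Rx checksum and VLAN strip are handled in the Rx path */
static uint64_t dev_rx_offloads_sup =
	RTE_ETH_RX_OFFLOAD_CHECKSUM |
	RTE_ETH_RX_OFFLOAD_VLAN_STRIP;

/* Tx checksum offloads are not implemented in the Tx path yet */
static uint64_t dev_tx_offloads_sup = 0;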
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
@ 2021-10-27 14:25 ` Ferruh Yigit
2021-11-08 18:47 ` [dpdk-dev] [EXT] " Apeksha Gupta
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-27 14:25 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal
On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> This patch adds burst enqueue and dequeue operations to the enetfec
> PMD. Loopback mode is also added, compile time flag 'ENETFEC_LOOPBACK' is
> used to enable this feature. By default loopback mode is disabled.
> Basic features added like promiscuous enable, basic stats.
>
In the patch title, prefer "Rx/Tx support" over "enqueue and dequeue
support"; it is the more common usage.
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> --- a/doc/guides/nics/features/enetfec.ini
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -4,6 +4,8 @@
> ; Refer to default.ini for the full list of available PMD features.
> ;
> [Features]
> +Basic stats = Y
> +Promiscuous mode = Y
Can you please keep the order the same as in the default.ini file?
<...>
> @@ -226,6 +264,110 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
> return 0;
> }
>
> +static int
> +enetfec_eth_close(__rte_unused struct rte_eth_dev *dev)
> +{
'dev' is used.
> + enet_free_buffers(dev);
> + return 0;
> +}
> +
> +static int
> +enetfec_eth_link_update(struct rte_eth_dev *dev,
> + int wait_to_complete __rte_unused)
> +{
> + struct rte_eth_link link;
> + unsigned int lstatus = 1;
> +
> + if (dev == NULL) {
> + ENETFEC_PMD_ERR("Invalid device in link_update.\n");
Duplicated '\n'.
> + return 0;
> + }
> +
> + memset(&link, 0, sizeof(struct rte_eth_link));
> +
> + link.link_status = lstatus;
> + link.link_speed = ETH_SPEED_NUM_1G;
> +
> + ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
> + "Up");
> +
> + return rte_eth_linkstatus_set(dev, &link);
> +}
> +
> +static int
> +enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
'dev' is used.
<...>
> +static int
> +enetfec_stats_get(struct rte_eth_dev *dev,
> + struct rte_eth_stats *stats)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + struct rte_eth_stats *eth_stats = &fep->stats;
> +
> + if (stats == NULL)
> + return -1;
No need to check this, ethdev layer already does.
> +
> + memset(stats, 0, sizeof(struct rte_eth_stats));
> +
Same here, ethdev does this.
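(A minimal sketch with the redundant checks dropped, illustration only:)
static int
enetfec_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
	struct enetfec_private *fep = dev->data->dev_private;

	/* ethdev has already checked 'stats' for NULL and zeroed it */
	*stats = fep->stats;

	return 0;
}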
<...>
> +
> + /*
> + * Set default mac address
> + */
> + macaddr.addr_bytes[0] = 1;
> + macaddr.addr_bytes[1] = 1;
> + macaddr.addr_bytes[2] = 1;
> + macaddr.addr_bytes[3] = 1;
> + macaddr.addr_bytes[4] = 1;
> + macaddr.addr_bytes[5] = 1;
if it is fixed, you can set the addr while declaring the variable:
struct rte_ether_addr macaddr = {
.addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
};
<...>
> index 0000000000..445fa97e77
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_rxtx.c
> @@ -0,0 +1,445 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#include <signal.h>
> +#include <rte_mbuf.h>
> +#include <rte_io.h>
> +#include "enet_regs.h"
> +#include "enet_ethdev.h"
> +#include "enet_pmd_logs.h"
> +
> +#define ENETFEC_LOOPBACK 0
> +#define ENETFEC_DUMP 0
> +
Instead of compile-time flags, why not convert them to devargs so
they can be changed without recompiling?
This also makes sure all the code is built and prevents it from becoming
dead code over time.
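(A rough sketch of a devargs-based switch, e.g.
--vdev=net_enetfec,use_loopback=1; the 'use_loopback' key and the helper
names are hypothetical, illustration only:)
#define ENETFEC_USE_LOOPBACK_ARG "use_loopback"	/* hypothetical key */

static int
parse_int_arg(__rte_unused const char *key, const char *value, void *extra)
{
	*(int *)extra = atoi(value);
	return 0;
}

static int
enetfec_parse_devargs(struct rte_vdev_device *vdev, int *use_loopback)
{
	const char *const valid_args[] = { ENETFEC_USE_LOOPBACK_ARG, NULL };
	struct rte_kvargs *kvlist;

	*use_loopback = 0;
	kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
	if (kvlist == NULL)
		return 0;
	rte_kvargs_process(kvlist, ENETFEC_USE_LOOPBACK_ARG,
			parse_int_arg, use_loopback);
	rte_kvargs_free(kvlist);
	return 0;
}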
<...>
> +
> +#if ENETFEC_LOOPBACK
> +static volatile bool lb_quit;
> +
> +static void fec_signal_handler(int signum)
> +{
> + if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
> + printf("\n\n %s: Signal %d received, preparing to exit...\n",
> + __func__, signum);
> + lb_quit = true;
> + }
> +}
Not sure if handling signals in the driver is a good idea; this is more of an
application-level decision. Please remember that DPDK is a library and this
PMD is one of many PMDs in that library.
Also, please don't use 'printf' directly.
> +
> +static void
> +enetfec_lb_rxtx(void *rxq1)
> +{
> + struct rte_mempool *pool;
> + struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
> + struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
> + unsigned short status;
> + unsigned short pkt_len = 0;
> + int index_r = 0, index_t = 0;
> + u8 *data;
> + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> + struct rte_eth_stats *stats = &rxq->fep->stats;
> + unsigned int i;
> + struct enetfec_private *fep;
> + struct enetfec_priv_tx_q *txq;
> + fep = rxq->fep->dev->data->dev_private;
> + txq = fep->tx_queues[0];
> +
> + pool = rxq->pool;
> + rx_bdp = rxq->bd.cur;
> + tx_bdp = txq->bd.cur;
> +
> + signal(SIGTSTP, fec_signal_handler);
> + while (!lb_quit) {
> +chk_again:
> + status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
> + if (status & RX_BD_EMPTY) {
> + if (!lb_quit)
> + goto chk_again;
> + rxq->bd.cur = rx_bdp;
> + txq->bd.cur = tx_bdp;
> + return;
> + }
> +
> + /* Check for errors. */
> + status ^= RX_BD_LAST;
> + if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
> + RX_BD_CR | RX_BD_OV | RX_BD_LAST |
> + RX_BD_TR)) {
> + stats->ierrors++;
> + if (status & RX_BD_OV) {
> + /* FIFO overrun */
> + ENETFEC_PMD_ERR("rx_fifo_error\n");
> + goto rx_processing_done;
> + }
> + if (status & (RX_BD_LG | RX_BD_SH
> + | RX_BD_LAST)) {
> + /* Frame too long or too short. */
> + ENETFEC_PMD_ERR("rx_length_error\n");
> + if (status & RX_BD_LAST)
> + ENETFEC_PMD_ERR("rcv is not +last\n");
Duplicated '\n', but more importantly this is the datapath, are you sure you
want debug logs here?
'ENETFEC_DP_LOG' is the one to use in the datapath, since it is compiled out
based on its default log level and so does not impact datapath performance.
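For example (a sketch only, using the ENETFEC_DP_LOG macro already defined
in enet_pmd_logs.h in this series; note that RTE_LOG_DP does not append a
newline itself):

	ENETFEC_DP_LOG(DEBUG, "rx_length_error\n");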
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features
2021-10-21 4:47 ` [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-10-27 14:26 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-27 14:26 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko, ferruh.yigit
Cc: dev, sachin.saxena, hemant.agrawal
On 10/21/2021 5:47 AM, Apeksha Gupta wrote:
> This patch adds checksum and VLAN offloads in enetfec network
> poll mode driver.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> @@ -611,9 +615,20 @@ static int
> enetfec_eth_init(struct rte_eth_dev *dev)
> {
> struct enetfec_private *fep = dev->data->dev_private;
> + struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
> + uint64_t rx_offloads = eth_conf->rxmode.offloads;
>
> fep->full_duplex = FULL_DUPLEX;
> dev->dev_ops = &enetfec_ops;
> + if (fep->quirks & QUIRK_VLAN)
> + /* enable hw VLAN support */
> + rx_offloads |= DEV_RX_OFFLOAD_VLAN;
> +
> + if (fep->quirks & QUIRK_CSUM) {
> + /* enable hw accelerator */
> + rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> + fep->flag_csum |= RX_FLAG_CSUM_EN;
> + }
The driver is force enabling these Rx offloads even when the user is not
asking for them? Is it because the HW doesn't support disabling them?
If it is configurable, it should honor the user configuration;
if not configurable, please document it as a limitation.
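For illustration, honoring the user configuration could look roughly like
this (a sketch only, reusing the names from this patch):

	if ((fep->quirks & QUIRK_CSUM) &&
			(rx_offloads & DEV_RX_OFFLOAD_CHECKSUM))
		fep->flag_csum |= RX_FLAG_CSUM_EN;

i.e. enable the accelerator only when the hardware supports it and the
application actually requested the offload.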
<...>
> +
> + if (rxq->fep->bufdesc_ex &&
> + (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
> + if ((rte_read32(&ebdp->bd_esc) &
> + rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
> + /* don't check it */
> + mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
warning: "PKT_RX_IP_CKSUM_BAD" is deprecated
> + } else {
> + mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
warning: "PKT_RX_IP_CKSUM_GOOD" is deprecated
> + }
> + }
> +
> + /* Handle received VLAN packets */
> + if (vlan_packet_rcvd) {
> + mbuf->vlan_tci = vlan_tag;
> + mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
warning: "PKT_RX_VLAN_STRIPPED" is deprecated
warning: "PKT_RX_VLAN" is deprecated
> + }
> +
> rxq->rx_mbuf[index] = new_mbuf;
> rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
> &bdp->bd_bufaddr);
> @@ -411,6 +458,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>
> if (txq->fep->bufdesc_ex) {
> struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
> +
> + if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
Why check an Rx flag in the transmit function? Is it a typo?
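Presumably the Tx checksum request flags were intended here, something like
(a sketch only, reusing definitions from this series; the exact BD bits to
set are hardware specific):

	if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
		rte_write32(rte_read32(&ebdp->bd_esc) |
				rte_cpu_to_le_32(TX_BD_IINS | TX_BD_PINS),
				&ebdp->bd_esc);

(PKT_TX_IP_CKSUM is likewise renamed to RTE_MBUF_F_TX_IP_CKSUM in 21.11.)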
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v7 0/5] drivers/net: add NXP ENETFEC driver
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-21 5:24 ` Hemant Agrawal
2021-10-27 14:18 ` Ferruh Yigit
@ 2021-11-03 19:20 ` Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
` (5 more replies)
2 siblings, 6 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-03 19:20 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC
(Fast Ethernet Controller) is a network poll mode driver for
the inbuilt NIC found in the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 adds the UIO interface so that user space can communicate directly
with a UIO based hardware device. The UIO interface mmaps the Control and
Status Registers (CSR) & BD memory, which is allocated in the kernel, into
DPDK; this gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support, and also some basic features
like promiscuous mode enable and basic stats.
Patch 5 adds checksum and VLAN features.
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add Rx/Tx support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 135 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 6 +-
drivers/net/enetfec/enet_ethdev.c | 742 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 170 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 499 +++++++++++++++++
drivers/net/enetfec/enet_uio.c | 278 +++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 11 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 2 +-
15 files changed, 2077 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
@ 2021-11-03 19:20 ` Apeksha Gupta
2021-11-03 23:27 ` Ferruh Yigit
` (2 more replies)
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 2/5] net/enetfec: add UIO support Apeksha Gupta
` (4 subsequent siblings)
5 siblings, 3 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-03 19:20 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for NXP SoC i.MX 8M Mini.
This patch adds the skeleton of the enetfec driver with the probe function.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
v7:
- Fix compilation
- code cleanup
v6:
- Fix document build errors
---
MAINTAINERS | 7 ++
doc/guides/nics/enetfec.rst | 131 +++++++++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 6 +-
drivers/net/enetfec/enet_ethdev.c | 85 ++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 58 +++++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
drivers/net/enetfec/meson.build | 9 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 2 +-
11 files changed, 340 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 0e5951f8f1..d000eb81af 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -882,6 +882,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..dfcd032098
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,131 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into the DPDK.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC PMD drives the
+on-chip packet forwarding engine, which provides a high performance
+Ethernet interface. It has a single 1 Gbps Ethernet interface with an
+RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ .. code-block:: console
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ | |
+ | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ ====================+=========+======================
+ Hardware
+ +-----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+-------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the
+hardware blocks. The ENETFEC PMD uses the standard UIO interface to
+access the kernel for PHY initialisation and for mapping the allocated
+register & buffer descriptor memory into DPDK, which gives access to
+non-cacheable memory for the buffer descriptors. net_enetfec is the
+logical Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device with the virtual device (vdev) bus.
+- The RTE framework scans the bus and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx the respective BD ring status bit is set, which is then used for
+ packet processing.
+- Tx is then done first, followed by Rx, via the logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+The following are the main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+ .. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Root filesystem**
+
+ Any *aarch64*-capable root filesystem can be used. For example, an
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be
+ obtained from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC
+ depends on the **rte_bus_vdev** library, and it is mandatory to pass
+ `--vdev` with the value `net_enetfec` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
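+
+ An illustrative invocation (a sketch only; EAL and testpmd options other
+ than `--vdev` are placeholders to be adjusted for the target board):
+
+ .. code-block:: console
+
+ ./dpdk-testpmd --vdev=net_enetfec -- -i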
+
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 502cc5ceb2..aed380c21f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -20,7 +20,6 @@ DPDK Release 21.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_21_11.html
-
New Features
------------
@@ -135,6 +134,11 @@ New Features
Added an ethdev API which can help users get device configuration.
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
+
* **Updated AF_XDP PMD.**
* Disabled secondary process support.
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..a6c4bcbf2e
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_pmd_logs.h"
+#include "enet_ethdev.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+#define ENETFEC_CDEV_INVALID_FD -1
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Release enetfec sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..0e4558dd86
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+/*
+ * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
+ */
+
+#define ENETFEC_MAX_Q 3
+
+/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
+ * descriptor base is x_bd_base. Currently available buffer are x_cur
+ * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
+ * that is sent by the controller.
+ * The tx_cur and dirty_tx are same in completely full and empty
+ * conditions. Actual condition is determined by empty & ready bits.
+ */
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+ bool bufdesc_ex;
+ unsigned int tx_align;
+ unsigned int rx_align;
+ int full_duplex;
+ unsigned int phy_speed;
+ uint32_t quirks;
+ int flag_csum;
+ int flag_pause;
+ int flag_wol;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ int hw_ts_rx_en;
+ int hw_ts_tx_en;
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..6d6c64c94b
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..ac294d8507 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -12,13 +12,13 @@ drivers = [
'bnx2x',
'bnxt',
'bonding',
- 'cnxk',
'cxgbe',
'dpaa',
'dpaa2',
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v7 2/5] net/enetfec: add UIO support
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-03 19:20 ` Apeksha Gupta
2021-11-04 18:25 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (3 subsequent siblings)
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-03 19:20 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
Implemented the fec-uio driver in the kernel. The enetfec PMD uses the
UIO interface to interact with the "fec-uio" driver implemented in the
kernel for PHY initialisation and for mapping the allocated register &
BD memory from the kernel into DPDK, which gives access to non-cacheable
memory for the BDs.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 227 ++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 14 ++
drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
drivers/net/enetfec/enet_uio.c | 278 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 691 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index a6c4bcbf2e..410c395039 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,16 +13,212 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_pmd_logs.h"
#include "enet_ethdev.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
#define ENETFEC_CDEV_INVALID_FD -1
+#define BIT(nr) (1u << (nr))
+
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS BIT(1)
+#define ENETFEC_RACC_PRODIS BIT(2)
+#define ENETFEC_RACC_SHIFT16 BIT(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE BIT(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_BD_QUEUES 6
+
+static uint32_t enetfec_e_cntl;
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t temp_mac[2];
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+
+ /* default mac address */
+ struct rte_ether_addr addr = {
+ .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
+ uint32_t val;
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+ * so need to reconfigure it.
+ */
+ memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
+ rte_write32(rte_cpu_to_be_32(temp_mac[0]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ rte_write32(rte_cpu_to_be_32(temp_mac[1]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= BIT(6);
+ ecntl |= BIT(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ enetfec_e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+static void
+enetfec_disable(void *base)
+{
+ rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~enetfec_e_cntl,
+ (uint8_t *)base + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep->hw_baseaddr_v);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
@@ -33,6 +229,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -46,6 +244,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_BD_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 0e4558dd86..0d16e48d12 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -5,12 +5,26 @@
#ifndef __ENETFEC_ETHDEV_H__
#define __ENETFEC_ETHDEV_H__
+#include <rte_ethdev.h>
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX 0x00
+#define FULL_DUPLEX 0x01
+#define UNKNOWN_DUPLEX 0xff
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ETH_ALEN RTE_ETHER_ADDR_LEN
+
/*
* ENETFEC with AVB IP can support maximum 3 rx and tx queues.
*/
#define ENETFEC_MAX_Q 3
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
* descriptor base is x_bd_base. Currently available buffer are x_cur
* and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized clocks (250MHz) for
+ * generating delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..c939b4736b
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd <= 0)
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows maximum 5 different mapping for
+ each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address expressed in hexa(base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret < 0)
+ ENETFEC_PMD_ERR("Error: not find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..4a031d3f46
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(void);
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 6d6c64c94b..57f316b8a5 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -6,4 +6,5 @@ if not is_linux
reason = 'only supported on linux'
endif
-sources = files('enet_ethdev.c')
+sources = files('enet_ethdev.c',
+ 'enet_uio.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v7 3/5] net/enetfec: support queue configuration
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-11-03 19:20 ` Apeksha Gupta
2021-11-04 18:26 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
` (2 subsequent siblings)
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-03 19:20 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception the respective BD ring status bit is set,
which is then used for packet processing.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++++++-
drivers/net/enetfec/enet_ethdev.h | 73 ++++++++++
2 files changed, 302 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 410c395039..aa96093eb8 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -45,6 +45,19 @@
static uint32_t enetfec_e_cntl;
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_CHECKSUM;
+
+static uint64_t dev_tx_offloads_sup =
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -204,10 +217,225 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed\n");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return errno;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 0d16e48d12..36202ba6c7 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -12,10 +12,14 @@
#define FULL_DUPLEX 0x01
#define UNKNOWN_DUPLEX 0xff
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
#define PKT_MAX_BUF_SIZE 1984
#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
#define ETH_ALEN RTE_ETHER_ADDR_LEN
+#define __iomem
+
/*
* ENETFEC with AVB IP can support maximum 3 rx and tx queues.
*/
@@ -25,6 +29,49 @@
#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
#define readl(p) rte_read32(p)
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
* descriptor base is x_bd_base. Currently available buffer are x_cur
* and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
@@ -69,4 +116,30 @@ struct enetfec_private {
struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
};
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
#endif /*__ENETFEC_ETHDEV_H__*/
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v7 4/5] net/enetfec: add Rx/Tx support
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
` (2 preceding siblings ...)
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-03 19:20 ` Apeksha Gupta
2021-11-04 18:28 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 5/5] net/enetfec: add features Apeksha Gupta
2021-11-04 18:31 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
5 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-03 19:20 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. Loopback mode is also added; the compile-time flag 'ENETFEC_LOOPBACK' is
used to enable this feature. By default, loopback mode is disabled.
Basic features such as promiscuous mode enable and basic stats are also added.
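For reference, the loopback path is gated purely by a compile-time define in
enet_rxtx.c; a minimal way to exercise it (illustrative only, the PMD must be
rebuilt afterwards) is:

	/* enet_rxtx.c: 0 disables loopback (default); 1 routes received
	 * frames back out through enetfec_lb_rxtx().
	 */
	#define ENETFEC_LOOPBACK 1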
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 189 +++++++++++-
drivers/net/enetfec/enet_ethdev.h | 24 ++
drivers/net/enetfec/enet_rxtx.c | 445 +++++++++++++++++++++++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 663 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index dfcd032098..47836630b6 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -82,6 +82,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..3d8aa5b627 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Basic stats = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index aa96093eb8..42b6cac26f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_BD_QUEUES 6
static uint32_t enetfec_e_cntl;
@@ -179,6 +181,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
{
@@ -192,6 +228,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -217,6 +255,105 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ if (dev == NULL) {
+ ENETFEC_PMD_ERR("Invalid device in link_update.");
+ return 0;
+ }
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -228,6 +365,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->tx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -433,6 +582,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -459,7 +614,9 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
-
+ struct rte_ether_addr macaddr = {
+ .addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
+ };
name = rte_vdev_device_name(vdev);
if (name == NULL)
return -EINVAL;
@@ -501,6 +658,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -509,6 +681,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -516,6 +690,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -523,11 +699,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Release enetfec sw device");
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 36202ba6c7..e48f958ad9 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -7,6 +7,11 @@
#include <rte_ethdev.h>
+#define ETHER_ADDR_LEN 6
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+
/* full duplex or half duplex */
#define HALF_DUPLEX 0x00
#define FULL_DUPLEX 0x01
@@ -19,6 +24,20 @@
#define ETH_ALEN RTE_ETHER_ADDR_LEN
#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
/*
* ENETFEC with AVB IP can support maximum 3 rx and tx queues.
@@ -142,4 +161,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
}
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..6ac4624553
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK 0
+#define ENETFEC_DUMP 0
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("TX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = txq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == txq->bd.cur ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ txq->tx_mbuf[index]);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ index++;
+ } while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENETFEC_PMD_DEBUG("RX ring dump\n");
+ ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = rxq->bd.base;
+ do {
+ ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == rxq->bd.cur ? 'S' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ rxq->rx_mbuf[index]);
+ rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+ rxq->rx_mbuf[index]->pkt_len);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ index++;
+ } while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static volatile bool lb_quit;
+
+static void fec_signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+ ENETFEC_PMD_INFO("\n\n %s: Signal %d received, preparing to exit...\n",
+ __func__, signum);
+ lb_quit = true;
+ }
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+ struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len = 0;
+ int index_r = 0, index_t = 0;
+ u8 *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ unsigned int i;
+ struct enetfec_private *fep;
+ struct enetfec_priv_tx_q *txq;
+ fep = rxq->fep->dev->data->dev_private;
+ txq = fep->tx_queues[0];
+
+ pool = rxq->pool;
+ rx_bdp = rxq->bd.cur;
+ tx_bdp = txq->bd.cur;
+
+ signal(SIGTSTP, fec_signal_handler);
+ while (!lb_quit) {
+chk_again:
+ status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+ if (status & RX_BD_EMPTY) {
+ if (!lb_quit)
+ goto chk_again;
+ rxq->bd.cur = rx_bdp;
+ txq->bd.cur = tx_bdp;
+ return;
+ }
+
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ ENETFEC_DP_LOG(DEBUG, "rx_fifo_error");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_DP_LOG("DEBUG, rx_length_error");
+ if (status & RX_BD_LAST)
+ ENETFEC_DP_LOG("DEBUG, rcv is not +last");
+ }
+ /* CRC Error */
+ if (status & RX_BD_CR)
+ ENETFEC_DP_LOG("DEBUG, rx_crc_errors");
+
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_DP_LOG("DEBUG, rx_frame_error");
+ mbuf = NULL;
+ goto rx_processing_done;
+ }
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(!new_mbuf)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Process the incoming frame. */
+ pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+ /* shows data with respect to the data_off field. */
+ index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index_r];
+
+ /* adjust pkt_len */
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+ if (rxq->fep->quirks & QUIRK_RACC)
+ rte_pktmbuf_adj(mbuf, 2);
+
+ /* Replace Buffer in BD */
+ rxq->rx_mbuf[index_r] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+
+ /* TX begins: First clean the ring then process packet */
+ index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index_t]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+ txq->tx_mbuf[index_t] = NULL;
+ }
+
+ if (mbuf == NULL)
+ continue;
+
+ /* Fill in a Tx ring entry */
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ pkt_len = rte_pktmbuf_pkt_len(mbuf);
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+
+ for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &tx_bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+
+ /* Save mbuf pointer to clean later */
+ txq->tx_mbuf[index_t] = mbuf;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+ }
+}
+#endif
+
+/* This function does enetfec_rx_queue processing: dequeue packets from the Rx
+ * queue and, while walking the ring, set the empty indicator on each consumed BD.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+ enetfec_lb_rxtx(rxq1);
+#endif
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_DP_LOG(DEBUG, "rx_fifo_error");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_DP_LOG(DEBUG, "rx_length_error");
+ if (status & RX_BD_LAST)
+ ENETFEC_DP_LOG(DEBUG, "rcv is not +last");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_DP_LOG(DEBUG, "rx_crc_errors");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_DP_LOG(DEBUG, "rx_frame_error");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ uint8_t *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_PMD_DEBUG("SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return nb_pkts;
+}
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 57f316b8a5..79dca58dea 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -7,4 +7,5 @@ if not is_linux
endif
sources = files('enet_ethdev.c',
- 'enet_uio.c')
+ 'enet_uio.c',
+ 'enet_rxtx.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v7 5/5] net/enetfec: add features
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
` (3 preceding siblings ...)
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-03 19:20 ` Apeksha Gupta
2021-11-04 18:31 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
5 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-03 19:20 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offload support to the enetfec network
poll mode driver.
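For context, a minimal sketch of how an application could request these Rx
offloads when configuring an enetfec port is shown below. This is illustrative
only; 'port_id' is assumed to be a valid port and the names follow the
RTE_ETH_* convention used in 21.11.

	#include <string.h>
	#include <rte_ethdev.h>

	static int
	configure_enetfec_port(uint16_t port_id)
	{
		struct rte_eth_conf port_conf;

		memset(&port_conf, 0, sizeof(port_conf));
		/* Ask for VLAN stripping and L3/L4 checksum validation on Rx */
		port_conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
					    RTE_ETH_RX_OFFLOAD_CHECKSUM;

		/* Single Rx/Tx queue, as multi-queue is listed as a limitation */
		return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	}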
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 17 ++++++++-
drivers/net/enetfec/enet_ethdev.h | 1 +
drivers/net/enetfec/enet_regs.h | 10 +++++
drivers/net/enetfec/enet_rxtx.c | 56 +++++++++++++++++++++++++++-
6 files changed, 87 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 47836630b6..132f0209e5 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 3d8aa5b627..2a34351b43 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -5,6 +5,9 @@
;
[Features]
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Basic stats = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 42b6cac26f..026752fa91 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -106,7 +106,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -597,9 +601,20 @@ static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
+ if (fep->quirks & QUIRK_VLAN)
+ /* enable hw VLAN support */
+ rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+ if (fep->quirks & QUIRK_CSUM) {
+ /* enable hw accelerator */
+ rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+ }
rte_eth_dev_probing_finish(dev);
return 0;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index e48f958ad9..e4cb8ef5a5 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -11,6 +11,7 @@
#define BD_LEN 49152
#define ENETFEC_TX_FR_SIZE 2048
#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
/* full duplex or half duplex */
#define HALF_DUPLEX 0x00
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 6ac4624553..3c110fa824 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -5,6 +5,7 @@
#include <signal.h>
#include <rte_mbuf.h>
#include <rte_io.h>
+#include <ethdev_driver.h>
#include "enet_regs.h"
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
@@ -245,11 +246,17 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
+
#if ENETFEC_LOOPBACK
enetfec_lb_rxtx(rxq1);
#endif
@@ -302,6 +309,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -311,6 +319,48 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, ETH_ALEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* don't check it */
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ } else {
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED
+ | RTE_MBUF_F_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -411,6 +461,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == RTE_MBUF_F_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-03 23:27 ` Ferruh Yigit
2021-11-04 18:24 ` Ferruh Yigit
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
2 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-03 23:27 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> @@ -882,6 +882,13 @@ F: drivers/net/enetc/
> F: doc/guides/nics/enetc.rst
> F: doc/guides/nics/features/enetc.ini
>
> +NXP enetfec
> +M: Apeksha Gupta<apeksha.gupta@nxp.com>
> +M: Sachin Saxena<sachin.saxena@nxp.com>
> +F: drivers/net/enetfec/
> +F: doc/guides/nics/enetfec.rst
> +F: doc/guides/nics/features/enetfec.ini
> +
Hi Apeksha,
There was a request from the techboard discussion to mark the driver as 'experimental'
because of the external 'fec-uio' kernel module dependency, which has not been
upstreamed yet and may change in ways that impact the driver.
I guess the meeting notes are not public yet, but please consult Hemant for details.
The driver can be marked as experimental with the two changes below:
1- Highlight it in the MAINTAINERS file as:
NXP enetfec - EXPERIMENTAL
<...>
2- In the driver guide (enetfec.rst), in the main section, briefly highlight that
the driver is experimental and explain the reasoning behind it (a possible wording is sketched below).
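Only a suggestion for the guide note, adjust as needed:

	.. note::

	   The ENETFEC PMD is marked as experimental because it depends on the
	   external 'fec-uio' kernel module, which has not been upstreamed yet
	   and may change in ways that affect this driver.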
Thanks,
ferruh
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-03 23:27 ` Ferruh Yigit
@ 2021-11-04 18:24 ` Ferruh Yigit
2021-11-08 19:13 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
2 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-04 18:24 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC i.MX 8M Mini.
>
> This patch adds skeleton for enetfec driver with probe function.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> v7:
> - Fix compilation
> - code cleanup
>
> v6:
> - Fix document build errors
> ---
> MAINTAINERS | 7 ++
> doc/guides/nics/enetfec.rst | 131 +++++++++++++++++++++++++
> doc/guides/nics/features/enetfec.ini | 9 ++
> doc/guides/nics/index.rst | 1 +
> doc/guides/rel_notes/release_21_11.rst | 6 +-
> drivers/net/enetfec/enet_ethdev.c | 85 ++++++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 58 +++++++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
> drivers/net/enetfec/meson.build | 9 ++
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 2 +-
> 11 files changed, 340 insertions(+), 2 deletions(-)
> create mode 100644 doc/guides/nics/enetfec.rst
> create mode 100644 doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c
> create mode 100644 drivers/net/enetfec/enet_ethdev.h
> create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/meson.build
> create mode 100644 drivers/net/enetfec/version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 0e5951f8f1..d000eb81af 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -882,6 +882,13 @@ F: drivers/net/enetc/
> F: doc/guides/nics/enetc.rst
> F: doc/guides/nics/features/enetc.ini
>
> +NXP enetfec
> +M: Apeksha Gupta <apeksha.gupta@nxp.com>
> +M: Sachin Saxena <sachin.saxena@nxp.com>
> +F: drivers/net/enetfec/
> +F: doc/guides/nics/enetfec.rst
> +F: doc/guides/nics/features/enetfec.ini
> +
> NXP pfe
> M: Gagandeep Singh <g.singh@nxp.com>
> F: doc/guides/nics/pfe.rst
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> new file mode 100644
> index 0000000000..dfcd032098
> --- /dev/null
> +++ b/doc/guides/nics/enetfec.rst
> @@ -0,0 +1,131 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 NXP
> +
> +ENETFEC Poll Mode Driver
> +========================
> +
> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
> +
> +More information can be found at NXP Official Website
> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> +
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK.
> +
> +Contents summary
> +
> +- ENETFEC overview
> +- ENETFEC features
> +- Supported ENETFEC SoCs
> +- Prerequisites
> +- Driver compilation and testing
> +- Limitations
> +
> +ENETFEC Overview
> +~~~~~~~~~~~~~~~~
> +The i.MX 8M Mini Media Applications Processor is built to achieve both
> +high performance and low power consumption. ENETFEC PMD is a hardware
> +programmable packet forwarding engine to provide high performance
> +Ethernet interface. It has only 1 GB Ethernet interface with RJ45
> +connector.
> +
> +The diagram below shows a system level overview of ENETFEC:
> +
> + .. code-block:: console
> +
> + =====================================================
> + Userspace
> + +-----------------------------------------+
> + | ENETFEC Driver |
> + | +-------------------------+ |
> + | | virtual ethernet device | |
> + +-----------------------------------------+
> + ^ |
> + | |
> + | |
> + RXQ | | TXQ
> + | |
> + | v
> + =====================================================
> + Kernel Space
> + +---------+
> + | fec-uio |
> + ====================+=========+======================
> + Hardware
> + +-----------------------------------------+
> + | i.MX 8M MINI EVK |
> + | +-----+ |
> + | | MAC | |
> + +---------------+-----+-------------------+
> + | PHY |
> + +-----+
> +
> +ENETFEC Ethernet driver is traditional DPDK PMD driver running in the
> +userspace.'fec-uio' is the kernel driver. The MAC and PHY are the hardware
> +blocks. ENETFEC PMD uses standard UIO interface to access kernel for PHY
> +initialisation and for mapping the allocated memory of register & buffer
> +descriptor with DPDK which gives access to non-cacheable memory for buffer
> +descriptor. net_enetfec is logical Ethernet interface, created by ENETFEC
> +driver.
> +
> +- ENETFEC driver registers the device in virtual device driver.
> +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> +- The probe function will set the basic device registers and also setups BD rings.
> +- On packet Rx the respective BD Ring status bit is set which is then used for
> + packet processing.
> +- Then Tx is done first followed by Rx via logical interfaces.
> +
> +ENETFEC Features
> +~~~~~~~~~~~~~~~~~
> +
> +- Linux
> +- ARMv8
> +
> +Supported ENETFEC SoCs
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- i.MX 8M Mini
> +
> +Prerequisites
> +~~~~~~~~~~~~~
> +
> +There are three main pre-requisites for executing ENETFEC PMD on a i.MX 8M Mini
> +compatible board:
> +
> +1. **ARM 64 Tool Chain**
> +
> + For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> +
> +2. **Linux Kernel**
> +
> + It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
> +
> + .. note::
> +
> + Branch is 'lf-5.10.y'
> +
> +3. **Rootfile system**
> +
> + Any *aarch64* supporting filesystem can be used. For example,
> + Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be obtained
> + from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> +
> +4. The Ethernet device will be registered as virtual device, so ENETFEC has dependency on
> + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
> + run DPDK application.
> +
> +Driver compilation and testing
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Follow instructions available in the document
> +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +to launch **dpdk-testpmd**
> +
> +Limitations
> +~~~~~~~~~~~
> +
> +- Multi queue is not supported.
The question on the above limitation went unanswered in the previous version;
instead of sending a new version without replying, it would be better if you
could answer the questions so we reach a common understanding.
Back to the question,
in 'enetfec_eth_info()', 'max_rx_queues'/'max_tx_queues' set to 'ENETFEC_MAX_Q'
and
#define ENETFEC_MAX_Q 3
Also file comment says:
* ENETFEC with AVB IP can support maximum 3 rx and tx queues.
The device reports, and the file comment documents, that 3 queues are supported,
but the documented limitation says multi-queue is not supported. Which one is
correct?
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> new file mode 100644
> index 0000000000..bdfbdbd9d4
> --- /dev/null
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -0,0 +1,9 @@
> +;
> +; Supported features of the 'enetfec' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux = Y
> +ARMv8 = Y
> +Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 784d5d39f6..777fdab4a0 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -26,6 +26,7 @@ Network Interface Controller Drivers
> e1000em
> ena
> enetc
> + enetfec
> enic
> fm10k
> hinic
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 502cc5ceb2..aed380c21f 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -20,7 +20,6 @@ DPDK Release 21.11
> ninja -C build doc
> xdg-open build/doc/guides/html/rel_notes/release_21_11.html
>
> -
unrelated change.
> New Features
> ------------
>
> @@ -135,6 +134,11 @@ New Features
>
> Added an ethdev API which can help users get device configuration.
>
> +* **Added NXP ENETFEC PMD.**
> +
> + Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
> + :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
> +
PMDs are ordered by vendor name; can you please move this block below the
'Mellanox' PMD update?
> * **Updated AF_XDP PMD.**
>
> * Disabled secondary process support.
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> new file mode 100644
> index 0000000000..a6c4bcbf2e
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -0,0 +1,85 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <stdio.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <sys/mman.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_vdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_dev.h>
> +#include <rte_ether.h>
> +#include "enet_pmd_logs.h"
> +#include "enet_ethdev.h"
> +
> +#define ENETFEC_NAME_PMD net_enetfec
> +#define ENETFEC_CDEV_INVALID_FD -1
Is this macro used at all?
> +
> +static int
> +enetfec_eth_init(struct rte_eth_dev *dev)
> +{
> + rte_eth_dev_probing_finish(dev);
> + return 0;
> +}
> +
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc;
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
At this point 'name' shouldn't be NULL, so I think you can drop
the check.
But can you please double-check to be sure?
> + ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> +
> + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> + if (dev == NULL)
> + return -ENOMEM;
> +
> + /* setup board info structure */
> + fep = dev->data->dev_private;
> + fep->dev = dev;
> + rc = enetfec_eth_init(dev);
> + if (rc)
> + goto failed_init;
> +
> + return 0;
> +
> +failed_init:
> + ENETFEC_PMD_ERR("Failed to init");
> + return rc;
> +}
> +
> +static int
> +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *eth_dev = NULL;
> + int ret;
> +
> + /* find the ethdev entry */
> + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> + if (eth_dev == NULL)
> + return -ENODEV;
> +
> + ret = rte_eth_dev_release_port(eth_dev);
> + if (ret != 0)
> + return -EINVAL;
> +
> + ENETFEC_PMD_INFO("Release enetfec sw device");
> + return 0;
> +}
> +
> +static struct rte_vdev_driver pmd_enetfec_drv = {
> + .probe = pmd_enetfec_probe,
> + .remove = pmd_enetfec_remove,
> +};
> +
> +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> +RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
> diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
> new file mode 100644
> index 0000000000..0e4558dd86
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.h
> @@ -0,0 +1,58 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef __ENETFEC_ETHDEV_H__
> +#define __ENETFEC_ETHDEV_H__
> +
> +/*
> + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> + */
> +
> +#define ENETFEC_MAX_Q 3
> +
> +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
> + * descriptor base is x_bd_base. Currently available buffer are x_cur
> + * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
> + * that is sent by the controller.
> + * The tx_cur and dirty_tx are same in completely full and empty
> + * conditions. Actual condition is determined by empty & ready bits.
Is the above comment correct?
Where are the mentioned 'x_bd_base', 'rx_cur', 'tx_cur', 'dirty_tx', etc. actually defined or used?
> + */
> +struct enetfec_private {
> + struct rte_eth_dev *dev;
> + struct rte_eth_stats stats;
> + struct rte_mempool *pool;
> + uint16_t max_rx_queues;
> + uint16_t max_tx_queues;
> + unsigned int total_tx_ring_size;
> + unsigned int total_rx_ring_size;
> + bool bufdesc_ex;
> + unsigned int tx_align;
> + unsigned int rx_align;
> + int full_duplex;
> + unsigned int phy_speed;
> + uint32_t quirks;
> + int flag_csum;
> + int flag_pause;
> + int flag_wol;
> + bool rgmii_txc_delay;
> + bool rgmii_rxc_delay;
> + int link;
> + void *hw_baseaddr_v;
> + uint64_t hw_baseaddr_p;
> + void *bd_addr_v;
> + uint64_t bd_addr_p;
> + uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
> + uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
> + void *dma_baseaddr_r[ENETFEC_MAX_Q];
> + void *dma_baseaddr_t[ENETFEC_MAX_Q];
> + uint64_t cbus_size;
> + unsigned int reg_size;
> + unsigned int bd_size;
> + int hw_ts_rx_en;
> + int hw_ts_tx_en;
> + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
> + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
Most of these fields are not used at all; why not construct the struct
by adding the fields as you use them? This prevents clutter from accumulating.
For example, is 'flag_wol' above used at all in the driver?
> +};
> +
> +#endif /*__ENETFEC_ETHDEV_H__*/
> diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
> new file mode 100644
> index 0000000000..e7b3964a0e
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_pmd_logs.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#ifndef _ENETFEC_LOGS_H_
> +#define _ENETFEC_LOGS_H_
> +
> +extern int enetfec_logtype_pmd;
> +
> +/* PMD related logs */
> +#define ENETFEC_PMD_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
> + fmt "\n", __func__, ##args)
> +
> +#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
> +
> +#define ENETFEC_PMD_DEBUG(fmt, args...) \
> + ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
> +#define ENETFEC_PMD_ERR(fmt, args...) \
> + ENETFEC_PMD_LOG(ERR, fmt, ## args)
> +#define ENETFEC_PMD_INFO(fmt, args...) \
> + ENETFEC_PMD_LOG(INFO, fmt, ## args)
> +
> +#define ENETFEC_PMD_WARN(fmt, args...) \
> + ENETFEC_PMD_LOG(WARNING, fmt, ## args)
> +
> +/* DP Logs, toggled out at compile time if level lower than current level */
> +#define ENETFEC_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, fmt, ## args)
> +
> +#endif /* _ENETFEC_LOGS_H_ */
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> new file mode 100644
> index 0000000000..6d6c64c94b
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,9 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
> +endif
> +
> +sources = files('enet_ethdev.c')
> diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
> new file mode 100644
> index 0000000000..b66517b171
> --- /dev/null
> +++ b/drivers/net/enetfec/version.map
> @@ -0,0 +1,3 @@
> +DPDK_22 {
> + local: *;
> +};
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index bcf488f203..ac294d8507 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -12,13 +12,13 @@ drivers = [
> 'bnx2x',
> 'bnxt',
> 'bonding',
> - 'cnxk',
Better to not remove competitor drivers ;)
> 'cxgbe',
> 'dpaa',
> 'dpaa2',
> 'e1000',
> 'ena',
> 'enetc',
> + 'enetfec',
> 'enic',
> 'failsafe',
> 'fm10k',
>
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v7 2/5] net/enetfec: add UIO support
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-11-04 18:25 ` Ferruh Yigit
2021-11-08 20:24 ` [dpdk-dev] [EXT] " Apeksha Gupta
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-04 18:25 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> Implemented the fec-uio driver in kernel. enetfec PMD uses
> UIO interface to interact with "fec-uio" driver implemented in
> kernel for PHY initialisation and for mapping the allocated memory
> of register & BD from kernel to DPDK which gives access to
> non-cacheable memory for BD.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> drivers/net/enetfec/enet_ethdev.c | 227 ++++++++++++++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 14 ++
> drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
> drivers/net/enetfec/enet_uio.c | 278 ++++++++++++++++++++++++++++++
> drivers/net/enetfec/enet_uio.h | 64 +++++++
> drivers/net/enetfec/meson.build | 3 +-
> 6 files changed, 691 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/enetfec/enet_regs.h
> create mode 100644 drivers/net/enetfec/enet_uio.c
> create mode 100644 drivers/net/enetfec/enet_uio.h
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index a6c4bcbf2e..410c395039 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -13,16 +13,212 @@
> #include <rte_bus_vdev.h>
> #include <rte_dev.h>
> #include <rte_ether.h>
> +#include <rte_io.h>
> #include "enet_pmd_logs.h"
> #include "enet_ethdev.h"
> +#include "enet_regs.h"
> +#include "enet_uio.h"
>
> #define ENETFEC_NAME_PMD net_enetfec
> #define ENETFEC_CDEV_INVALID_FD -1
> +#define BIT(nr) (1u << (nr))
We already have the 'RTE_BIT32' macro; it can be used instead of defining
a new one.
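For example (a sketch, keeping the same bit positions as in this patch), the
local BIT() define could simply be replaced with the EAL helper:

	#include <rte_bitops.h>

	/* FEC receive acceleration */
	#define ENETFEC_RACC_IPDIS	RTE_BIT32(1)
	#define ENETFEC_RACC_PRODIS	RTE_BIT32(2)
	#define ENETFEC_RACC_SHIFT16	RTE_BIT32(7)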
> +
> +/* FEC receive acceleration */
> +#define ENETFEC_RACC_IPDIS BIT(1)
> +#define ENETFEC_RACC_PRODIS BIT(2)
> +#define ENETFEC_RACC_SHIFT16 BIT(7)
> +#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
> + ENETFEC_RACC_PRODIS)
> +
> +#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
> +#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
> +
> +/* Pause frame field and FIFO threshold */
> +#define ENETFEC_FCE BIT(5)
> +#define ENETFEC_RSEM_V 0x84
> +#define ENETFEC_RSFL_V 16
> +#define ENETFEC_RAEM_V 0x8
> +#define ENETFEC_RAFL_V 0x8
> +#define ENETFEC_OPD_V 0xFFF0
> +
> +#define NUM_OF_BD_QUEUES 6
> +
> +static uint32_t enetfec_e_cntl;
> +
Again, the question on the usage of this global variable in the previous version
was not answered, so let me copy/paste it here:
Is this global variable really needed? Most of the time what you need is a
per-port variable.
For example, I can see this variable is updated based on port start/stop.
What if you have multiple ports that are in different start/stop states?
Will the value of the variable still be correct?
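A rough sketch of the per-port alternative (illustrative only) would be to keep
the ECR value in the private data instead of a file-scope global:

	/* enet_ethdev.h: per-port copy instead of the global enetfec_e_cntl */
	struct enetfec_private {
		/* ... existing fields unchanged ... */
		uint32_t enetfec_e_cntl;	/* ECR value programmed for this port */
	};

	/* enet_ethdev.c, enetfec_restart(): */
	fep->enetfec_e_cntl = ecntl;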
> +/*
> + * This function is called to start or restart the ENETFEC during a link
> + * change, transmit timeout, or to reconfigure the ENETFEC. The network
> + * packet processing for this device must be stopped before this call.
> + */
> +static void
> +enetfec_restart(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + uint32_t temp_mac[2];
> + uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
> + uint32_t ecntl = ENETFEC_ETHEREN;
> +
> + /* default mac address */
> + struct rte_ether_addr addr = {
> + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
This MAC address is set on the device, but the device data, 'dev->data->mac_addrs',
doesn't have it.
- Is setting the MAC required on each restart?
- What about saving the MAC address to 'dev->data->mac_addrs' and using that
value in restart? In that case the MAC value in the device data and the actual
device config would match.
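A rough sketch of the second option (assuming 'dev->data->mac_addrs[0]' is
populated at probe time) could look like:

	/* enetfec_restart(): reuse the stored port MAC instead of a hard-coded one */
	struct rte_ether_addr *addr = &dev->data->mac_addrs[0];
	uint32_t temp_mac[2];

	memcpy(&temp_mac, addr->addr_bytes, ETH_ALEN);
	rte_write32(rte_cpu_to_be_32(temp_mac[0]),
		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
	rte_write32(rte_cpu_to_be_32(temp_mac[1]),
		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);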
> + uint32_t val;
> +
> + /*
> + * enet-mac reset will reset mac address registers too,
> + * so need to reconfigure it.
> + */
> + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
> + rte_write32(rte_cpu_to_be_32(temp_mac[0]),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
> + rte_write32(rte_cpu_to_be_32(temp_mac[1]),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
> +
> + /* Clear any outstanding interrupt. */
> + writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
> +
> + /* Enable MII mode */
> + if (fep->full_duplex == FULL_DUPLEX) {
> + /* FD enable */
> + rte_write32(rte_cpu_to_le_32(0x04),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
> + } else {
> + /* No Rcv on Xmit */
> + rcntl |= 0x02;
> + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
> + }
> +
> + if (fep->quirks & QUIRK_RACC) {
> + val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
> + /* align IP header */
> + val |= ENETFEC_RACC_SHIFT16;
> + val &= ~ENETFEC_RACC_OPTIONS;
> + rte_write32(rte_cpu_to_le_32(val),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
> + rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
> + }
> +
> + /*
> + * The phy interface and speed need to get configured
> + * differently on enet-mac.
> + */
> + if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
> + /* Enable flow control and length check */
> + rcntl |= 0x40000000 | 0x00000020;
> +
> + /* RGMII, RMII or MII */
> + rcntl |= BIT(6);
> + ecntl |= BIT(5);
> + }
> +
> + /* enable pause frame*/
> + if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
> + ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
> + /*&& ndev->phydev && ndev->phydev->pause*/)) {
> + rcntl |= ENETFEC_FCE;
> +
> + /* set FIFO threshold parameter to reduce overrun */
> + rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
> + rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
> + rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
> + rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
> +
> + /* OPD */
> + rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
> + } else {
> + rcntl &= ~ENETFEC_FCE;
> + }
> +
> + rte_write32(rte_cpu_to_le_32(rcntl),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
> +
> + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
> + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
> +
> + if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
> + /* enable ENETFEC endian swap */
> + ecntl |= (1 << 8);
> + /* enable ENETFEC store and forward mode */
> + rte_write32(rte_cpu_to_le_32(1 << 8),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
> + }
> + if (fep->bufdesc_ex)
> + ecntl |= (1 << 4);
> + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> + fep->rgmii_txc_delay)
> + ecntl |= ENETFEC_TXC_DLY;
> + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> + fep->rgmii_rxc_delay)
> + ecntl |= ENETFEC_RXC_DLY;
> + /* Enable the MIB statistic event counters */
> + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
> +
> + ecntl |= 0x70000000;
> + enetfec_e_cntl = ecntl;
> + /* And last, enable the transmit and receive processing */
> + rte_write32(rte_cpu_to_le_32(ecntl),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
> + rte_delay_us(10);
> +}
> +
> +static int
> +enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
unnecessary '__rte_unused'
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v7 3/5] net/enetfec: support queue configuration
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-04 18:26 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-04 18:26 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> This patch adds Rx/Tx queue configuration setup operations.
> On packet reception the respective BD Ring status bit is set
> which is then used for packet processing.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++++++-
> drivers/net/enetfec/enet_ethdev.h | 73 ++++++++++
> 2 files changed, 302 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index 410c395039..aa96093eb8 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -45,6 +45,19 @@
>
> static uint32_t enetfec_e_cntl;
>
> +/* Supported Rx offloads */
> +static uint64_t dev_rx_offloads_sup =
> + DEV_RX_OFFLOAD_IPV4_CKSUM |
> + DEV_RX_OFFLOAD_UDP_CKSUM |
> + DEV_RX_OFFLOAD_TCP_CKSUM |
> + DEV_RX_OFFLOAD_VLAN_STRIP |
> + DEV_RX_OFFLOAD_CHECKSUM;
> +
> +static uint64_t dev_tx_offloads_sup =
> + DEV_TX_OFFLOAD_IPV4_CKSUM |
> + DEV_TX_OFFLOAD_UDP_CKSUM |
> + DEV_TX_OFFLOAD_TCP_CKSUM;
> +
The comment on the previous version seems to have been ignored; copying it down:
The macro names are updated in ethdev, can you please update them?
like: DEV_RX_OFFLOAD_IPV4_CKSUM -> RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
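A minimal sketch of the rename, assuming the RTE_ETH_-prefixed offload macros
introduced in ethdev for DPDK 21.11 (same offloads, new names):

    /* sketch: the same offload lists with the renamed ethdev macros */
    static uint64_t dev_rx_offloads_sup =
            RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
            RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
            RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
            RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
            RTE_ETH_RX_OFFLOAD_CHECKSUM;

    static uint64_t dev_tx_offloads_sup =
            RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
            RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
            RTE_ETH_TX_OFFLOAD_TCP_CKSUM;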
> /*
> * This function is called to start or restart the ENETFEC during a link
> * change, transmit timeout, or to reconfigure the ENETFEC. The network
> @@ -204,10 +217,225 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
> return 0;
> }
>
> +static int
> +enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
> + struct rte_eth_dev_info *dev_info)
> +{
> + dev_info->max_rx_queues = ENETFEC_MAX_Q;
> + dev_info->max_tx_queues = ENETFEC_MAX_Q;
> + dev_info->rx_offload_capa = dev_rx_offloads_sup;
> + dev_info->tx_offload_capa = dev_tx_offloads_sup;
> + return 0;
> +}
> +
> +static const unsigned short offset_des_active_rxq[] = {
> + ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
> +};
> +
> +static const unsigned short offset_des_active_txq[] = {
> + ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
> +};
> +
> +static int
> +enetfec_tx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_desc,
> + unsigned int socket_id __rte_unused,
> + const struct rte_eth_txconf *tx_conf)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> + struct bufdesc *bdp, *bd_base;
> + struct enetfec_priv_tx_q *txq;
> + unsigned int size;
> + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> + sizeof(struct bufdesc);
> + unsigned int dsize_log2 = fls64(dsize);
> +
> + /* Tx deferred start is not supported */
> + if (tx_conf->tx_deferred_start) {
> + ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
> + (void *)dev);
> + return -EINVAL;
> + }
> +
> + /* allocate transmit queue */
> + txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
> + if (txq == NULL) {
> + ENETFEC_PMD_ERR("transmit queue allocation failed");
> + return -ENOMEM;
> + }
> +
> + if (nb_desc > MAX_TX_BD_RING_SIZE) {
> + nb_desc = MAX_TX_BD_RING_SIZE;
> + ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
Redundant '\n', as macro already adds one.
Can you please search and fix all usages?
> + }
> + txq->bd.ring_size = nb_desc;
> + fep->total_tx_ring_size += txq->bd.ring_size;
> + fep->tx_queues[queue_idx] = txq;
> +
> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
> +
> + /* Set transmit descriptor base. */
> + txq = fep->tx_queues[queue_idx];
> + txq->fep = fep;
> + size = dsize * txq->bd.ring_size;
> + bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
> + txq->bd.queue_id = queue_idx;
> + txq->bd.base = bd_base;
> + txq->bd.cur = bd_base;
> + txq->bd.d_size = dsize;
> + txq->bd.d_size_log2 = dsize_log2;
> + txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
> + offset_des_active_txq[queue_idx];
> + bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
> + txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
> + bdp = txq->bd.base;
> + bdp = txq->bd.cur;
> +
> + for (i = 0; i < txq->bd.ring_size; i++) {
> + /* Initialize the BD for every fragment in the page. */
> + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
> + if (txq->tx_mbuf[i] != NULL) {
> + rte_pktmbuf_free(txq->tx_mbuf[i]);
> + txq->tx_mbuf[i] = NULL;
> + }
> + rte_write32(0, &bdp->bd_bufaddr);
> + bdp = enet_get_nextdesc(bdp, &txq->bd);
> + }
> +
> + /* Set the last buffer to wrap */
> + bdp = enet_get_prevdesc(bdp, &txq->bd);
> + rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
> + rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> + txq->dirty_tx = bdp;
> + dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
> + return 0;
> +}
> +
> +static int
> +enetfec_rx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_rx_desc,
> + unsigned int socket_id __rte_unused,
> + const struct rte_eth_rxconf *rx_conf,
> + struct rte_mempool *mb_pool)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> + struct bufdesc *bd_base;
> + struct bufdesc *bdp;
> + struct enetfec_priv_rx_q *rxq;
> + unsigned int size;
> + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> + sizeof(struct bufdesc);
> + unsigned int dsize_log2 = fls64(dsize);
> +
> + /* Rx deferred start is not supported */
> + if (rx_conf->rx_deferred_start) {
> + ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
> + (void *)dev);
> + return -EINVAL;
> + }
> +
> + /* allocate receive queue */
> + rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
> + if (rxq == NULL) {
> + ENETFEC_PMD_ERR("receive queue allocation failed");
> + return -ENOMEM;
> + }
> +
> + if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
> + nb_rx_desc = MAX_RX_BD_RING_SIZE;
> + ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
> + }
> +
> + rxq->bd.ring_size = nb_rx_desc;
> + fep->total_rx_ring_size += rxq->bd.ring_size;
> + fep->rx_queues[queue_idx] = rxq;
> +
> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
> + rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
> + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
> +
> + /* Set receive descriptor base. */
> + rxq = fep->rx_queues[queue_idx];
> + rxq->pool = mb_pool;
> + size = dsize * rxq->bd.ring_size;
> + bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
> + rxq->bd.queue_id = queue_idx;
> + rxq->bd.base = bd_base;
> + rxq->bd.cur = bd_base;
> + rxq->bd.d_size = dsize;
> + rxq->bd.d_size_log2 = dsize_log2;
> + rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
> + offset_des_active_rxq[queue_idx];
> + bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
> + rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
> +
> + rxq->fep = fep;
> + bdp = rxq->bd.base;
> + rxq->bd.cur = bdp;
> +
> + for (i = 0; i < nb_rx_desc; i++) {
> + /* Initialize Rx buffers from pktmbuf pool */
> + struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
> + if (mbuf == NULL) {
> + ENETFEC_PMD_ERR("mbuf failed\n");
> + goto err_alloc;
Wrong indentation.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/5] net/enetfec: add Rx/Tx support
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-04 18:28 ` Ferruh Yigit
2021-11-09 16:20 ` [dpdk-dev] [EXT] " Apeksha Gupta
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-04 18:28 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> This patch adds burst enqueue and dequeue operations to the enetfec
> PMD. Loopback mode is also added, compile time flag 'ENETFEC_LOOPBACK' is
> used to enable this feature. By default loopback mode is disabled.
> Basic features added like promiscuous enable, basic stats.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> +static int
> +enetfec_eth_link_update(struct rte_eth_dev *dev,
> + int wait_to_complete __rte_unused)
> +{
> + struct rte_eth_link link;
> + unsigned int lstatus = 1;
> +
> + if (dev == NULL) {
'dev' can't be null for a dev_ops,
unless it is called internally in PMD which seems not the case here.
> + ENETFEC_PMD_ERR("Invalid device in link_update.");
> + return 0;
> + }
> +
> + memset(&link, 0, sizeof(struct rte_eth_link));
> +
> + link.link_status = lstatus;
> + link.link_speed = ETH_SPEED_NUM_1G;
Isn't there an actual way to get real link status from device?
> +
> + ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
> + "Up");
> +
> + return rte_eth_linkstatus_set(dev, &link);
> +}
> +
<...>
> @@ -501,6 +658,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> fep->bd_addr_p = fep->bd_addr_p + bdsize;
> }
>
> + /* Copy the station address into the dev structure, */
> + dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
> + if (dev->data->mac_addrs == NULL) {
> + ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
> + ETHER_ADDR_LEN);
> + rc = -ENOMEM;
> + goto err;
> + }
> +
> + /*
> + * Set default mac address
> + */
> + enetfec_set_mac_address(dev, &macaddr);
In each device start, a different MAC address is set by 'enetfec_restart()'.
I also put a comment there, but there seem to be two different MAC addresses
set in two different parts of the driver.
> +
> + fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
> rc = enetfec_eth_init(dev);
> if (rc)
> goto failed_init;
> @@ -509,6 +681,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
>
> failed_init:
> ENETFEC_PMD_ERR("Failed to init");
> +err:
> + rte_eth_dev_release_port(dev);
> return rc;
> }
>
> @@ -516,6 +690,8 @@ static int
> pmd_enetfec_remove(struct rte_vdev_device *vdev)
> {
> struct rte_eth_dev *eth_dev = NULL;
> + struct enetfec_private *fep;
> + struct enetfec_priv_rx_q *rxq;
> int ret;
>
> /* find the ethdev entry */
> @@ -523,11 +699,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
> if (eth_dev == NULL)
> return -ENODEV;
>
> + fep = eth_dev->data->dev_private;
> + /* Free descriptor base of first RX queue as it was configured
> + * first in enetfec_eth_init().
> + */
> + rxq = fep->rx_queues[0];
> + rte_free(rxq->bd.base);
> + enet_free_queue(eth_dev);
> + enetfec_eth_stop(eth_dev);
> +
> ret = rte_eth_dev_release_port(eth_dev);
> if (ret != 0)
> return -EINVAL;
>
> ENETFEC_PMD_INFO("Release enetfec sw device");
> + munmap(fep->hw_baseaddr_v, fep->cbus_size);
instead of calling munmap directly here, what about having a cleanup function
in 'enet_uio.c' and calling it from here?
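A minimal sketch, assuming a new (hypothetical) enetfec_cleanup() helper placed
in enet_uio.c next to the existing mapping code; names are illustrative only:

    /* enet_uio.c: hypothetical counterpart to the UIO mapping done at probe,
     * requires <sys/mman.h>
     */
    void
    enetfec_cleanup(struct enetfec_private *fep)
    {
            /* release the CSR/BD mapping obtained through the fec-uio device */
            munmap(fep->hw_baseaddr_v, fep->cbus_size);
    }

pmd_enetfec_remove() would then call enetfec_cleanup(fep) instead of invoking
munmap() directly.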
> +
> return 0;
> }
>
> diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
> index 36202ba6c7..e48f958ad9 100644
> --- a/drivers/net/enetfec/enet_ethdev.h
> +++ b/drivers/net/enetfec/enet_ethdev.h
> @@ -7,6 +7,11 @@
>
> #include <rte_ethdev.h>
>
> +#define ETHER_ADDR_LEN 6
Below, 'ETH_ALEN' is defined, seemingly for the same reason.
And in DPDK we already have 'RTE_ETHER_ADDR_LEN'.
To prevent all this redundancy, why not drop 'ETH_ALEN' & 'ETHER_ADDR_LEN'
and just use 'RTE_ETHER_ADDR_LEN'?
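A minimal sketch of the consolidation, assuming only RTE_ETHER_ADDR_LEN from
rte_ether.h is kept and the local aliases are removed:

    /* sketch: drop the local ETH_ALEN/ETHER_ADDR_LEN aliases */
    dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
    memcpy(&temp_mac, addr.addr_bytes, RTE_ETHER_ADDR_LEN);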
> +#define BD_LEN 49152
> +#define ENETFEC_TX_FR_SIZE 2048
> +#define ETH_HLEN RTE_ETHER_HDR_LEN
> +
> /* full duplex or half duplex */
> #define HALF_DUPLEX 0x00
> #define FULL_DUPLEX 0x01
> @@ -19,6 +24,20 @@
> #define ETH_ALEN RTE_ETHER_ADDR_LEN
>
> #define __iomem
> +#if defined(RTE_ARCH_ARM)
> +#if defined(RTE_ARCH_64)
> +#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
> +#define dcbf_64(p) dcbf(p)
> +
> +#else /* RTE_ARCH_32 */
> +#define dcbf(p) RTE_SET_USED(p)
> +#define dcbf_64(p) dcbf(p)
> +#endif
> +
> +#else
> +#define dcbf(p) RTE_SET_USED(p)
> +#define dcbf_64(p) dcbf(p)
> +#endif
>
> /*
> * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> @@ -142,4 +161,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
> return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> }
>
> +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts);
> +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> + uint16_t nb_pkts);
> +
> #endif /*__ENETFEC_ETHDEV_H__*/
> diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
> new file mode 100644
> index 0000000000..6ac4624553
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_rxtx.c
> @@ -0,0 +1,445 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#include <signal.h>
> +#include <rte_mbuf.h>
> +#include <rte_io.h>
> +#include "enet_regs.h"
> +#include "enet_ethdev.h"
> +#include "enet_pmd_logs.h"
> +
> +#define ENETFEC_LOOPBACK 0
> +#define ENETFEC_DUMP 0
There was a request to convert them into devargs, which again seems to have
been silently ignored; copying from the previous version:
Instead of compile-time flags, why not convert them to devargs so
they can be updated without recompiling?
This also makes sure all the code is enabled and prevents possible dead
code over time.
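A minimal sketch of the devargs alternative, assuming a hypothetical 'loopback'
key parsed with rte_kvargs at probe time; the key name and helpers are
illustrative, not part of the submitted driver:

    #define ENETFEC_DEVARG_LOOPBACK "loopback"      /* hypothetical key */

    static const char * const enetfec_valid_args[] = {
            ENETFEC_DEVARG_LOOPBACK,
            NULL,
    };

    static int
    enetfec_parse_loopback(const char *key __rte_unused, const char *value,
                           void *opaque)
    {
            /* store the integer value of the devarg into the caller's flag */
            *(int *)opaque = atoi(value);
            return 0;
    }

    static int
    enetfec_parse_devargs(struct rte_vdev_device *vdev, int *loopback)
    {
            struct rte_kvargs *kvlist;

            kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev),
                                      enetfec_valid_args);
            if (kvlist == NULL)
                    return -EINVAL;
            rte_kvargs_process(kvlist, ENETFEC_DEVARG_LOOPBACK,
                               enetfec_parse_loopback, loopback);
            rte_kvargs_free(kvlist);
            return 0;
    }

    RTE_PMD_REGISTER_PARAM_STRING(net_enetfec,
                                  ENETFEC_DEVARG_LOOPBACK "=<int>");

The probe path would call enetfec_parse_devargs() and select the loopback
behaviour at runtime, e.g. --vdev=net_enetfec,loopback=1.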
> +
> +#if ENETFEC_DUMP
> +static void
> +enet_dump(struct enetfec_priv_tx_q *txq)
> +{
> + struct bufdesc *bdp;
> + int index = 0;
> +
> + ENETFEC_PMD_DEBUG("TX ring dump\n");
> + ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
> +
> + bdp = txq->bd.base;
> + do {
> + ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
> + index,
> + bdp == txq->bd.cur ? 'S' : ' ',
> + bdp == txq->dirty_tx ? 'H' : ' ',
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> + txq->tx_mbuf[index]);
> + bdp = enet_get_nextdesc(bdp, &txq->bd);
> + index++;
> + } while (bdp != txq->bd.base);
> +}
> +
> +static void
> +enet_dump_rx(struct enetfec_priv_rx_q *rxq)
> +{
> + struct bufdesc *bdp;
> + int index = 0;
> +
> + ENETFEC_PMD_DEBUG("RX ring dump\n");
> + ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
> +
> + bdp = rxq->bd.base;
> + do {
> + ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
> + index,
> + bdp == rxq->bd.cur ? 'S' : ' ',
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> + rxq->rx_mbuf[index]);
> + rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
> + rxq->rx_mbuf[index]->pkt_len);
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + index++;
> + } while (bdp != rxq->bd.base);
> +}
> +#endif
> +
> +#if ENETFEC_LOOPBACK
> +static volatile bool lb_quit;
> +
> +static void fec_signal_handler(int signum)
> +{
> + if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
> + ENETFEC_PMD_INFO("\n\n %s: Signal %d received, preparing to exit...\n",
> + __func__, signum);
> + lb_quit = true;
> + }
> +}
> +
Again, another comment from the previous version was ignored: we shouldn't have
signals handled by the driver, that is the application's task.
I am stopping the review here; there are too many things from the previous
version that were just ignored. Can you please check/answer the comments on the
previous version of the set?
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/5] drivers/net: add NXP ENETFEC driver
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
` (4 preceding siblings ...)
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-11-04 18:31 ` Ferruh Yigit
5 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-04 18:31 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> This patch series introduce the enetfec driver, ENETFEC
> (Fast Ethernet Controller) is a network poll mode driver for
> the inbuilt NIC found in the NXP i.MX 8M Mini SoC.
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO interface so that user space directly communicate with
> a UIO based hardware device. UIO interface mmap the Control and Status
> Registers (CSR) & BD memory in DPDK which is allocated in kernel and this
> gives access to non-cacheble memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
> Patch 5 adds checksum and VLAN features.
>
> Apeksha Gupta (5):
> net/enetfec: introduce NXP ENETFEC driver
> net/enetfec: add UIO support
> net/enetfec: support queue configuration
> net/enetfec: add Rx/Tx support
> net/enetfec: add features
>
Hi Apeksha,
There are many comments to the previous version seems just ignored.
I have not finished reviewing this version, but to proceed can you
please first address the comments on both the previous version and this one?
Also there are some questions, it would be great if you can answer them.
Thanks,
ferruh
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v6 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-10-27 14:18 ` Ferruh Yigit
@ 2021-11-08 18:42 ` Apeksha Gupta
0 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-08 18:42 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, October 27, 2021 7:49 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>;
> david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant
> Agrawal <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce NXP
> ENETFEC driver
>
> Caution: EXT Email
>
> On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> > ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> > for NXP SoC i.MX 8M Mini.
> >
> > This patch adds skeleton for enetfec driver with probe function.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
> > +Follow instructions available in the document
> > +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> > +to launch **dpdk-testpmd**
> > +
> > +Limitations
> > +~~~~~~~~~~~
> > +
> > +- Multi queue is not supported.
>
> In 'enetfec_eth_info()',
> max_rx_queues/max_tx_queues are returned as 3 (ENETFEC_MAX_Q).
> If multi queue is not supported, why is it not one?
[Apeksha] I agree. As multi queue is not supported, we will update ENETFEC_MAX_Q from 3 to 1 in the v8 patch series.
>
> <...>
>
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -20,6 +20,10 @@ DPDK Release 21.11
> > ninja -C build doc
> > xdg-open build/doc/guides/html/rel_notes/release_21_11.html
> >
> > +* **Added NXP ENETFEC PMD.**
> > +
> > + Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See
> the
> > + :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
> >
>
> Update is in the doc comment; can you please move it down, within the ethdev
> driver group, in alphabetically sorted order?
[Apeksha] okay.
>
> <...>
>
> > +static int
> > +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> > +{
> > + struct rte_eth_dev *dev = NULL;
> > + struct enetfec_private *fep;
> > + const char *name;
> > + int rc;
> > +
> > + name = rte_vdev_device_name(vdev);
> > + if (name == NULL)
> > + return -EINVAL;
>
> Can name be 'NULL'? Not sure if we need this check, can you please check?
[Apeksha] Yes, this check is required, as 'rte_vdev_device_name' may return NULL.
rte_vdev_device_name(const struct rte_vdev_device *dev)
{
if (dev && dev->device.name)
return dev->device.name;
return NULL;
}
>
> <...>
>
> > +static int
> > +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> > +{
> > + struct rte_eth_dev *eth_dev = NULL;
> > + int ret;
> > +
> > + /* find the ethdev entry */
> > + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> > + if (eth_dev == NULL)
> > + return -ENODEV;
> > +
> > + ret = rte_eth_dev_release_port(eth_dev);
> > + if (ret != 0)
> > + return -EINVAL;
> > +
> > + ENETFEC_PMD_INFO("Closing sw device");
>
> Log can be misleading, there is another dev_ops to close the device.
[Apeksha] Okay. Updated in v7 series.
>
> <...>
>
> > @@ -0,0 +1,179 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef __ENETFEC_ETHDEV_H__
> > +#define __ENETFEC_ETHDEV_H__
> > +
> > +#include <rte_ethdev.h>
> > +
> > +/*
> > + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> > + */
> > +#define ENETFEC_MAX_Q 3
> > +
> > +#define ETHER_ADDR_LEN 6
> > +#define BD_LEN 49152
> > +#define ENETFEC_TX_FR_SIZE 2048
> > +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> > +#define MAX_RX_BD_RING_SIZE 512
> > +
> > +/* full duplex or half duplex */
> > +#define HALF_DUPLEX 0x00
> > +#define FULL_DUPLEX 0x01
> > +#define UNKNOWN_DUPLEX 0xff
> > +
>
> Some of the defines in this header are not used at all. What about
> only adding structs/defines that are used, and adding them as they are
> needed?
> This guarantees that no unused clutter remains in the code.
[Apeksha] I agree. We will update in v8 version.
>
> <...>
>
> > +/* Required types */
> > +typedef uint8_t u8;
> > +typedef uint16_t u16;
> > +typedef uint32_t u32;
> > +typedef uint64_t u64;
> > +
>
> Do we need these type definitions, as far as I can see they are used only
> in a few places, why not just use uint##_t types?
>
> <...>
>
> > +static inline struct
> > +bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop
> *bd)
> > +{
> > + return (bdp >= bd->last) ? bd->base
> > + : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
> > +}
> > +
> > +static inline struct
> > +bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop
> *bd)
> > +{
> > + return (bdp <= bd->base) ? bd->last
> > + : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
> > +}
> > +
> > +static inline int
> > +enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
> > +{
> > + return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> > +}
> > +
> > +static inline int
> > +fls64(unsigned long word)
> > +{
> > + return (64 - __builtin_clzl(word)) - 1;
> > +}
> > +
>
> Same for these static inline functions; can you please add them when they are
> needed?
>
> > +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf
> **rx_pkts,
> > + uint16_t nb_pkts);
> > +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > + uint16_t nb_pkts);
>
> These functions are declared here but not defined, at least not in this patch.
>
> > +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> > + struct bufdesc_prop *bd);
>
> This is already a static inline function; do we need a separate declaration for it?
>
> > +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> > + struct rte_mbuf *mbuf);
> > +
>
> Ditto, no function definition.
>
> <...>
>
> > +
> > +/* DP Logs, toggled out at compile time if level lower than current level */
> > +#define ENETFEC_DP_LOG(level, fmt, args...) \
> > + RTE_LOG_DP(level, PMD, fmt, ## args)
> > +
>
> Not used at all.
>
> > +#endif /* _ENETFEC_LOGS_H_ */
> > diff --git a/drivers/net/enetfec/meson.build
> b/drivers/net/enetfec/meson.build
> > new file mode 100644
> > index 0000000000..79dca58dea
> > --- /dev/null
> > +++ b/drivers/net/enetfec/meson.build
> > @@ -0,0 +1,11 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright 2021 NXP
> > +
> > +if not is_linux
> > + build = false
> > + reason = 'only supported on linux'
> > +endif
> > +
> > +sources = files('enet_ethdev.c',
> > + 'enet_uio.c',
> > + 'enet_rxtx.c')
>
> This should cause a build error on this patch, shouldn't it? These files don't
> exist in this patch.
> Each patch should build successfully.
[Apeksha] I agree; all code cleanup comments are handled in the v7 patch series. Also, as suggested, each patch builds successfully.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v6 2/5] net/enetfec: add UIO support
2021-10-27 14:21 ` Ferruh Yigit
@ 2021-11-08 18:44 ` Apeksha Gupta
0 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-08 18:44 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, October 27, 2021 7:52 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>;
> david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru;
> ferruh.yigit@intel.com
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant
> Agrawal <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support
>
> Caution: EXT Email
>
> On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> > Implemented the fec-uio driver in kernel. enetfec PMD uses
> > UIO interface to interact with "fec-uio" driver implemented in
> > kernel for PHY initialisation and for mapping the allocated memory
> > of register & BD from kernel to DPDK which gives access to
> > non-cacheable memory for BD.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
> > +
> > +#define NUM_OF_QUEUES 6
>
> I guess this is the number of BD queues; it may be good to have that in the macro name.
[Apeksha] okay, updated in v7 patch series.
>
> > +
> > +uint32_t e_cntl;
> > +
>
> Is this global variable really needed? Most of the time what you need is a
> per-port variable.
> For example, I can see this variable is updated based on port start/stop;
> what if you have multiple ports and they are in different start/stop states,
> will the value of the variable still be correct?
>
> And if it will be global, can you please make it 'static' and prefix the
> driver namespace to it?
[Apeksha] Sure, we will update.
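A minimal sketch of the per-port alternative, assuming the control word becomes
a field of struct enetfec_private instead of a file-scope global (the field
name is illustrative):

    /* sketch: keep the ECR enable bits per port */
    struct enetfec_private {
            struct rte_eth_dev *dev;
            void *hw_baseaddr_v;
            uint32_t enetfec_e_cntl;   /* per-port copy of the enable bits */
            /* ... remaining fields as in the patch ... */
    };

    static void
    enetfec_enable(struct enetfec_private *fep)
    {
            rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR) |
                        fep->enetfec_e_cntl,
                        (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
    }

With this layout, start/stop on one port cannot affect the enable bits
remembered for another port.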
>
> > +/*
> > + * This function is called to start or restart the ENETFEC during a link
> > + * change, transmit timeout, or to reconfigure the ENETFEC. The network
> > + * packet processing for this device must be stopped before this call.
> > + */
> > +static void
> > +enetfec_restart(struct rte_eth_dev *dev)
> > +{
> > + struct enetfec_private *fep = dev->data->dev_private;
> > + uint32_t temp_mac[2];
> > + uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
> > + uint32_t ecntl = ENETFEC_ETHEREN;
> > +
> > + /* default mac address */
> > + struct rte_ether_addr addr = {
> > + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
> > + uint32_t val;
> > +
> > + /*
> > + * enet-mac reset will reset mac address registers too,
> > + * so need to reconfigure it.
> > + */
> > + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
> > + rte_write32(rte_cpu_to_be_32(temp_mac[0]),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
> > + rte_write32(rte_cpu_to_be_32(temp_mac[1]),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
> > +
>
> Is the same MAC address used for all ports?
>
> Also, probe sets a different MAC address; which one is valid?
[Apeksha] We will remove the MAC address from here in the next version. Also, the MAC address feature is yet to be implemented; at present a fixed MAC address is set from probe.
> <...>
>
> > +
> > +static int
> > +enetfec_eth_start(struct rte_eth_dev *dev)
> > +{
> > + enetfec_restart(dev);
> > +
> > + return 0;
> > +}
>
> Empty line is missing between two functions.
[Apeksha] okay.
>
> > +/* ENETFEC enable function.
> > + * @param[in] base ENETFEC base address
> > + */
> > +void
> > +enetfec_enable(void *base)
> > +{
> > + rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) | e_cntl,
> > + (uint8_t *)base + ENETFEC_ECR);
> > +}
> > +
> > +/* ENETFEC disable function.
> > + * @param[in] base ENETFEC base address
> > + */
> > +void
> > +enetfec_disable(void *base)
> > +{
> > + rte_write32(rte_read32((uint8_t *)base + ENETFEC_ECR) & ~e_cntl,
> > + (uint8_t *)base + ENETFEC_ECR);
> > +}
> > +
>
> Are these 'enetfec_enable()'/'enetfec_disable()' functions used outside of this
> file?
> If not, why not make them static and remove the declarations from the header file?
[Apeksha] I agree, as these functions are not used outside the file. Updated in the v7 patch series.
>
> > +static int
> > +enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
>
> 'dev' is used, so can drop '__rte_unused'.
[Apeksha] okay.
>
> > +{
> > + struct enetfec_private *fep = dev->data->dev_private;
> > +
> > + dev->data->dev_started = 0;
> > + enetfec_disable(fep->hw_baseaddr_v);
> > +
> > + return 0;
> > +}
>
> <...>
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v6 3/5] net/enetfec: support queue configuration
2021-10-27 14:23 ` Ferruh Yigit
@ 2021-11-08 18:45 ` Apeksha Gupta
0 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-08 18:45 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, October 27, 2021 7:53 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>; david.marchand@redhat.com;
> andrew.rybchenko@oktetlabs.ru; ferruh.yigit@intel.com
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue
> configuration
>
> Caution: EXT Email
>
> On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> > This patch adds Rx/Tx queue configuration setup operations.
> > On packet reception the respective BD Ring status bit is set
> > which is then used for packet processing.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
> >
> > +/* Supported Rx offloads */
> > +static uint64_t dev_rx_offloads_sup =
> > + DEV_RX_OFFLOAD_IPV4_CKSUM |
> > + DEV_RX_OFFLOAD_UDP_CKSUM |
> > + DEV_RX_OFFLOAD_TCP_CKSUM |
> > + DEV_RX_OFFLOAD_VLAN_STRIP |
> > + DEV_RX_OFFLOAD_CHECKSUM;
> > +
> > +static uint64_t dev_tx_offloads_sup =
> > + DEV_TX_OFFLOAD_IPV4_CKSUM |
> > + DEV_TX_OFFLOAD_UDP_CKSUM |
> > + DEV_TX_OFFLOAD_TCP_CKSUM;
> > +
>
> The macro names are updated in ethdev, can you please update them?
>
>
> Also these offloads are advertised, but some of them are not
> checked anywhere in the driver, like 'DEV_TX_OFFLOAD_*_CKSUM'.
> Are they really supported?
> If they are not supported in the datapath, please don't advertise
> them.
[Apeksha] Sure, we will update the macro names and remove all unused offloads.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support
2021-10-27 14:25 ` Ferruh Yigit
@ 2021-11-08 18:47 ` Apeksha Gupta
0 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-08 18:47 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, October 27, 2021 7:56 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>; david.marchand@redhat.com;
> andrew.rybchenko@oktetlabs.ru; ferruh.yigit@intel.com
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and
> dequeue support
>
> Caution: EXT Email
>
> On 10/21/2021 5:46 AM, Apeksha Gupta wrote:
> > This patch adds burst enqueue and dequeue operations to the enetfec
> > PMD. Loopback mode is also added, compile time flag 'ENETFEC_LOOPBACK'
> is
> > used to enable this feature. By default loopback mode is disabled.
> > Basic features added like promiscuous enable, basic stats.
> >
>
> In the patch title you can prefer "Rx/Tx support" instead of
> "enqueue and dequeue support", which is more common usage.
>
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
> > --- a/doc/guides/nics/features/enetfec.ini
> > +++ b/doc/guides/nics/features/enetfec.ini
> > @@ -4,6 +4,8 @@
> > ; Refer to default.ini for the full list of available PMD features.
> > ;
> > [Features]
> > +Basic stats = Y
> > +Promiscuous mode = Y
>
> Can you please keep the order same with default.ini file.
[Apeksha] yes, updated in v7 patch series.
>
> <...>
>
> > @@ -226,6 +264,110 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev
> *dev)
> > return 0;
> > }
> >
> > +static int
> > +enetfec_eth_close(__rte_unused struct rte_eth_dev *dev)
> > +{
>
> 'dev' is used.
>
> > + enet_free_buffers(dev);
> > + return 0;
> > +}
> > +
> > +static int
> > +enetfec_eth_link_update(struct rte_eth_dev *dev,
> > + int wait_to_complete __rte_unused)
> > +{
> > + struct rte_eth_link link;
> > + unsigned int lstatus = 1;
> > +
> > + if (dev == NULL) {
> > + ENETFEC_PMD_ERR("Invalid device in link_update.\n");
>
> Duplicated '\n'.
>
> > + return 0;
> > + }
> > +
> > + memset(&link, 0, sizeof(struct rte_eth_link));
> > +
> > + link.link_status = lstatus;
> > + link.link_speed = ETH_SPEED_NUM_1G;
> > +
> > + ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
> > + "Up");
> > +
> > + return rte_eth_linkstatus_set(dev, &link);
> > +}
> > +
> > +static int
> > +enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
>
> 'dev' is used.
>
> <...>
>
> > +static int
> > +enetfec_stats_get(struct rte_eth_dev *dev,
> > + struct rte_eth_stats *stats)
> > +{
> > + struct enetfec_private *fep = dev->data->dev_private;
> > + struct rte_eth_stats *eth_stats = &fep->stats;
> > +
> > + if (stats == NULL)
> > + return -1;
>
> No need to check this, ethdev layer already does.
>
> > +
> > + memset(stats, 0, sizeof(struct rte_eth_stats));
> > +
>
> Same here, ethdev does this.
>
> <...>
>
> > +
> > + /*
> > + * Set default mac address
> > + */
> > + macaddr.addr_bytes[0] = 1;
> > + macaddr.addr_bytes[1] = 1;
> > + macaddr.addr_bytes[2] = 1;
> > + macaddr.addr_bytes[3] = 1;
> > + macaddr.addr_bytes[4] = 1;
> > + macaddr.addr_bytes[5] = 1;
>
> if it is fixed, you can set the addr while declaring the variable:
> struct rte_ether_addr macaddr = {
> .addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
> };
[Apeksha] As suggested, all the above comments are handled in the v7 patch series.
>
>
> <...>
>
> > index 0000000000..445fa97e77
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_rxtx.c
> > @@ -0,0 +1,445 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2021 NXP
> > + */
> > +
> > +#include <signal.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_io.h>
> > +#include "enet_regs.h"
> > +#include "enet_ethdev.h"
> > +#include "enet_pmd_logs.h"
> > +
> > +#define ENETFEC_LOOPBACK 0
> > +#define ENETFEC_DUMP 0
> > +
>
> Instead of compile time flags, why not convert them to devargs so
> they can be updated without recompile?
>
> This also make sure all code is enabled and prevent possible dead
> code by time.
[Apeksha] These are added for debug purposes only; we didn't want the per-packet check. We are removing this functionality.
As suggested, we will implement it later and add it in the proper format.
>
> <...>
>
> > +
> > +#if ENETFEC_LOOPBACK
> > +static volatile bool lb_quit;
> > +
> > +static void fec_signal_handler(int signum)
> > +{
> > + if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
> > + printf("\n\n %s: Signal %d received, preparing to exit...\n",
> > + __func__, signum);
> > + lb_quit = true;
> > + }
> > +}
>
> Not sure if handling signals in the driver is a good idea; this is more an
> application-level decision. Please remember that DPDK is a library and this
> PMD is one of many PMDs in the library.
>
> Also, please don't use 'printf' directly.
>
> > +
> > +static void
> > +enetfec_lb_rxtx(void *rxq1)
> > +{
> > + struct rte_mempool *pool;
> > + struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
> > + struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
> > + unsigned short status;
> > + unsigned short pkt_len = 0;
> > + int index_r = 0, index_t = 0;
> > + u8 *data;
> > + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> > + struct rte_eth_stats *stats = &rxq->fep->stats;
> > + unsigned int i;
> > + struct enetfec_private *fep;
> > + struct enetfec_priv_tx_q *txq;
> > + fep = rxq->fep->dev->data->dev_private;
> > + txq = fep->tx_queues[0];
> > +
> > + pool = rxq->pool;
> > + rx_bdp = rxq->bd.cur;
> > + tx_bdp = txq->bd.cur;
> > +
> > + signal(SIGTSTP, fec_signal_handler);
> > + while (!lb_quit) {
> > +chk_again:
> > + status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
> > + if (status & RX_BD_EMPTY) {
> > + if (!lb_quit)
> > + goto chk_again;
> > + rxq->bd.cur = rx_bdp;
> > + txq->bd.cur = tx_bdp;
> > + return;
> > + }
> > +
> > + /* Check for errors. */
> > + status ^= RX_BD_LAST;
> > + if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
> > + RX_BD_CR | RX_BD_OV | RX_BD_LAST |
> > + RX_BD_TR)) {
> > + stats->ierrors++;
> > + if (status & RX_BD_OV) {
> > + /* FIFO overrun */
> > + ENETFEC_PMD_ERR("rx_fifo_error\n");
> > + goto rx_processing_done;
> > + }
> > + if (status & (RX_BD_LG | RX_BD_SH
> > + | RX_BD_LAST)) {
> > + /* Frame too long or too short. */
> > + ENETFEC_PMD_ERR("rx_length_error\n");
> > + if (status & RX_BD_LAST)
> > + ENETFEC_PMD_ERR("rcv is not +last\n");
>
> Duplicated '\n'; but more importantly this is the datapath, are you sure you
> want to use debug logs in the datapath?
>
> 'ENETFEC_DP_LOG' should be the one to use in the datapath, since it is
> optimized out based on its default level, so it does not impact the datapath.
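A minimal sketch of the datapath logging, assuming the driver's existing
ENETFEC_DP_LOG wrapper (which expands to RTE_LOG_DP and is compiled out below
RTE_LOG_DP_LEVEL):

    /* sketch: Rx error path without per-packet PMD_ERR logging */
    if (status & RX_BD_OV) {
            /* FIFO overrun */
            stats->ierrors++;
            ENETFEC_DP_LOG(DEBUG, "rx_fifo_error\n");
            goto rx_processing_done;
    }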
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v7 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-04 18:24 ` Ferruh Yigit
@ 2021-11-08 19:13 ` Apeksha Gupta
0 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-08 19:13 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, November 4, 2021 11:54 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>; david.marchand@redhat.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [PATCH v7 1/5] net/enetfec: introduce NXP ENETFEC driver
>
> Caution: EXT Email
>
> On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> > ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> > for NXP SoC i.MX 8M Mini.
> >
> > This patch adds skeleton for enetfec driver with probe function.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> > ---
> > v7:
> > - Fix compilation
> > - code cleanup
> >
> > v6:
> > - Fix document build errors
> > ---
> > MAINTAINERS | 7 ++
> > doc/guides/nics/enetfec.rst | 131 +++++++++++++++++++++++++
> > doc/guides/nics/features/enetfec.ini | 9 ++
> > doc/guides/nics/index.rst | 1 +
> > doc/guides/rel_notes/release_21_11.rst | 6 +-
> > drivers/net/enetfec/enet_ethdev.c | 85 ++++++++++++++++
> > drivers/net/enetfec/enet_ethdev.h | 58 +++++++++++
> > drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
> > drivers/net/enetfec/meson.build | 9 ++
> > drivers/net/enetfec/version.map | 3 +
> > drivers/net/meson.build | 2 +-
> > 11 files changed, 340 insertions(+), 2 deletions(-)
> > create mode 100644 doc/guides/nics/enetfec.rst
> > create mode 100644 doc/guides/nics/features/enetfec.ini
> > create mode 100644 drivers/net/enetfec/enet_ethdev.c
> > create mode 100644 drivers/net/enetfec/enet_ethdev.h
> > create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> > create mode 100644 drivers/net/enetfec/meson.build
> > create mode 100644 drivers/net/enetfec/version.map
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 0e5951f8f1..d000eb81af 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -882,6 +882,13 @@ F: drivers/net/enetc/
> > F: doc/guides/nics/enetc.rst
> > F: doc/guides/nics/features/enetc.ini
> >
> > +NXP enetfec
> > +M: Apeksha Gupta <apeksha.gupta@nxp.com>
> > +M: Sachin Saxena <sachin.saxena@nxp.com>
> > +F: drivers/net/enetfec/
> > +F: doc/guides/nics/enetfec.rst
> > +F: doc/guides/nics/features/enetfec.ini
> > +
> > NXP pfe
> > M: Gagandeep Singh <g.singh@nxp.com>
> > F: doc/guides/nics/pfe.rst
> > diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> > new file mode 100644
> > index 0000000000..dfcd032098
> > --- /dev/null
> > +++ b/doc/guides/nics/enetfec.rst
> > @@ -0,0 +1,131 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > + Copyright 2021 NXP
> > +
> > +ENETFEC Poll Mode Driver
> > +========================
> > +
> > +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> > +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
> > +
> > +More information can be found at NXP Official Website
> > +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> > +
> > +ENETFEC
> > +-------
> > +
> > +This section provides an overview of the NXP ENETFEC and how it is
> > +integrated into the DPDK.
> > +
> > +Contents summary
> > +
> > +- ENETFEC overview
> > +- ENETFEC features
> > +- Supported ENETFEC SoCs
> > +- Prerequisites
> > +- Driver compilation and testing
> > +- Limitations
> > +
> > +ENETFEC Overview
> > +~~~~~~~~~~~~~~~~
> > +The i.MX 8M Mini Media Applications Processor is built to achieve both
> > +high performance and low power consumption. ENETFEC PMD is a hardware
> > +programmable packet forwarding engine to provide high performance
> > +Ethernet interface. It has only 1 GB Ethernet interface with RJ45
> > +connector.
> > +
> > +The diagram below shows a system level overview of ENETFEC:
> > +
> > + .. code-block:: console
> > +
> > + =====================================================
> > + Userspace
> > + +-----------------------------------------+
> > + | ENETFEC Driver |
> > + | +-------------------------+ |
> > + | | virtual ethernet device | |
> > + +-----------------------------------------+
> > + ^ |
> > + | |
> > + | |
> > + RXQ | | TXQ
> > + | |
> > + | v
> > + =====================================================
> > + Kernel Space
> > + +---------+
> > + | fec-uio |
> > + ====================+=========+======================
> > + Hardware
> > + +-----------------------------------------+
> > + | i.MX 8M MINI EVK |
> > + | +-----+ |
> > + | | MAC | |
> > + +---------------+-----+-------------------+
> > + | PHY |
> > + +-----+
> > +
> > +ENETFEC Ethernet driver is traditional DPDK PMD driver running in the
> > +userspace.'fec-uio' is the kernel driver. The MAC and PHY are the hardware
> > +blocks. ENETFEC PMD uses standard UIO interface to access kernel for PHY
> > +initialisation and for mapping the allocated memory of register & buffer
> > +descriptor with DPDK which gives access to non-cacheable memory for buffer
> > +descriptor. net_enetfec is logical Ethernet interface, created by ENETFEC
> > +driver.
> > +
> > +- ENETFEC driver registers the device in virtual device driver.
> > +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> > +- The probe function will set the basic device registers and also setups BD
> rings.
> > +- On packet Rx the respective BD Ring status bit is set which is then used for
> > + packet processing.
> > +- Then Tx is done first followed by Rx via logical interfaces.
> > +
> > +ENETFEC Features
> > +~~~~~~~~~~~~~~~~~
> > +
> > +- Linux
> > +- ARMv8
> > +
> > +Supported ENETFEC SoCs
> > +~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +- i.MX 8M Mini
> > +
> > +Prerequisites
> > +~~~~~~~~~~~~~
> > +
> > +There are three main pre-requisites for executing ENETFEC PMD on a i.MX
> 8M Mini
> > +compatible board:
> > +
> > +1. **ARM 64 Tool Chain**
> > +
> > + For example, the `*aarch64* Linaro Toolchain
> > +   <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> > +
> > +2. **Linux Kernel**
> > +
> > + It can be obtained from `NXP's Github hosting
> > +   <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
> > +
> > + .. note::
> > +
> > + Branch is 'lf-5.10.y'
> > +
> > +3. **Rootfile system**
> > +
> > + Any *aarch64* supporting filesystem can be used. For example,
> > +   Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland which can be obtained
> > +   from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> > +
> > +4. The Ethernet device will be registered as virtual device, so ENETFEC has
> dependency on
> > + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value
> `net_enetfec` to
> > + run DPDK application.
> > +
> > +Driver compilation and testing
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Follow instructions available in the document
> > +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> > +to launch **dpdk-testpmd**
> > +
> > +Limitations
> > +~~~~~~~~~~~
> > +
> > +- Multi queue is not supported.
>
> The question on the above limitation went unanswered in the previous version.
> Instead of not replying and sending a new version, it would be better if you
> could answer the questions so we have a common understanding.
>
> Back to the question,
>
> in 'enetfec_eth_info()', 'max_rx_queues'/'max_tx_queues' set to
> 'ENETFEC_MAX_Q'
> and
> #define ENETFEC_MAX_Q 3
>
> Also file comment says:
> * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
>
> The device reports and documents that 3 queues are supported, but there is a
> documented limitation saying multi queue is not supported; which one is
> correct?
[Apeksha] I apologize. In the v7 series, we fixed the compilation issues and did some code cleanup.
Multi-queue is not supported; we will update the related code in the next version.
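A minimal sketch of the agreed change, assuming the driver advertises a single
queue pair until multi-queue is really wired up:

    /* enet_ethdev.h: only one Rx/Tx queue is supported for now */
    #define ENETFEC_MAX_Q   1

enetfec_eth_info() then reports max_rx_queues = max_tx_queues = 1, matching
the documented limitation.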
>
> > diff --git a/doc/guides/nics/features/enetfec.ini
> b/doc/guides/nics/features/enetfec.ini
> > new file mode 100644
> > index 0000000000..bdfbdbd9d4
> > --- /dev/null
> > +++ b/doc/guides/nics/features/enetfec.ini
> > @@ -0,0 +1,9 @@
> > +;
> > +; Supported features of the 'enetfec' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +Linux = Y
> > +ARMv8 = Y
> > +Usage doc = Y
> > diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> > index 784d5d39f6..777fdab4a0 100644
> > --- a/doc/guides/nics/index.rst
> > +++ b/doc/guides/nics/index.rst
> > @@ -26,6 +26,7 @@ Network Interface Controller Drivers
> > e1000em
> > ena
> > enetc
> > + enetfec
> > enic
> > fm10k
> > hinic
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> > index 502cc5ceb2..aed380c21f 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -20,7 +20,6 @@ DPDK Release 21.11
> > ninja -C build doc
> > xdg-open build/doc/guides/html/rel_notes/release_21_11.html
> >
> > -
>
> unrelated change.
>
> > New Features
> > ------------
> >
> > @@ -135,6 +134,11 @@ New Features
> >
> > Added an ethdev API which can help users get device configuration.
> >
> > +* **Added NXP ENETFEC PMD.**
> > +
> > + Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
> > + :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
> > +
>
> PMDs are ordered by vendor name; can you please move this block below the
> 'Mellanox' PMD update?
[Apeksha] Okay, we will update it in the next version.
>
> > * **Updated AF_XDP PMD.**
> >
> > * Disabled secondary process support.
> > diff --git a/drivers/net/enetfec/enet_ethdev.c
> b/drivers/net/enetfec/enet_ethdev.c
> > new file mode 100644
> > index 0000000000..a6c4bcbf2e
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_ethdev.c
> > @@ -0,0 +1,85 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#include <stdio.h>
> > +#include <fcntl.h>
> > +#include <stdlib.h>
> > +#include <unistd.h>
> > +#include <errno.h>
> > +#include <sys/mman.h>
> > +#include <rte_kvargs.h>
> > +#include <ethdev_vdev.h>
> > +#include <rte_bus_vdev.h>
> > +#include <rte_dev.h>
> > +#include <rte_ether.h>
> > +#include "enet_pmd_logs.h"
> > +#include "enet_ethdev.h"
> > +
> > +#define ENETFEC_NAME_PMD net_enetfec
> > +#define ENETFEC_CDEV_INVALID_FD -1
>
> Is this macro used at all?
[Apeksha] No, we will remove it.
>
> > +
> > +static int
> > +enetfec_eth_init(struct rte_eth_dev *dev)
> > +{
> > + rte_eth_dev_probing_finish(dev);
> > + return 0;
> > +}
> > +
> > +static int
> > +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> > +{
> > + struct rte_eth_dev *dev = NULL;
> > + struct enetfec_private *fep;
> > + const char *name;
> > + int rc;
> > +
> > + name = rte_vdev_device_name(vdev);
> > + if (name == NULL)
> > + return -EINVAL;
>
> In this function 'name' shouldn't be NULL; I think you can drop
> the check.
> But can you please double check to be sure?
[Apeksha] It is required; the API may return NULL.
>
> > + ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> > +
> > + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> > + if (dev == NULL)
> > + return -ENOMEM;
> > +
> > + /* setup board info structure */
> > + fep = dev->data->dev_private;
> > + fep->dev = dev;
> > + rc = enetfec_eth_init(dev);
> > + if (rc)
> > + goto failed_init;
> > +
> > + return 0;
> > +
> > +failed_init:
> > + ENETFEC_PMD_ERR("Failed to init");
> > + return rc;
> > +}
> > +
> > +static int
> > +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> > +{
> > + struct rte_eth_dev *eth_dev = NULL;
> > + int ret;
> > +
> > + /* find the ethdev entry */
> > + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> > + if (eth_dev == NULL)
> > + return -ENODEV;
> > +
> > + ret = rte_eth_dev_release_port(eth_dev);
> > + if (ret != 0)
> > + return -EINVAL;
> > +
> > + ENETFEC_PMD_INFO("Release enetfec sw device");
> > + return 0;
> > +}
> > +
> > +static struct rte_vdev_driver pmd_enetfec_drv = {
> > + .probe = pmd_enetfec_probe,
> > + .remove = pmd_enetfec_remove,
> > +};
> > +
> > +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> > +RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
> > diff --git a/drivers/net/enetfec/enet_ethdev.h
> b/drivers/net/enetfec/enet_ethdev.h
> > new file mode 100644
> > index 0000000000..0e4558dd86
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_ethdev.h
> > @@ -0,0 +1,58 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef __ENETFEC_ETHDEV_H__
> > +#define __ENETFEC_ETHDEV_H__
> > +
> > +/*
> > + * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> > + */
> > +
> > +#define ENETFEC_MAX_Q 3
> > +
> > +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
> > + * descriptor base is x_bd_base. Currently available buffer are x_cur
> > + * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
> > + * that is sent by the controller.
> > + * The tx_cur and dirty_tx are same in completely full and empty
> > + * conditions. Actual condition is determined by empty & ready bits.
>
> is above comment correct?
>
> Where are mentioned 'x_bd_base', 'rx_cur', 'tx_cur', 'dirty_tx', etc...
>
> > + */
> > +struct enetfec_private {
> > + struct rte_eth_dev *dev;
> > + struct rte_eth_stats stats;
> > + struct rte_mempool *pool;
> > + uint16_t max_rx_queues;
> > + uint16_t max_tx_queues;
> > + unsigned int total_tx_ring_size;
> > + unsigned int total_rx_ring_size;
> > + bool bufdesc_ex;
> > + unsigned int tx_align;
> > + unsigned int rx_align;
> > + int full_duplex;
> > + unsigned int phy_speed;
> > + uint32_t quirks;
> > + int flag_csum;
> > + int flag_pause;
> > + int flag_wol;
> > + bool rgmii_txc_delay;
> > + bool rgmii_rxc_delay;
> > + int link;
> > + void *hw_baseaddr_v;
> > + uint64_t hw_baseaddr_p;
> > + void *bd_addr_v;
> > + uint64_t bd_addr_p;
> > + uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
> > + uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
> > + void *dma_baseaddr_r[ENETFEC_MAX_Q];
> > + void *dma_baseaddr_t[ENETFEC_MAX_Q];
> > + uint64_t cbus_size;
> > + unsigned int reg_size;
> > + unsigned int bd_size;
> > + int hw_ts_rx_en;
> > + int hw_ts_tx_en;
> > + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
> > + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
>
> Most of these fields are not used at all; why not construct the struct
> by adding the fields as you use them? This prevents clutter from remaining,
> like 'flag_wol' above; is it used at all in the driver?
[Apeksha] I agree, we will update in the next version.
>
> > +};
> > +
> > +#endif /*__ENETFEC_ETHDEV_H__*/
> > diff --git a/drivers/net/enetfec/enet_pmd_logs.h
> b/drivers/net/enetfec/enet_pmd_logs.h
> > new file mode 100644
> > index 0000000000..e7b3964a0e
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_pmd_logs.h
> > @@ -0,0 +1,31 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2020-2021 NXP
> > + */
> > +
> > +#ifndef _ENETFEC_LOGS_H_
> > +#define _ENETFEC_LOGS_H_
> > +
> > +extern int enetfec_logtype_pmd;
> > +
> > +/* PMD related logs */
> > +#define ENETFEC_PMD_LOG(level, fmt, args...) \
> > + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
> > + fmt "\n", __func__, ##args)
> > +
> > +#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
> > +
> > +#define ENETFEC_PMD_DEBUG(fmt, args...) \
> > + ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
> > +#define ENETFEC_PMD_ERR(fmt, args...) \
> > + ENETFEC_PMD_LOG(ERR, fmt, ## args)
> > +#define ENETFEC_PMD_INFO(fmt, args...) \
> > + ENETFEC_PMD_LOG(INFO, fmt, ## args)
> > +
> > +#define ENETFEC_PMD_WARN(fmt, args...) \
> > + ENETFEC_PMD_LOG(WARNING, fmt, ## args)
> > +
> > +/* DP Logs, toggled out at compile time if level lower than current level */
> > +#define ENETFEC_DP_LOG(level, fmt, args...) \
> > + RTE_LOG_DP(level, PMD, fmt, ## args)
> > +
> > +#endif /* _ENETFEC_LOGS_H_ */
> > diff --git a/drivers/net/enetfec/meson.build
> b/drivers/net/enetfec/meson.build
> > new file mode 100644
> > index 0000000000..6d6c64c94b
> > --- /dev/null
> > +++ b/drivers/net/enetfec/meson.build
> > @@ -0,0 +1,9 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright 2021 NXP
> > +
> > +if not is_linux
> > + build = false
> > + reason = 'only supported on linux'
> > +endif
> > +
> > +sources = files('enet_ethdev.c')
> > diff --git a/drivers/net/enetfec/version.map
> b/drivers/net/enetfec/version.map
> > new file mode 100644
> > index 0000000000..b66517b171
> > --- /dev/null
> > +++ b/drivers/net/enetfec/version.map
> > @@ -0,0 +1,3 @@
> > +DPDK_22 {
> > + local: *;
> > +};
> > diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> > index bcf488f203..ac294d8507 100644
> > --- a/drivers/net/meson.build
> > +++ b/drivers/net/meson.build
> > @@ -12,13 +12,13 @@ drivers = [
> > 'bnx2x',
> > 'bnxt',
> > 'bonding',
> > - 'cnxk',
>
> Better to not remove competitor drivers ;)
[Apeksha] I apologize; it was removed by mistake.
>
> > 'cxgbe',
> > 'dpaa',
> > 'dpaa2',
> > 'e1000',
> > 'ena',
> > 'enetc',
> > + 'enetfec',
> > 'enic',
> > 'failsafe',
> > 'fm10k',
> >
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v7 2/5] net/enetfec: add UIO support
2021-11-04 18:25 ` Ferruh Yigit
@ 2021-11-08 20:24 ` Apeksha Gupta
2021-11-08 21:51 ` Ferruh Yigit
0 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-08 20:24 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, November 4, 2021 11:56 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>; david.marchand@redhat.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [PATCH v7 2/5] net/enetfec: add UIO support
>
> Caution: EXT Email
>
> On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> > Implemented the fec-uio driver in kernel. enetfec PMD uses
> > UIO interface to interact with "fec-uio" driver implemented in
> > kernel for PHY initialisation and for mapping the allocated memory
> > of register & BD from kernel to DPDK which gives access to
> > non-cacheable memory for BD.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> > ---
> > drivers/net/enetfec/enet_ethdev.c | 227 ++++++++++++++++++++++++
> > drivers/net/enetfec/enet_ethdev.h | 14 ++
> > drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
> > drivers/net/enetfec/enet_uio.c | 278 ++++++++++++++++++++++++++++++
> > drivers/net/enetfec/enet_uio.h | 64 +++++++
> > drivers/net/enetfec/meson.build | 3 +-
> > 6 files changed, 691 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/net/enetfec/enet_regs.h
> > create mode 100644 drivers/net/enetfec/enet_uio.c
> > create mode 100644 drivers/net/enetfec/enet_uio.h
> >
> > diff --git a/drivers/net/enetfec/enet_ethdev.c
> b/drivers/net/enetfec/enet_ethdev.c
> > index a6c4bcbf2e..410c395039 100644
> > --- a/drivers/net/enetfec/enet_ethdev.c
> > +++ b/drivers/net/enetfec/enet_ethdev.c
> > @@ -13,16 +13,212 @@
> > #include <rte_bus_vdev.h>
> > #include <rte_dev.h>
> > #include <rte_ether.h>
> > +#include <rte_io.h>
> > #include "enet_pmd_logs.h"
> > #include "enet_ethdev.h"
> > +#include "enet_regs.h"
> > +#include "enet_uio.h"
> >
> > #define ENETFEC_NAME_PMD net_enetfec
> > #define ENETFEC_CDEV_INVALID_FD -1
> > +#define BIT(nr) (1u << (nr))
>
> We already have 'RTE_BIT32' macro, it can be used instead of defining
> a new macro.
[Apeksha] Sure, we will use the RTE_BIT32 macro.
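For reference, the change is mechanical; a one-flag sketch of what the v8
revision below does (RTE_BIT32 is the existing DPDK helper from rte_bitops.h):

	/* before: driver-local macro */
	#define ENETFEC_RACC_IPDIS	BIT(1)

	/* after: reuse the DPDK-provided macro */
	#define ENETFEC_RACC_IPDIS	RTE_BIT32(1)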
>
> > +
> > +/* FEC receive acceleration */
> > +#define ENETFEC_RACC_IPDIS BIT(1)
> > +#define ENETFEC_RACC_PRODIS BIT(2)
> > +#define ENETFEC_RACC_SHIFT16 BIT(7)
> > +#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
> > + ENETFEC_RACC_PRODIS)
> > +
> > +#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
> > +#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
> > +
> > +/* Pause frame field and FIFO threshold */
> > +#define ENETFEC_FCE BIT(5)
> > +#define ENETFEC_RSEM_V 0x84
> > +#define ENETFEC_RSFL_V 16
> > +#define ENETFEC_RAEM_V 0x8
> > +#define ENETFEC_RAFL_V 0x8
> > +#define ENETFEC_OPD_V 0xFFF0
> > +
> > +#define NUM_OF_BD_QUEUES 6
> > +
> > +static uint32_t enetfec_e_cntl;
> > +
>
> Again, question on the usage of this global variable in previous version
> is not answered, let me copy/paste here:
>
>
> Is this global variable really needed? Most of the time what you need is
> a per-port variable.
> For example, I can see this variable is updated based on port start/stop;
> what if you have multiple ports and they are in different start/stop states,
> will the value of the variable still be correct?
[Apeksha] This driver is implemented for the i.MX8MM board, which has only one
port, so it is implemented accordingly. We will revisit this when multiple ports
are supported.
>
> > +/*
> > + * This function is called to start or restart the ENETFEC during a link
> > + * change, transmit timeout, or to reconfigure the ENETFEC. The network
> > + * packet processing for this device must be stopped before this call.
> > + */
> > +static void
> > +enetfec_restart(struct rte_eth_dev *dev)
> > +{
> > + struct enetfec_private *fep = dev->data->dev_private;
> > + uint32_t temp_mac[2];
> > + uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
> > + uint32_t ecntl = ENETFEC_ETHEREN;
> > +
> > + /* default mac address */
> > + struct rte_ether_addr addr = {
> > + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
>
> This MAC address is set on the device, but the device data, 'dev->data->mac_addrs',
> doesn't have it.
> - Is the MAC set required on each restart?
> - What about saving the MAC address to 'dev->data->mac_addrs' and using that
> value in restart? In that case the MAC value in the device data and the actual
> device config match.
[Apeksha] Setting the MAC is not required on each restart, so we will remove this
unwanted code. The MAC address will be set only from the probe function.
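For what it's worth, a minimal sketch of the probe-time handling being discussed,
assuming rte_ether.h and rte_malloc.h are included; 'enetfec_default_mac' is only
an illustrative name and the bytes are the ones from the v7 hunk above:

	/* sketch: set the MAC once at probe time and keep it in device data */
	static const struct rte_ether_addr enetfec_default_mac = {
		.addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6}
	};

	/* in the probe/init path: */
	dev->data->mac_addrs = rte_zmalloc("enetfec-mac", RTE_ETHER_ADDR_LEN, 0);
	if (dev->data->mac_addrs == NULL)
		return -ENOMEM;
	rte_ether_addr_copy(&enetfec_default_mac, &dev->data->mac_addrs[0]);

	/* enetfec_restart() can then program ENETFEC_PALR/PAUR from
	 * dev->data->mac_addrs[0] instead of a local constant.
	 */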
>
> > + uint32_t val;
> > +
> > + /*
> > + * enet-mac reset will reset mac address registers too,
> > + * so need to reconfigure it.
> > + */
> > + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
> > + rte_write32(rte_cpu_to_be_32(temp_mac[0]),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
> > + rte_write32(rte_cpu_to_be_32(temp_mac[1]),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
> > +
> > + /* Clear any outstanding interrupt. */
> > + writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
> > +
> > + /* Enable MII mode */
> > + if (fep->full_duplex == FULL_DUPLEX) {
> > + /* FD enable */
> > + rte_write32(rte_cpu_to_le_32(0x04),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
> > + } else {
> > + /* No Rcv on Xmit */
> > + rcntl |= 0x02;
> > + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
> > + }
> > +
> > + if (fep->quirks & QUIRK_RACC) {
> > + val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
> > + /* align IP header */
> > + val |= ENETFEC_RACC_SHIFT16;
> > + val &= ~ENETFEC_RACC_OPTIONS;
> > + rte_write32(rte_cpu_to_le_32(val),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
> > + rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
> > + }
> > +
> > + /*
> > + * The phy interface and speed need to get configured
> > + * differently on enet-mac.
> > + */
> > + if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
> > + /* Enable flow control and length check */
> > + rcntl |= 0x40000000 | 0x00000020;
> > +
> > + /* RGMII, RMII or MII */
> > + rcntl |= BIT(6);
> > + ecntl |= BIT(5);
> > + }
> > +
> > + /* enable pause frame*/
> > + if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
> > + ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
> > + /*&& ndev->phydev && ndev->phydev->pause*/)) {
> > + rcntl |= ENETFEC_FCE;
> > +
> > + /* set FIFO threshold parameter to reduce overrun */
> > + rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
> > + rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
> > + rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
> > + rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
> > +
> > + /* OPD */
> > + rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
> > + } else {
> > + rcntl &= ~ENETFEC_FCE;
> > + }
> > +
> > + rte_write32(rte_cpu_to_le_32(rcntl),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
> > +
> > + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
> > + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
> > +
> > + if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
> > + /* enable ENETFEC endian swap */
> > + ecntl |= (1 << 8);
> > + /* enable ENETFEC store and forward mode */
> > + rte_write32(rte_cpu_to_le_32(1 << 8),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
> > + }
> > + if (fep->bufdesc_ex)
> > + ecntl |= (1 << 4);
> > + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> > + fep->rgmii_txc_delay)
> > + ecntl |= ENETFEC_TXC_DLY;
> > + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> > + fep->rgmii_rxc_delay)
> > + ecntl |= ENETFEC_RXC_DLY;
> > + /* Enable the MIB statistic event counters */
> > + rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
> > +
> > + ecntl |= 0x70000000;
> > + enetfec_e_cntl = ecntl;
> > + /* And last, enable the transmit and receive processing */
> > + rte_write32(rte_cpu_to_le_32(ecntl),
> > + (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
> > + rte_delay_us(10);
> > +}
> > +
> > +static int
> > +enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
>
> unnecessary '__rte_unused'
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v7 2/5] net/enetfec: add UIO support
2021-11-08 20:24 ` [dpdk-dev] [EXT] " Apeksha Gupta
@ 2021-11-08 21:51 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-08 21:51 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
On 11/8/2021 8:24 PM, Apeksha Gupta wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Thursday, November 4, 2021 11:56 PM
>> To: Apeksha Gupta <apeksha.gupta@nxp.com>; david.marchand@redhat.com;
>> andrew.rybchenko@oktetlabs.ru
>> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
>> <hemant.agrawal@nxp.com>
>> Subject: [EXT] Re: [PATCH v7 2/5] net/enetfec: add UIO support
>>
>> Caution: EXT Email
>>
>> On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
>>> Implemented the fec-uio driver in kernel. enetfec PMD uses
>>> UIO interface to interact with "fec-uio" driver implemented in
>>> kernel for PHY initialisation and for mapping the allocated memory
>>> of register & BD from kernel to DPDK which gives access to
>>> non-cacheable memory for BD.
>>>
>>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>>> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>>> ---
>>> drivers/net/enetfec/enet_ethdev.c | 227 ++++++++++++++++++++++++
>>> drivers/net/enetfec/enet_ethdev.h | 14 ++
>>> drivers/net/enetfec/enet_regs.h | 106 ++++++++++++
>>> drivers/net/enetfec/enet_uio.c | 278 ++++++++++++++++++++++++++++++
>>> drivers/net/enetfec/enet_uio.h | 64 +++++++
>>> drivers/net/enetfec/meson.build | 3 +-
>>> 6 files changed, 691 insertions(+), 1 deletion(-)
>>> create mode 100644 drivers/net/enetfec/enet_regs.h
>>> create mode 100644 drivers/net/enetfec/enet_uio.c
>>> create mode 100644 drivers/net/enetfec/enet_uio.h
>>>
>>> diff --git a/drivers/net/enetfec/enet_ethdev.c
>> b/drivers/net/enetfec/enet_ethdev.c
>>> index a6c4bcbf2e..410c395039 100644
>>> --- a/drivers/net/enetfec/enet_ethdev.c
>>> +++ b/drivers/net/enetfec/enet_ethdev.c
>>> @@ -13,16 +13,212 @@
>>> #include <rte_bus_vdev.h>
>>> #include <rte_dev.h>
>>> #include <rte_ether.h>
>>> +#include <rte_io.h>
>>> #include "enet_pmd_logs.h"
>>> #include "enet_ethdev.h"
>>> +#include "enet_regs.h"
>>> +#include "enet_uio.h"
>>>
>>> #define ENETFEC_NAME_PMD net_enetfec
>>> #define ENETFEC_CDEV_INVALID_FD -1
>>> +#define BIT(nr) (1u << (nr))
>>
>> We already have 'RTE_BIT32' macro, it can be used instead of defining
>> a new macro.
> [Apeksha] Sure, we will use the RTE_BIT32 macro.
>
>>
>>> +
>>> +/* FEC receive acceleration */
>>> +#define ENETFEC_RACC_IPDIS BIT(1)
>>> +#define ENETFEC_RACC_PRODIS BIT(2)
>>> +#define ENETFEC_RACC_SHIFT16 BIT(7)
>>> +#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
>>> + ENETFEC_RACC_PRODIS)
>>> +
>>> +#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
>>> +#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
>>> +
>>> +/* Pause frame field and FIFO threshold */
>>> +#define ENETFEC_FCE BIT(5)
>>> +#define ENETFEC_RSEM_V 0x84
>>> +#define ENETFEC_RSFL_V 16
>>> +#define ENETFEC_RAEM_V 0x8
>>> +#define ENETFEC_RAFL_V 0x8
>>> +#define ENETFEC_OPD_V 0xFFF0
>>> +
>>> +#define NUM_OF_BD_QUEUES 6
>>> +
>>> +static uint32_t enetfec_e_cntl;
>>> +
>>
>> Again, question on the usage of this global variable in previous version
>> is not answered, let me copy/paste here:
>>
>>
>> Is this global variable really needed? Most of the time what you need is
>> a per-port variable.
>> For example, I can see this variable is updated based on port start/stop;
>> what if you have multiple ports and they are in different start/stop states,
>> will the value of the variable still be correct?
> [Apeksha] This driver is implemented for the i.MX8MM board, which has only one
> port, so it is implemented accordingly. We will revisit this when multiple ports
> are supported.
>
If only a single port is supported, isn't it even easier to have this variable
as device-specific data instead of a global variable? You can have it as part of
'struct enetfec_private'.
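A minimal sketch of that suggestion, which is what the v8 series below ends up
doing; 'enetfec_cache_ecr' is only an illustrative helper name:

	#include <stdint.h>

	/* sketch: keep the cached ECR value in the per-device private data so
	 * that, if more ports ever appear, each one carries its own state.
	 */
	struct enetfec_private {
		uint32_t enetfec_e_cntl;	/* cached ENETFEC_ECR value */
		/* ... remaining fields as in enet_ethdev.h ... */
	};

	static inline void
	enetfec_cache_ecr(struct enetfec_private *fep, uint32_t ecntl)
	{
		fep->enetfec_e_cntl = ecntl;	/* was: a file-scope global */
	}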
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v8 0/5] drivers/net: add NXP ENETFEC driver
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-03 23:27 ` Ferruh Yigit
2021-11-04 18:24 ` Ferruh Yigit
@ 2021-11-09 11:34 ` Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 1/5] net/enetfec: introduce " Apeksha Gupta
` (4 more replies)
2 siblings, 5 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 11:34 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC (Fast Ethernet
Controller) is a network poll mode driver for the inbuilt NIC found in
the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 designs the UIO interface so that user space can communicate directly
with a UIO-based hardware device. The UIO interface mmaps the Control and
Status Registers (CSR) and BD memory, allocated in the kernel, into DPDK,
which gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support. It also adds some basic features
such as promiscuous mode and basic stats.
Patch 5 adds checksum and VLAN features.
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add Rx/Tx support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 137 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 706 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 155 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 273 ++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 11 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 1808 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v8 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
@ 2021-11-09 11:34 ` Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 2/5] net/enetfec: add UIO support Apeksha Gupta
` (3 subsequent siblings)
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 11:34 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton for the enetfec driver with a probe function.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
MAINTAINERS | 7 ++
doc/guides/nics/enetfec.rst | 133 +++++++++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 84 ++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 46 +++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
drivers/net/enetfec/meson.build | 9 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 329 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e157e12f88..2aa81efe20 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -889,6 +889,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec - EXPERIMENTAL
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..f0460c3ea7
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,133 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into DPDK. The driver is tagged as **experimental** because the
+driver itself detects the UIO device, reads the addresses and mmaps them
+within the driver.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC is a hardware
+programmable packet forwarding engine that provides a high-performance
+Ethernet interface. It has a single 1 Gb Ethernet interface with an RJ45
+connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ .. code-block:: console
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ | |
+ | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ ====================+=========+======================
+ Hardware
+ +-----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+-------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the hardware
+blocks. The ENETFEC PMD uses the standard UIO interface to access the kernel
+for PHY initialisation and for mapping the register and buffer descriptor
+memory, allocated in the kernel, into DPDK, which gives access to
+non-cacheable memory for the buffer descriptors. net_enetfec is the logical
+Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device as a virtual device.
+- The RTE framework scans and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then used for
+  packet processing.
+- Then Tx is done first, followed by Rx, via the logical interface.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+There are three main prerequisites for executing the ENETFEC PMD on an i.MX 8M Mini
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+ .. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Root filesystem**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC has a dependency on the
+ **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
+ run DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
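+A minimal launch example, assuming the 'fec-uio' kernel driver is loaded and
+its UIO device is present:
+
+.. code-block:: console
+
+   ./dpdk-testpmd --vdev=net_enetfec -- -i
+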
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..6e80805cfd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -221,6 +221,11 @@ New Features
* Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
* Added socket direct mode bonding support.
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
+
* **Updated Solarflare network PMD.**
Updated the Solarflare ``sfc_efx`` driver with changes including:
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..50390f4ea4
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_pmd_logs.h"
+#include "enet_ethdev.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Release enetfec sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..e6b55e1ae6
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+/*
+ * ENETFEC can support 1 Rx and 1 Tx queue.
+ */
+
+#define ENETFEC_MAX_Q 1
+
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+ bool bufdesc_ex;
+ int full_duplex;
+ uint32_t quirks;
+ uint32_t enetfec_e_cntl;
+ int flag_csum;
+ int flag_pause;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE()	ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..6d6c64c94b
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..04be346509 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -19,6 +19,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v8 2/5] net/enetfec: add UIO support
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-09 11:34 ` Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (2 subsequent siblings)
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 11:34 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
Implemented the fec-uio driver in the kernel. The enetfec PMD uses the
UIO interface to interact with the "fec-uio" driver implemented in the
kernel for PHY initialisation and for mapping the register and BD memory
allocated in the kernel into DPDK, which gives access to non-cacheable
memory for the BDs.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 209 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 11 ++
drivers/net/enetfec/enet_regs.h | 106 +++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 676 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 50390f4ea4..fe6b5e539f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,14 +13,192 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_pmd_logs.h"
#include "enet_ethdev.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS RTE_BIT32(1)
+#define ENETFEC_RACC_PRODIS RTE_BIT32(2)
+#define ENETFEC_RACC_SHIFT16 RTE_BIT32(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE RTE_BIT32(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_BD_QUEUES 6
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+ uint32_t val;
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= RTE_BIT32(6);
+ ecntl |= RTE_BIT32(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ fep->enetfec_e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+static void
+enetfec_disable(struct enetfec_private *fep)
+{
+ rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR)
+ & ~(fep->enetfec_e_cntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
+
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
return 0;
}
@@ -32,6 +210,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -45,6 +225,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_BD_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index e6b55e1ae6..0f0684ab11 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -5,12 +5,23 @@
#ifndef __ENETFEC_ETHDEV_H__
#define __ENETFEC_ETHDEV_H__
+#include <rte_ethdev.h>
+
+/* full duplex */
+#define FULL_DUPLEX 0x00
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+
/*
* ENETFEC can support 1 rx and tx queue..
*/
#define ENETFEC_MAX_Q 1
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
struct enetfec_private {
struct rte_eth_dev *dev;
struct rte_eth_stats stats;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized clocks (250MHz) for
+ * generating delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..99e9dccf5a
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,284 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd <= 0)
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows maximum 5 different mapping for
+ each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+	/* Read mapping size and physical address expressed in hex (base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+	if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret < 0)
+				ENETFEC_PMD_ERR("Error: could not find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
+
+void
+enetfec_cleanup(struct enetfec_private *fep)
+{
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..fec8ba6f95
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(struct enetfec_private *fep);
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 6d6c64c94b..57f316b8a5 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -6,4 +6,5 @@ if not is_linux
reason = 'only supported on linux'
endif
-sources = files('enet_ethdev.c')
+sources = files('enet_ethdev.c',
+ 'enet_uio.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v8 3/5] net/enetfec: support queue configuration
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-11-09 11:34 ` Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 11:34 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception, the respective BD ring status bit is set,
which is then used for packet processing.
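As a rough illustration of that status bit (a sketch only: 'enetfec_rxbd_ready'
is a hypothetical helper, while 'struct bufdesc' and RX_BD_EMPTY come from the
headers in this series; the real receive loop lands in the Rx/Tx patch):

	#include <rte_byteorder.h>
	#include <rte_io.h>

	/* sketch: a descriptor is ready for the PMD once the hardware has
	 * cleared RX_BD_EMPTY in its control/status word.
	 */
	static inline int
	enetfec_rxbd_ready(struct bufdesc *bdp)
	{
		uint16_t status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));

		return (status & RX_BD_EMPTY) == 0;
	}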
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 222 +++++++++++++++++++++++++++++-
drivers/net/enetfec/enet_ethdev.h | 74 ++++++++++
2 files changed, 295 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index fe6b5e539f..f70489ff91 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,11 @@
#define NUM_OF_BD_QUEUES 6
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -186,10 +191,225 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_pktlen = ENETFEC_MAX_RX_PKT_LEN;
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return errno;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 0f0684ab11..babc7190fb 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,8 +10,13 @@
/* full duplex */
#define FULL_DUPLEX 0x00
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
#define PKT_MAX_BUF_SIZE 1984
#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ENETFEC_MAX_RX_PKT_LEN 3000
+
+#define __iomem
/*
* ENETFEC can support 1 rx and tx queue..
@@ -22,6 +27,49 @@
#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
#define readl(p) rte_read32(p)
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
struct enetfec_private {
struct rte_eth_dev *dev;
struct rte_eth_stats stats;
@@ -54,4 +102,30 @@ struct enetfec_private {
struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
};
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
#endif /*__ENETFEC_ETHDEV_H__*/
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v8 4/5] net/enetfec: add Rx/Tx support
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
` (2 preceding siblings ...)
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-09 11:34 ` Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 11:34 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. Loopback mode is also added; the compile-time flag 'ENETFEC_LOOPBACK'
is used to enable this feature. By default loopback mode is disabled.
Basic features such as promiscuous mode and basic stats are also added.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
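For reference, a minimal forwarding loop that exercises the new burst ops
through the standard ethdev API (a sketch only; port and queue ids and the
burst size are assumptions):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Receive a burst on queue 0 and transmit it back out of the same port.
 * Note: this PMD currently rejects multi-segment mbufs (SG not supported).
 */
static void
fwd_one_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
	if (nb_rx == 0)
		return;

	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

	/* free whatever the PMD did not accept for transmission */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}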
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 183 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 23 +++
drivers/net/enetfec/enet_rxtx.c | 220 +++++++++++++++++++++++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 432 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index f0460c3ea7..34af51c461 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..3d8aa5b627 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Basic stats = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index f70489ff91..8c8788ad8f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -39,6 +39,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_BD_QUEUES 6
/* Supported Rx offloads */
@@ -152,6 +154,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
@@ -165,6 +201,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -191,6 +229,100 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -202,6 +334,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -407,6 +551,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -432,6 +582,9 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr = {
+ .addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
+ };
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -474,6 +627,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ RTE_ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -482,6 +650,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -489,6 +659,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -496,11 +668,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Release enetfec sw device");
+ enetfec_cleanup(fep);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index babc7190fb..c6f8cf7f03 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -7,6 +7,10 @@
#include <rte_ethdev.h>
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+
/* full duplex */
#define FULL_DUPLEX 0x00
@@ -17,6 +21,20 @@
#define ENETFEC_MAX_RX_PKT_LEN 3000
#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
/*
* ENETFEC can support 1 rx and tx queue..
@@ -128,4 +146,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
}
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..4e6a263e67
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
+ * When update through the ring, just set the empty indicator.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_DP_LOG(DEBUG, "rx_fifo_error");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_DP_LOG(DEBUG, "rx_length_error");
+ if (status & RX_BD_LAST)
+ ENETFEC_DP_LOG(DEBUG, "rcv is not +last");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_DP_LOG(DEBUG, "rx_crc_errors");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_DP_LOG(DEBUG, "rx_frame_error");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ uint8_t *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_PMD_DEBUG("SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return nb_pkts;
+}
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 57f316b8a5..79dca58dea 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -7,4 +7,5 @@ if not is_linux
endif
sources = files('enet_ethdev.c',
- 'enet_uio.c')
+ 'enet_uio.c',
+ 'enet_rxtx.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v8 5/5] net/enetfec: add features
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
` (3 preceding siblings ...)
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-09 11:34 ` Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 11:34 UTC (permalink / raw)
To: david.marchand, ferruh.yigit, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offloads to the enetfec network
poll mode driver.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
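An illustrative sketch of requesting the new Rx offloads at configure time and
inspecting the resulting mbuf flags after rte_eth_rx_burst() (the port id and
queue counts are assumptions):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
configure_with_rx_offloads(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };

	/* request the offloads advertised in dev_info (checksum + VLAN) */
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			       RTE_ETH_RX_OFFLOAD_VLAN;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

/* per-packet offload results are reported in ol_flags */
static void
check_rx_flags(const struct rte_mbuf *m)
{
	if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
		printf("stripped VLAN, tci 0x%04x\n", m->vlan_tci);
	if (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_BAD)
		printf("IP checksum reported bad\n");
}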
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 14 ++++++-
drivers/net/enetfec/enet_ethdev.h | 1 +
drivers/net/enetfec/enet_regs.h | 10 +++++
drivers/net/enetfec/enet_rxtx.c | 55 +++++++++++++++++++++++++++-
6 files changed, 82 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 34af51c461..3a7c4e7468 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -86,6 +86,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 3d8aa5b627..2a34351b43 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -5,6 +5,9 @@
;
[Features]
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Basic stats = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8c8788ad8f..80ee452a19 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -79,7 +79,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -191,7 +195,12 @@ enet_free_buffers(struct rte_eth_dev *dev)
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
return 0;
@@ -570,6 +579,7 @@ enetfec_eth_init(struct rte_eth_dev *dev)
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index c6f8cf7f03..60b920d414 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,6 +10,7 @@
#define BD_LEN 49152
#define ENETFEC_TX_FR_SIZE 2048
#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
/* full duplex */
#define FULL_DUPLEX 0x00
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 4e6a263e67..005c1d1135 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -5,6 +5,7 @@
#include <signal.h>
#include <rte_mbuf.h>
#include <rte_io.h>
+#include <ethdev_driver.h>
#include "enet_regs.h"
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
@@ -22,9 +23,14 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
@@ -77,6 +83,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -86,6 +93,48 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, RTE_ETHER_ADDR_LEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* don't check it */
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ } else {
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED
+ | RTE_MBUF_F_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -186,6 +235,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == RTE_MBUF_F_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v7 4/5] net/enetfec: add Rx/Tx support
2021-11-04 18:28 ` Ferruh Yigit
@ 2021-11-09 16:20 ` Apeksha Gupta
0 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-09 16:20 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, November 4, 2021 11:58 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>; david.marchand@redhat.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [PATCH v7 4/5] net/enetfec: add Rx/Tx support
>
> Caution: EXT Email
>
> On 11/3/2021 7:20 PM, Apeksha Gupta wrote:
> > This patch adds burst enqueue and dequeue operations to the enetfec
> > PMD. Loopback mode is also added, compile time flag 'ENETFEC_LOOPBACK'
> is
> > used to enable this feature. By default loopback mode is disabled.
> > Basic features added like promiscuous enable, basic stats.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
> > +static int
> > +enetfec_eth_link_update(struct rte_eth_dev *dev,
> > + int wait_to_complete __rte_unused)
> > +{
> > + struct rte_eth_link link;
> > + unsigned int lstatus = 1;
> > +
> > + if (dev == NULL) {
>
> 'dev' can't be null for a dev_ops,
> unless it is called internally in PMD which seems not the case here.
[Apeksha] okay.
>
> > + ENETFEC_PMD_ERR("Invalid device in link_update.");
> > + return 0;
> > + }
> > +
> > + memset(&link, 0, sizeof(struct rte_eth_link));
> > +
> > + link.link_status = lstatus;
> > + link.link_speed = ETH_SPEED_NUM_1G;
>
> Isn't there an actual way to get real link status from device?
[Apeksha] 'Get link status' feature support is yet to be implemented.
>
> > +
> > + ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
> > + "Up");
> > +
> > + return rte_eth_linkstatus_set(dev, &link);
> > +}
> > +
>
> <...>
>
> > @@ -501,6 +658,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> > fep->bd_addr_p = fep->bd_addr_p + bdsize;
> > }
> >
> > + /* Copy the station address into the dev structure, */
> > + dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
> > + if (dev->data->mac_addrs == NULL) {
> > + ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC
> addresses",
> > + ETHER_ADDR_LEN);
> > + rc = -ENOMEM;
> > + goto err;
> > + }
> > +
> > + /*
> > + * Set default mac address
> > + */
> > + enetfec_set_mac_address(dev, &macaddr);
>
> In each device start, a different MAC address is set by 'enetfec_restart()',
> I also put some comment there, but there seems two different MAC address set
> in two different parts of the driver.
>
> > +
> > + fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
> > rc = enetfec_eth_init(dev);
> > if (rc)
> > goto failed_init;
> > @@ -509,6 +681,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> >
> > failed_init:
> > ENETFEC_PMD_ERR("Failed to init");
> > +err:
> > + rte_eth_dev_release_port(dev);
> > return rc;
> > }
> >
> > @@ -516,6 +690,8 @@ static int
> > pmd_enetfec_remove(struct rte_vdev_device *vdev)
> > {
> > struct rte_eth_dev *eth_dev = NULL;
> > + struct enetfec_private *fep;
> > + struct enetfec_priv_rx_q *rxq;
> > int ret;
> >
> > /* find the ethdev entry */
> > @@ -523,11 +699,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
> > if (eth_dev == NULL)
> > return -ENODEV;
> >
> > + fep = eth_dev->data->dev_private;
> > + /* Free descriptor base of first RX queue as it was configured
> > + * first in enetfec_eth_init().
> > + */
> > + rxq = fep->rx_queues[0];
> > + rte_free(rxq->bd.base);
> > + enet_free_queue(eth_dev);
> > + enetfec_eth_stop(eth_dev);
> > +
> > ret = rte_eth_dev_release_port(eth_dev);
> > if (ret != 0)
> > return -EINVAL;
> >
> > ENETFEC_PMD_INFO("Release enetfec sw device");
> > + munmap(fep->hw_baseaddr_v, fep->cbus_size);
>
> instead of unmap directly here, what about having a function in 'enet_uio.c',
> and call that cleanup function from here?
[Apeksha] Yes, this can be done. Updated in v8 series.
>
> > +
> > return 0;
> > }
> >
> > diff --git a/drivers/net/enetfec/enet_ethdev.h
> b/drivers/net/enetfec/enet_ethdev.h
> > index 36202ba6c7..e48f958ad9 100644
> > --- a/drivers/net/enetfec/enet_ethdev.h
> > +++ b/drivers/net/enetfec/enet_ethdev.h
> > @@ -7,6 +7,11 @@
> >
> > #include <rte_ethdev.h>
> >
> > +#define ETHER_ADDR_LEN 6
>
> Below defines 'ETH_ALEN', seems for same reason.
>
> And in DPDK we already have 'RTE_ETHER_ADDR_LEN'.
>
> Tor prevent all these redundancy, why not drop 'ETH_ALEN' &
> 'ETHER_ADDR_LEN',
> and just use 'RTE_ETHER_ADDR_LEN'.
[Apeksha] Okay, we will remove this redundancy and use 'RTE_ETHER_ADDR_LEN'.
>
> > +#define BD_LEN 49152
> > +#define ENETFEC_TX_FR_SIZE 2048
> > +#define ETH_HLEN RTE_ETHER_HDR_LEN
> > +
> > /* full duplex or half duplex */
> > #define HALF_DUPLEX 0x00
> > #define FULL_DUPLEX 0x01
> > @@ -19,6 +24,20 @@
> > #define ETH_ALEN RTE_ETHER_ADDR_LEN
> >
> > #define __iomem
> > +#if defined(RTE_ARCH_ARM)
> > +#if defined(RTE_ARCH_64)
> > +#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
> > +#define dcbf_64(p) dcbf(p)
> > +
> > +#else /* RTE_ARCH_32 */
> > +#define dcbf(p) RTE_SET_USED(p)
> > +#define dcbf_64(p) dcbf(p)
> > +#endif
> > +
> > +#else
> > +#define dcbf(p) RTE_SET_USED(p)
> > +#define dcbf_64(p) dcbf(p)
> > +#endif
> >
> > /*
> > * ENETFEC with AVB IP can support maximum 3 rx and tx queues.
> > @@ -142,4 +161,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct
> bufdesc_prop *bd)
> > return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> > }
> >
> > +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf
> **rx_pkts,
> > + uint16_t nb_pkts);
> > +uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > + uint16_t nb_pkts);
> > +
> > #endif /*__ENETFEC_ETHDEV_H__*/
> > diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
> > new file mode 100644
> > index 0000000000..6ac4624553
> > --- /dev/null
> > +++ b/drivers/net/enetfec/enet_rxtx.c
> > @@ -0,0 +1,445 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2021 NXP
> > + */
> > +
> > +#include <signal.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_io.h>
> > +#include "enet_regs.h"
> > +#include "enet_ethdev.h"
> > +#include "enet_pmd_logs.h"
> > +
> > +#define ENETFEC_LOOPBACK 0
> > +#define ENETFEC_DUMP 0
>
> There was a request to convert them into devargs, again seems
> silently ignored, copy/paste from previous version:
>
> Instead of compile time flags, why not convert them to devargs so
> they can be updated without recompile?
> This also make sure all code is enabled and prevent possible dead
> code by time.
>
> > +
> > +#if ENETFEC_DUMP
> > +static void
> > +enet_dump(struct enetfec_priv_tx_q *txq)
> > +{
> > + struct bufdesc *bdp;
> > + int index = 0;
> > +
> > + ENETFEC_PMD_DEBUG("TX ring dump\n");
> > + ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
> > +
> > + bdp = txq->bd.base;
> > + do {
> > + ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
> > + index,
> > + bdp == txq->bd.cur ? 'S' : ' ',
> > + bdp == txq->dirty_tx ? 'H' : ' ',
> > + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> > + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> > + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> > + txq->tx_mbuf[index]);
> > + bdp = enet_get_nextdesc(bdp, &txq->bd);
> > + index++;
> > + } while (bdp != txq->bd.base);
> > +}
> > +
> > +static void
> > +enet_dump_rx(struct enetfec_priv_rx_q *rxq)
> > +{
> > + struct bufdesc *bdp;
> > + int index = 0;
> > +
> > + ENETFEC_PMD_DEBUG("RX ring dump\n");
> > + ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
> > +
> > + bdp = rxq->bd.base;
> > + do {
> > + ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
> > + index,
> > + bdp == rxq->bd.cur ? 'S' : ' ',
> > + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> > + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> > + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> > + rxq->rx_mbuf[index]);
> > + rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
> > + rxq->rx_mbuf[index]->pkt_len);
> > + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> > + index++;
> > + } while (bdp != rxq->bd.base);
> > +}
> > +#endif
> > +
> > +#if ENETFEC_LOOPBACK
> > +static volatile bool lb_quit;
> > +
> > +static void fec_signal_handler(int signum)
> > +{
> > + if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
> > + ENETFEC_PMD_INFO("\n\n %s: Signal %d received, preparing to
> exit...\n",
> > + __func__, signum);
> > + lb_quit = true;
> > + }
> > +}
> > +
>
> Again another comment ignored from previos version, we shouldn't have
> signals handled by driver, that is application task.
>
> I am stopping reviewing here, there are too many things from previous
> version just ignored, can you please check/answer comments on the
> previous version of the set?
All of the above comments are addressed in the v8 patch series.
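Not the driver's actual implementation, but a minimal sketch of what the
requested devargs handling could look like in probe, assuming a hypothetical
'loopback' key (names and semantics are illustrative only):

#include <stdlib.h>
#include <errno.h>
#include <rte_common.h>
#include <rte_kvargs.h>

#define ENETFEC_DEVARG_LOOPBACK "loopback"	/* hypothetical key */

static const char * const enetfec_valid_args[] = {
	ENETFEC_DEVARG_LOOPBACK,
	NULL,
};

static int
handle_loopback_arg(const char *key __rte_unused, const char *value,
		    void *opaque)
{
	*(int *)opaque = atoi(value);
	return 0;
}

/* called from probe with rte_vdev_device_args(vdev) */
static int
parse_enetfec_devargs(const char *args, int *loopback)
{
	struct rte_kvargs *kvlist;
	int ret = 0;

	if (args == NULL || args[0] == '\0')
		return 0;
	kvlist = rte_kvargs_parse(args, enetfec_valid_args);
	if (kvlist == NULL)
		return -EINVAL;
	if (rte_kvargs_count(kvlist, ENETFEC_DEVARG_LOOPBACK))
		ret = rte_kvargs_process(kvlist, ENETFEC_DEVARG_LOOPBACK,
					 handle_loopback_arg, loopback);
	rte_kvargs_free(kvlist);
	return ret;
}

The signal handling called out above would similarly move to the application,
e.g. a volatile force-quit flag set from the application's own handler.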
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-11-10 7:48 ` Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce " Apeksha Gupta
` (4 more replies)
0 siblings, 5 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-10 7:48 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC (Fast Ethernet
Controller) is a network poll mode driver for the inbuilt NIC found in
the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 designs the UIO interface so that user space can communicate
directly with a UIO based hardware device. The UIO interface mmaps the
Control and Status Registers (CSR) and BD memory, allocated in the kernel,
into DPDK; this gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support, along with some basic features
such as promiscuous mode and basic stats.
Patch 5 adds checksum and VLAN offload features.
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add Rx/Tx support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 137 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 706 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 155 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 273 ++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 11 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 1808 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
@ 2021-11-10 7:48 ` Apeksha Gupta
2021-11-10 13:53 ` Ferruh Yigit
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 2/5] net/enetfec: add UIO support Apeksha Gupta
` (3 subsequent siblings)
4 siblings, 2 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-10 7:48 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton for the enetfec driver with a probe function.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
v9:
- Fix document build warning
v8:
- Rework of technical comments
v7:
- Fix compilation
- code cleanup
v6:
- Fix document build errors
---
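For trying the skeleton, the device can be instantiated either with EAL's
--vdev=net_enetfec option (as described in the guide below) or
programmatically; a minimal sketch of the latter (error handling trimmed, the
ethdev name is assumed to match the vdev name):

#include <rte_bus_vdev.h>
#include <rte_ethdev.h>

static int
create_enetfec_port(uint16_t *port_id)
{
	int ret;

	/* equivalent of passing --vdev=net_enetfec on the EAL command line */
	ret = rte_vdev_init("net_enetfec", NULL);
	if (ret != 0)
		return ret;

	return rte_eth_dev_get_port_by_name("net_enetfec", port_id);
}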
MAINTAINERS | 7 ++
doc/guides/nics/enetfec.rst | 133 +++++++++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 84 ++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 46 +++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
drivers/net/enetfec/meson.build | 9 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 329 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e157e12f88..2aa81efe20 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -889,6 +889,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec - EXPERIMENTAL
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..2c29ad362c
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,133 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into DPDK. The driver is marked **experimental** because the
+driver itself detects the UIO device, reads the register addresses and
+mmaps them within the driver.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. ENETFEC is a hardware
+programmable packet forwarding engine that provides a high-performance
+Ethernet interface. It has a single 1 Gbps Ethernet interface with an
+RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ .. code-block:: console
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ | |
+ | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ ====================+=========+======================
+ Hardware
+ +-----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+-------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the hardware
+blocks. The ENETFEC PMD uses the standard UIO interface to access the kernel
+for PHY initialisation and for mapping the register and buffer descriptor
+memory allocated in the kernel into DPDK, which gives access to non-cacheable
+memory for the buffer descriptors. net_enetfec is the logical Ethernet
+interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device as a virtual (vdev) device.
+- The RTE framework scans and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx the respective BD ring status bit is set, which is then used for
+ packet processing.
+- Tx is then done first, followed by Rx, via the logical interface.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+The following are the main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+.. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Root filesystem**
+
+ Any *aarch64* supporting root filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC depends on
+ the **rte_bus_vdev** library and it is mandatory to use the `--vdev` option with the
+ value `net_enetfec` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**
+
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..6e80805cfd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -221,6 +221,11 @@ New Features
* Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
* Added socket direct mode bonding support.
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
+
* **Updated Solarflare network PMD.**
Updated the Solarflare ``sfc_efx`` driver with changes including:
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..50390f4ea4
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_pmd_logs.h"
+#include "enet_ethdev.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Release enetfec sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..e6b55e1ae6
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+/*
+ * ENETFEC can support 1 rx and tx queue..
+ */
+
+#define ENETFEC_MAX_Q 1
+
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+ bool bufdesc_ex;
+ int full_duplex;
+ uint32_t quirks;
+ uint32_t enetfec_e_cntl;
+ int flag_csum;
+ int flag_pause;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
+};
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..6d6c64c94b
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files('enet_ethdev.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..04be346509 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -19,6 +19,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v9 2/5] net/enetfec: add UIO support
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-10 7:48 ` Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (2 subsequent siblings)
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-10 7:48 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
Implemented the fec-uio driver in the kernel. The enetfec PMD uses the
UIO interface to interact with the "fec-uio" kernel driver for PHY
initialisation and for mapping the register and buffer descriptor (BD)
memory allocated in the kernel into DPDK, which gives access to
non-cacheable memory for the BDs.
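As a reference for how such a UIO mapping is typically consumed from user
space, a minimal sketch is shown below (map 0 carries the register block and
map 1 the BD memory, selected through the mmap offset in page-size units);
the device node, map size handling and helper name are illustrative
assumptions, not part of this patch:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch only: map region 'map_id' of an assumed /dev/uio0 into user space.
 * With UIO, the mmap offset selects the map: offset = map_id * page size.
 * The real size of each map is normally read from
 * /sys/class/uio/uio0/maps/mapN/size, as this patch does.
 */
static void *map_uio_region(int map_id, size_t size)
{
	void *va;
	int fd = open("/dev/uio0", O_RDWR);	/* assumed device node */

	if (fd < 0)
		return NULL;
	va = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		  fd, (off_t)map_id * sysconf(_SC_PAGESIZE));
	close(fd);	/* the mapping stays valid after close */
	return va == MAP_FAILED ? NULL : va;
}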
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 209 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 11 ++
drivers/net/enetfec/enet_regs.h | 106 +++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 676 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 50390f4ea4..fe6b5e539f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,14 +13,192 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_pmd_logs.h"
#include "enet_ethdev.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS RTE_BIT32(1)
+#define ENETFEC_RACC_PRODIS RTE_BIT32(2)
+#define ENETFEC_RACC_SHIFT16 RTE_BIT32(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE RTE_BIT32(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_BD_QUEUES 6
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+ uint32_t val;
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= RTE_BIT32(6);
+ ecntl |= RTE_BIT32(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ fep->enetfec_e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+static void
+enetfec_disable(struct enetfec_private *fep)
+{
+ rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR)
+ & ~(fep->enetfec_e_cntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
+
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
return 0;
}
@@ -32,6 +210,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -45,6 +225,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_BD_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index e6b55e1ae6..0f0684ab11 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -5,12 +5,23 @@
#ifndef __ENETFEC_ETHDEV_H__
#define __ENETFEC_ETHDEV_H__
+#include <rte_ethdev.h>
+
+/* full duplex */
+#define FULL_DUPLEX 0x00
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+
/*
* ENETFEC can support 1 rx and tx queue..
*/
#define ENETFEC_MAX_Q 1
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
struct enetfec_private {
struct rte_eth_dev *dev;
struct rte_eth_stats stats;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* The i.MX8 ENETFEC IP version added the ability to generate delayed TXC or
+ * RXC. For this, ENETFEC uses synchronized 250 MHz clocks to generate a
+ * delay of 2 ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..99e9dccf5a
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,284 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd < 0) {
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+ return fd;
+ }
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NUL-terminate the string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows a maximum of 5 different mappings
+ * per device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address expressed in hex (base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret < 0)
+ ENETFEC_PMD_ERR("Error: not find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
+
+void
+enetfec_cleanup(struct enetfec_private *fep)
+{
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..fec8ba6f95
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of a UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for a UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(struct enetfec_private *fep);
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 6d6c64c94b..57f316b8a5 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -6,4 +6,5 @@ if not is_linux
reason = 'only supported on linux'
endif
-sources = files('enet_ethdev.c')
+sources = files('enet_ethdev.c',
+ 'enet_uio.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v9 3/5] net/enetfec: support queue configuration
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-11-10 7:48 ` Apeksha Gupta
2021-11-10 13:54 ` Ferruh Yigit
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-10 7:48 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception the corresponding BD ring status bit is set,
and it is then used during packet processing.
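As a usage reference, here is a minimal sketch of how an application would
set up the single queue pair exposed by this PMD through the standard ethdev
API; the port id, ring size and mempool handling are illustrative assumptions:

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch only: configure one Rx and one Tx queue on an already configured
 * port. ENETFEC exposes a single queue pair and caps ring sizes at 512.
 */
static int enetfec_example_setup_queues(uint16_t port_id,
					struct rte_mempool *mb_pool)
{
	int socket = rte_eth_dev_socket_id(port_id);
	int ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 256, socket,
				     NULL /* default rxconf */, mb_pool);
	if (ret < 0)
		return ret;
	return rte_eth_tx_queue_setup(port_id, 0, 256, socket,
				      NULL /* default txconf */);
}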
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 222 +++++++++++++++++++++++++++++-
drivers/net/enetfec/enet_ethdev.h | 74 ++++++++++
2 files changed, 295 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index fe6b5e539f..f70489ff91 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,11 @@
#define NUM_OF_BD_QUEUES 6
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -186,10 +191,225 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_pktlen = ENETFEC_MAX_RX_PKT_LEN;
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return errno;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 0f0684ab11..babc7190fb 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,8 +10,13 @@
/* full duplex */
#define FULL_DUPLEX 0x00
+#define MAX_TX_BD_RING_SIZE 512 /* must be a power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
#define PKT_MAX_BUF_SIZE 1984
#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ENETFEC_MAX_RX_PKT_LEN 3000
+
+#define __iomem
/*
* ENETFEC can support 1 Rx and 1 Tx queue.
@@ -22,6 +27,49 @@
#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
#define readl(p) rte_read32(p)
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
struct enetfec_private {
struct rte_eth_dev *dev;
struct rte_eth_stats stats;
@@ -54,4 +102,30 @@ struct enetfec_private {
struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
};
+static inline struct bufdesc *
+enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+static inline struct bufdesc *
+enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
#endif /*__ENETFEC_ETHDEV_H__*/
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v9 4/5] net/enetfec: add Rx/Tx support
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (2 preceding siblings ...)
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-10 7:48 ` Apeksha Gupta
2021-11-10 13:56 ` Ferruh Yigit
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-10 7:48 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. Loopback mode is also added; the compile-time flag
'ENETFEC_LOOPBACK' is used to enable this feature, and loopback mode is
disabled by default. Basic features such as promiscuous mode enable and
basic statistics are also added.
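For reference, a minimal sketch of the burst Rx/Tx path as seen from an
application once the port is started; the port/queue ids, burst size and the
simple forwarding policy are illustrative assumptions:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define EXAMPLE_BURST_SIZE 32	/* assumed burst size */

/* Sketch only: receive a burst on queue 0 and transmit it back on queue 0,
 * freeing whatever the Tx ring could not accept.
 */
static void enetfec_example_fwd_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[EXAMPLE_BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, EXAMPLE_BURST_SIZE);
	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}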
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 183 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 23 +++
drivers/net/enetfec/enet_rxtx.c | 220 +++++++++++++++++++++++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 432 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 2c29ad362c..63d6fb81ee 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..3d8aa5b627 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Basic stats = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index f70489ff91..8c8788ad8f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -39,6 +39,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_BD_QUEUES 6
/* Supported Rx offloads */
@@ -152,6 +154,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
@@ -165,6 +201,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -191,6 +229,100 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -202,6 +334,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->tx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -407,6 +551,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -432,6 +582,9 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr = {
+ .addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
+ };
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -474,6 +627,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ RTE_ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -482,6 +650,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -489,6 +659,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -496,11 +668,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Release enetfec sw device");
+ enetfec_cleanup(fep);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index babc7190fb..c6f8cf7f03 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -7,6 +7,10 @@
#include <rte_ethdev.h>
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+
/* full duplex */
#define FULL_DUPLEX 0x00
@@ -17,6 +21,20 @@
#define ENETFEC_MAX_RX_PKT_LEN 3000
#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
/*
* ENETFEC can support 1 rx and tx queue..
@@ -128,4 +146,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
}
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..4e6a263e67
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+/* This function performs the enetfec Rx queue processing: it dequeues packets
+ * from the Rx queue and, while walking the ring, simply sets the empty
+ * indicator on each processed descriptor.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_DP_LOG(DEBUG, "rx_fifo_error");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_DP_LOG(DEBUG, "rx_length_error");
+ if (status & RX_BD_LAST)
+ ENETFEC_DP_LOG(DEBUG, "rcv is not +last");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_DP_LOG(DEBUG, "rx_crc_errors");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_DP_LOG(DEBUG, "rx_frame_error");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ uint8_t *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_PMD_DEBUG("SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return nb_pkts;
+}
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 57f316b8a5..79dca58dea 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -7,4 +7,5 @@ if not is_linux
endif
sources = files('enet_ethdev.c',
- 'enet_uio.c')
+ 'enet_uio.c',
+ 'enet_rxtx.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [dpdk-dev] [PATCH v9 5/5] net/enetfec: add features
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (3 preceding siblings ...)
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-10 7:48 ` Apeksha Gupta
2021-11-10 13:57 ` Ferruh Yigit
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-10 7:48 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offloads to the enetfec network
poll mode driver.
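For reference, a minimal sketch of how an application would request these Rx
offloads at configuration time and inspect the resulting mbuf flags; the port
id and the single queue pair are illustrative assumptions:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch only: request Rx checksum and VLAN offloads before starting the
 * port, then look at the per-packet flags set by the PMD.
 */
static int enetfec_example_enable_rx_offloads(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};

	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			       RTE_ETH_RX_OFFLOAD_VLAN;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

static void enetfec_example_check_flags(const struct rte_mbuf *m)
{
	if (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_BAD)
		printf("IP checksum reported bad by hardware\n");
	if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
		printf("VLAN tag %u stripped by hardware\n", m->vlan_tci);
}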
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 14 ++++++-
drivers/net/enetfec/enet_ethdev.h | 1 +
drivers/net/enetfec/enet_regs.h | 10 +++++
drivers/net/enetfec/enet_rxtx.c | 55 +++++++++++++++++++++++++++-
6 files changed, 82 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 63d6fb81ee..6734986c6d 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -86,6 +86,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 3d8aa5b627..2a34351b43 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -5,6 +5,9 @@
;
[Features]
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Basic stats = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8c8788ad8f..80ee452a19 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -79,7 +79,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -191,7 +195,12 @@ enet_free_buffers(struct rte_eth_dev *dev)
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
return 0;
@@ -570,6 +579,7 @@ enetfec_eth_init(struct rte_eth_dev *dev)
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index c6f8cf7f03..60b920d414 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,6 +10,7 @@
#define BD_LEN 49152
#define ENETFEC_TX_FR_SIZE 2048
#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
/* full duplex */
#define FULL_DUPLEX 0x00
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 4e6a263e67..005c1d1135 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -5,6 +5,7 @@
#include <signal.h>
#include <rte_mbuf.h>
#include <rte_io.h>
+#include <ethdev_driver.h>
#include "enet_regs.h"
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
@@ -22,9 +23,14 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
@@ -77,6 +83,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -86,6 +93,48 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, RTE_ETHER_ADDR_LEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* don't check it */
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ } else {
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED
+ | RTE_MBUF_F_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -186,6 +235,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == RTE_MBUF_F_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-10 13:53 ` Ferruh Yigit
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
1 sibling, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-10 13:53 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
> ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC i.MX 8M Mini.
>
> This patch adds skeleton for enetfec driver with probe function.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> @@ -0,0 +1,133 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 NXP
> +
> +ENETFEC Poll Mode Driver
> +========================
> +
> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
> +
> +More information can be found at NXP Official Website
> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> +
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK. Driver is taken as **experimental** as driver
> +itself detects the uio device, reads address and mmap them within the
> +driver.
What about something like:
"Driver is taken as **experimental** as driver depends on a Linux kernel
module, 'fec-uio', which is not upstreamed yet."
<...>
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc;
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
I am not really sure if this check is required, although it doesn't hurt.
Can you please share the call stack showing how it can be NULL?
<...>
> +struct enetfec_private {
> + struct rte_eth_dev *dev;
> + struct rte_eth_stats stats;
> + struct rte_mempool *pool;
> + uint16_t max_rx_queues;
> + uint16_t max_tx_queues;
> + unsigned int total_tx_ring_size;
> + unsigned int total_rx_ring_size;
> + bool bufdesc_ex;
> + int full_duplex;
> + uint32_t quirks;
> + uint32_t enetfec_e_cntl;
> + int flag_csum;
> + int flag_pause;
> + bool rgmii_txc_delay;
> + bool rgmii_rxc_delay;
> + int link;
> + void *hw_baseaddr_v;
> + uint64_t hw_baseaddr_p;
> + void *bd_addr_v;
> + uint64_t bd_addr_p;
> + uint64_t bd_addr_p_r[ENETFEC_MAX_Q];
> + uint64_t bd_addr_p_t[ENETFEC_MAX_Q];
> + void *dma_baseaddr_r[ENETFEC_MAX_Q];
> + void *dma_baseaddr_t[ENETFEC_MAX_Q];
> + uint64_t cbus_size;
> + unsigned int reg_size;
> + unsigned int bd_size;
> + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
> + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
What do you think about constructing the struct as the fields are used? In this
patch only 'dev' seems needed.
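As a sketch of that suggestion (this simply mirrors what the field usage in the
skeleton patch allows, it is not a prescribed layout), patch 1 could carry only:
	/* minimal private data for the skeleton patch; later patches add
	 * queues, BD addresses, quirks, etc. as they start using them */
	struct enetfec_private {
		struct rte_eth_dev *dev;
	};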
<...>
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> new file mode 100644
> index 0000000000..6d6c64c94b
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,9 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
> +endif
> +
> +sources = files('enet_ethdev.c')
./devtools/check-meson.py is failing, can you please fix the meson syntax accordingly,
not only in this patch but in all patches that update the meson file.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [dpdk-dev] [PATCH v9 3/5] net/enetfec: support queue configuration
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-10 13:54 ` Ferruh Yigit
2021-11-13 5:00 ` [EXT] " Apeksha Gupta
2021-11-15 10:06 ` Ferruh Yigit
0 siblings, 2 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-10 13:54 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
> This patch adds Rx/Tx queue configuration setup operations.
> On packet reception the respective BD Ring status bit is set
> which is then used for packet processing.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> +
> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
Isn't 'fep->bd_addr_p_t[]' a 64-bit value?
<...>
> +
> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
Isn't 'fep->bd_addr_p_r[]' a 64-bit address, why doing endianness operation
only on 32-bit and writing only 32-bit of it to register?
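Purely to illustrate the truncation concern (the address value below is invented
for the example, this is not driver code):
	#include <inttypes.h>
	#include <stdio.h>
	int main(void)
	{
		/* hypothetical BD ring bus address that needs more than 32 bits */
		uint64_t bd_addr_p = 0x00000001fff00000ULL;
		/* what a plain 32-bit register write ends up with */
		uint32_t reg_val = (uint32_t)bd_addr_p;
		printf("0x%" PRIx32 "\n", reg_val); /* prints 0xfff00000, bit 32 is lost */
		return 0;
	}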
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v9 4/5] net/enetfec: add Rx/Tx support
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-10 13:56 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-10 13:56 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
> This patch adds burst enqueue and dequeue operations to the enetfec
> PMD. Loopback mode is also added, compile time flag 'ENETFEC_LOOPBACK' is
> used to enable this feature. By default loopback mode is disabled.
> Basic features added like promiscuous enable, basic stats.
>
Commit log needs to be updated since 'ENETFEC_LOOPBACK' no longer exists.
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> +static int
> +enetfec_eth_link_update(struct rte_eth_dev *dev,
> + int wait_to_complete __rte_unused)
> +{
> + struct rte_eth_link link;
> + unsigned int lstatus = 1;
> +
> + memset(&link, 0, sizeof(struct rte_eth_link));
> +
> + link.link_status = lstatus;
> + link.link_speed = ETH_SPEED_NUM_1G;
Can you please use updated macro: RTE_ETH_SPEED_NUM_1G
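A sketch of the function with the 21.11 macro names (the end of the function is
cut in the quote above, so the rte_eth_linkstatus_set() tail here is an
assumption, not a requirement):
	static int
	enetfec_eth_link_update(struct rte_eth_dev *dev,
			int wait_to_complete __rte_unused)
	{
		struct rte_eth_link link;
		memset(&link, 0, sizeof(struct rte_eth_link));
		link.link_status = RTE_ETH_LINK_UP;	/* was: lstatus = 1 */
		link.link_speed = RTE_ETH_SPEED_NUM_1G;	/* was: ETH_SPEED_NUM_1G */
		return rte_eth_linkstatus_set(dev, &link);
	}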
<...>
> +/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
> + * When update through the ring, just set the empty indicator.
> + */
> +uint16_t
> +enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
I am sure 'rx_pkts' is used, so '__rte_unused' can be dropped.
> + uint16_t nb_pkts)
> +{
> + struct rte_mempool *pool;
> + struct bufdesc *bdp;
> + struct rte_mbuf *mbuf, *new_mbuf = NULL;
> + unsigned short status;
> + unsigned short pkt_len;
> + int pkt_received = 0, index = 0;
> + void *data;
> + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> + struct rte_eth_stats *stats = &rxq->fep->stats;
> + pool = rxq->pool;
> + bdp = rxq->bd.cur;
> +
> + /* Process the incoming packet */
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> + while ((status & RX_BD_EMPTY) == 0) {
> + if (pkt_received >= nb_pkts)
> + break;
> +
> + new_mbuf = rte_pktmbuf_alloc(pool);
> + if (unlikely(new_mbuf == NULL)) {
> + stats->ierrors++;
'rx_mbuf_alloc_failed' is used to store mbuf alloc failures, not 'ierrors'.
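One possible shape, as a sketch against the structures already in this patch
(not a drop-in diff):
	new_mbuf = rte_pktmbuf_alloc(pool);
	if (unlikely(new_mbuf == NULL)) {
		/* account the failure in the ethdev-provided counter
		 * instead of bumping ierrors */
		rxq->fep->dev->data->rx_mbuf_alloc_failed++;
		break;
	}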
<...>
> +
> + if (mbuf->nb_segs > 1) {
> + ENETFEC_PMD_DEBUG("SG not supported");
It is not a good idea to use dynamic debug macros in the datapath.
'ENETFEC_DP_LOG()' is the one to use.
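For example, with the macro already defined in enet_pmd_logs.h (only the log
call is shown here, the rest of the branch is unchanged):
	if (mbuf->nb_segs > 1) {
		/* compiled out at build time when DEBUG is above
		 * RTE_LOG_DP_LEVEL, so no cost on the fast path */
		ENETFEC_DP_LOG(DEBUG, "SG not supported\n");
		/* ... rest of the branch as in the patch ... */
	}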
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v9 5/5] net/enetfec: add features
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-11-10 13:57 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-10 13:57 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
> This patch adds checksum and VLAN offloads in enetfec network
> poll mode driver.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
<...>
> @@ -191,7 +195,12 @@ enet_free_buffers(struct rte_eth_dev *dev)
> static int
> enetfec_eth_configure(struct rte_eth_dev *dev)
> {
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
Instead of first adding 'DEV_RX_OFFLOAD_KEEP_CRC' and later fixing it in this
patch, the correct macro should be added in the first place.
<...>
> @@ -86,6 +93,48 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
>
> rx_pkts[pkt_received] = mbuf;
> pkt_received++;
> +
> + /* Extract the enhanced buffer descriptor */
> + ebdp = NULL;
> + if (rxq->fep->bufdesc_ex)
> + ebdp = (struct bufdesc_ex *)bdp;
> +
> + /* If this is a VLAN packet remove the VLAN Tag */
> + vlan_packet_rcvd = false;
> + if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
Can you please use updated macro: RTE_ETH_RX_OFFLOAD_VLAN
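A sketch of the same check with the rte_ prefixed name (body trimmed, only the
macro changes):
	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN) &&
			rxq->fep->bufdesc_ex &&
			(rte_read32(&ebdp->bd_esc) &
			rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
		/* push and remove the VLAN tag as in the patch */
		vlan_packet_rcvd = true;
	}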
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v10 0/5] drivers/net: add NXP ENETFEC driver
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-10 13:53 ` Ferruh Yigit
@ 2021-11-13 4:31 ` Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 1/5] net/enetfec: introduce " Apeksha Gupta
` (4 more replies)
1 sibling, 5 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 4:31 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC (Fast Ethernet
Controller) is a network poll mode driver for the inbuilt NIC found in
the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 designs the UIO interface so that user space can communicate
directly with a UIO based hardware device. The UIO interface mmaps the
Control and Status Registers (CSR) and BD memory, which are allocated
in the kernel, into DPDK, giving access to non-cacheable memory for the
BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support, and also adds some basic
features like promiscuous mode enable and basic stats.
Patch 5 adds checksum and VLAN features.
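For reference, a minimal standalone sketch of the UIO mapping idea used in
patch 2 (the device path and sizes are placeholders; the real code reads them
from /sys/class/uio/uioX/maps/mapN and lives in enet_uio.c):
	#include <fcntl.h>
	#include <stddef.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <unistd.h>
	/* Map region 'map_id' of a UIO device. Per the UIO convention the
	 * mmap offset selects the region: offset = map_id * page size.
	 * For enetfec, map 0 is the CSR block and map 1 the BD memory. */
	static void *map_uio_region(const char *dev_path, int map_id, size_t size)
	{
		void *p;
		int fd = open(dev_path, O_RDWR); /* e.g. "/dev/uio0" (placeholder) */
		if (fd < 0)
			return NULL;
		p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, (off_t)map_id * sysconf(_SC_PAGESIZE));
		close(fd); /* the mapping stays valid after close */
		return p == MAP_FAILED ? NULL : p;
	}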
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add Rx/Tx support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 137 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 705 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 153 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 273 ++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 13 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 1807 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v10 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
@ 2021-11-13 4:31 ` Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 2/5] net/enetfec: add UIO support Apeksha Gupta
` (3 subsequent siblings)
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 4:31 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton for the enetfec driver with a probe function.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
MAINTAINERS | 7 ++
doc/guides/nics/enetfec.rst | 133 +++++++++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 82 +++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 18 ++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
drivers/net/enetfec/meson.build | 10 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 300 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e157e12f88..2aa81efe20 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -889,6 +889,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec - EXPERIMENTAL
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..6a86295e34
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,133 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into DPDK. The driver is marked as **experimental** because it
+depends on a Linux kernel module, 'fec-uio', which is not upstreamed
+yet.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC is a hardware
+programmable packet forwarding engine that provides a high performance
+Ethernet interface. It has a single 1 Gbps Ethernet interface with an
+RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ .. code-block:: console
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ | |
+ | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ ====================+=========+======================
+ Hardware
+ +-----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+-------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the
+hardware blocks. The ENETFEC PMD uses the standard UIO interface to
+access the kernel for PHY initialisation and for mapping the allocated
+register and buffer descriptor memory into DPDK, which gives access to
+non-cacheable memory for the buffer descriptors. net_enetfec is the
+logical Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device as a virtual device.
+- The RTE framework scans and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx the respective BD ring status bit is set, which is then used for
+ packet processing.
+- Tx is then done first, followed by Rx, through the logical interface.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+There are four main prerequisites for executing the ENETFEC PMD on an i.MX 8M Mini
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+.. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Root filesystem**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC depends on the
+ **rte_bus_vdev** library and it is mandatory to use the `--vdev` argument with value
+ `net_enetfec` to run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multi-queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..6e80805cfd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -221,6 +221,11 @@ New Features
* Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
* Added socket direct mode bonding support.
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
+
* **Updated Solarflare network PMD.**
Updated the Solarflare ``sfc_efx`` driver with changes including:
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..56bd199191
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_pmd_logs.h"
+#include "enet_ethdev.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Release enetfec sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..7e941da972
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+/*
+ * ENETFEC can support 1 Rx and 1 Tx queue.
+ */
+
+#define ENETFEC_MAX_Q 1
+
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+};
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..42ec41502b
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files(
+ 'enet_ethdev.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..04be346509 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -19,6 +19,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v10 2/5] net/enetfec: add UIO support
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-13 4:31 ` Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (2 subsequent siblings)
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 4:31 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
The fec-uio driver is implemented in the kernel. The enetfec PMD uses
the UIO interface to interact with the "fec-uio" driver implemented in
the kernel for PHY initialisation and for mapping the allocated
register and BD memory from the kernel into DPDK, which gives access
to non-cacheable memory for the BDs.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 209 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 31 ++++
drivers/net/enetfec/enet_regs.h | 106 +++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 696 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 56bd199191..8bed091efe 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,14 +13,192 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_pmd_logs.h"
#include "enet_ethdev.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS RTE_BIT32(1)
+#define ENETFEC_RACC_PRODIS RTE_BIT32(2)
+#define ENETFEC_RACC_SHIFT16 RTE_BIT32(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE RTE_BIT32(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_BD_QUEUES 6
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+ uint32_t val;
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= RTE_BIT32(6);
+ ecntl |= RTE_BIT32(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ fep->enetfec_e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+static void
+enetfec_disable(struct enetfec_private *fep)
+{
+ rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR)
+ & ~(fep->enetfec_e_cntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
+
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
return 0;
}
@@ -32,6 +210,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
@@ -43,6 +223,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_BD_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 7e941da972..4d671e6d45 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -5,14 +5,45 @@
#ifndef __ENETFEC_ETHDEV_H__
#define __ENETFEC_ETHDEV_H__
+#include <rte_ethdev.h>
+
+/* full duplex */
+#define FULL_DUPLEX 0x00
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+
/*
* ENETFEC can support 1 Rx and 1 Tx queue.
*/
#define ENETFEC_MAX_Q 1
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
struct enetfec_private {
struct rte_eth_dev *dev;
+ int full_duplex;
+ int flag_pause;
+ uint32_t quirks;
+ uint32_t cbus_size;
+ uint32_t enetfec_e_cntl;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ bool bufdesc_ex;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ void *hw_baseaddr_v;
+ void *bd_addr_v;
+ uint32_t hw_baseaddr_p;
+ uint32_t bd_addr_p;
+ uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint32_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
};
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
+ * RXC. For its implementation, ENETFEC uses synchronized clocks (250MHz) for
+ * generating delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..6539cbb354
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,284 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd <= 0)
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows maximum 5 different mapping for
+ each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address expressed in hexa(base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = (uint32_t)uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * As substring <uio> was found in <d_name>
+ * read number following <uio> substring in <d_name>
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret < 0)
+ ENETFEC_PMD_ERR("Error: not find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
+
+void
+enetfec_cleanup(struct enetfec_private *fep)
+{
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..fec8ba6f95
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(struct enetfec_private *fep);
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 42ec41502b..3fb0f73071 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -7,4 +7,5 @@ if not is_linux
endif
sources = files(
- 'enet_ethdev.c')
+ 'enet_ethdev.c',
+ 'enet_uio.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v10 3/5] net/enetfec: support queue configuration
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-11-13 4:31 ` Apeksha Gupta
2021-11-13 17:11 ` Stephen Hemminger
2021-11-13 4:31 ` [PATCH v10 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 4:31 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds Rx/Tx queue configuration setup operations.
On packet reception the respective BD Ring status bit is set
which is then used for packet processing.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 222 +++++++++++++++++++++++++++++-
drivers/net/enetfec/enet_ethdev.h | 77 +++++++++++
2 files changed, 298 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8bed091efe..095e835da9 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,11 @@
#define NUM_OF_BD_QUEUES 6
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -186,10 +191,225 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_pktlen = ENETFEC_MAX_RX_PKT_LEN;
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("%p:Rx deferred start not supported",
+ (void *)dev);
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return -ENOMEM;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 4d671e6d45..27e124c339 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,9 +10,13 @@
/* full duplex */
#define FULL_DUPLEX 0x00
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
#define PKT_MAX_BUF_SIZE 1984
#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ENETFEC_MAX_RX_PKT_LEN 3000
+#define __iomem
/*
* ENETFEC can support 1 rx and tx queue..
*/
@@ -22,6 +26,49 @@
#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
#define readl(p) rte_read32(p)
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
struct enetfec_private {
struct rte_eth_dev *dev;
int full_duplex;
@@ -31,6 +78,8 @@ struct enetfec_private {
uint32_t enetfec_e_cntl;
uint16_t max_rx_queues;
uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
unsigned int reg_size;
unsigned int bd_size;
bool bufdesc_ex;
@@ -44,6 +93,34 @@ struct enetfec_private {
uint32_t bd_addr_p_t[ENETFEC_MAX_Q];
void *dma_baseaddr_r[ENETFEC_MAX_Q];
void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
};
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
#endif /*__ENETFEC_ETHDEV_H__*/
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v10 4/5] net/enetfec: add Rx/Tx support
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
` (2 preceding siblings ...)
2021-11-13 4:31 ` [PATCH v10 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-13 4:31 ` Apeksha Gupta
2021-11-13 17:10 ` Stephen Hemminger
2021-11-13 4:31 ` [PATCH v10 5/5] net/enetfec: add features Apeksha Gupta
4 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 4:31 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. It also adds basic features such as promiscuous mode and basic stats.
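As context (not part of this patch), a minimal sketch of how the new burst
functions are reached through the generic ethdev API; port 0, queue 0 and a
burst size of 32 are illustrative assumptions:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch only: receive one burst and transmit it back out. */
static void
forward_one_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, nb_tx, i;

	/* Ends up in enetfec_recv_pkts() for an enetfec port */
	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);

	/* Ends up in enetfec_xmit_pkts() for an enetfec port */
	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

	/* Drop whatever the Tx ring could not accept */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}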
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 184 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 25 +++
drivers/net/enetfec/enet_rxtx.c | 220 +++++++++++++++++++++++++++
drivers/net/enetfec/meson.build | 4 +-
6 files changed, 436 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 6a86295e34..209073e77c 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..3d8aa5b627 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Basic stats = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 095e835da9..2da6b79f5f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -39,6 +39,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_BD_QUEUES 6
/* Supported Rx offloads */
@@ -152,6 +154,40 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
@@ -165,6 +201,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -191,6 +229,101 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+ stats->rx_nombuf = eth_stats->rx_nombuf;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -202,6 +335,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->tx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -407,6 +552,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -432,6 +583,9 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr = {
+ .addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
+ };
name = rte_vdev_device_name(vdev);
ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
@@ -472,6 +626,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ RTE_ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -480,6 +649,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -487,6 +658,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -494,11 +667,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Release enetfec sw device");
+ enetfec_cleanup(fep);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 27e124c339..06a6c10600 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -7,6 +7,10 @@
#include <rte_ethdev.h>
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+
/* full duplex */
#define FULL_DUPLEX 0x00
@@ -17,6 +21,21 @@
#define ENETFEC_MAX_RX_PKT_LEN 3000
#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
/*
* ENETFEC can support 1 rx and tx queue..
*/
@@ -71,6 +90,7 @@ struct enetfec_priv_rx_q {
struct enetfec_private {
struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
int full_duplex;
int flag_pause;
uint32_t quirks;
@@ -123,4 +143,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
}
+uint16_t enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..e61a217dcb
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
+ * When update through the ring, just set the empty indicator.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->rx_nombuf++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_DP_LOG(DEBUG, "rx_fifo_error");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_DP_LOG(DEBUG, "rx_length_error");
+ if (status & RX_BD_LAST)
+ ENETFEC_DP_LOG(DEBUG, "rcv is not +last");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_DP_LOG(DEBUG, "rx_crc_errors");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_DP_LOG(DEBUG, "rx_frame_error");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ uint8_t *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_DP_LOG(DEBUG, "SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return pkt_transmitted;
+}
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 3fb0f73071..551cd5358c 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -8,4 +8,6 @@ endif
sources = files(
'enet_ethdev.c',
- 'enet_uio.c')
+ 'enet_uio.c',
+ 'enet_rxtx.c'
+)
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v10 5/5] net/enetfec: add features
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
` (3 preceding siblings ...)
2021-11-13 4:31 ` [PATCH v10 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-13 4:31 ` Apeksha Gupta
4 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 4:31 UTC (permalink / raw)
To: ferruh.yigit, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offloads to the enetfec network
poll mode driver.
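As a usage note (not part of this patch), a minimal sketch of requesting these
offloads at configure time; port 0 and the single queue pair are assumptions:

#include <string.h>
#include <rte_ethdev.h>

/* Sketch only: ask for Rx checksum and VLAN offloads before starting. */
static int
configure_with_offloads(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			RTE_ETH_RX_OFFLOAD_VLAN;

	/* ENETFEC supports one Rx and one Tx queue */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}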
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 12 +++++-
drivers/net/enetfec/enet_ethdev.h | 2 +
drivers/net/enetfec/enet_regs.h | 10 +++++
drivers/net/enetfec/enet_rxtx.c | 55 +++++++++++++++++++++++++++-
6 files changed, 82 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 209073e77c..4014cffde9 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -86,6 +86,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 3d8aa5b627..2a34351b43 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -5,6 +5,9 @@
;
[Features]
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Basic stats = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 2da6b79f5f..dce3b4544e 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -79,7 +79,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -191,6 +195,11 @@ enet_free_buffers(struct rte_eth_dev *dev)
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+
if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
@@ -571,6 +580,7 @@ enetfec_eth_init(struct rte_eth_dev *dev)
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 06a6c10600..798b6eee05 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,6 +10,7 @@
#define BD_LEN 49152
#define ENETFEC_TX_FR_SIZE 2048
#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
/* full duplex */
#define FULL_DUPLEX 0x00
@@ -93,6 +94,7 @@ struct enetfec_private {
struct rte_eth_stats stats;
int full_duplex;
int flag_pause;
+ int flag_csum;
uint32_t quirks;
uint32_t cbus_size;
uint32_t enetfec_e_cntl;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index e61a217dcb..8066b1ef07 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -5,6 +5,7 @@
#include <signal.h>
#include <rte_mbuf.h>
#include <rte_io.h>
+#include <ethdev_driver.h>
#include "enet_regs.h"
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
@@ -22,9 +23,14 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
@@ -77,6 +83,7 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -86,6 +93,48 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, RTE_ETHER_ADDR_LEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* checksum already verified by hardware, don't check it again */
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ } else {
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED
+ | RTE_MBUF_F_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -186,6 +235,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == RTE_MBUF_F_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* RE: [EXT] Re: [PATCH v9 3/5] net/enetfec: support queue configuration
2021-11-10 13:54 ` Ferruh Yigit
@ 2021-11-13 5:00 ` Apeksha Gupta
2021-11-15 10:06 ` Ferruh Yigit
1 sibling, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-13 5:00 UTC (permalink / raw)
To: Ferruh Yigit, david.marchand, andrew.rybchenko
Cc: dev, Sachin Saxena, Hemant Agrawal
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, November 10, 2021 7:25 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>;
> david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant
> Agrawal <hemant.agrawal@nxp.com>
> Subject: [EXT] Re: [PATCH v9 3/5] net/enetfec: support queue configuration
>
> Caution: EXT Email
>
> On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
> > This patch adds Rx/Tx queue configuration setup operations.
> > On packet reception the respective BD Ring status bit is set
> > which is then used for packet processing.
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
> > +
> > + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
>
> Isn't 'fep->bd_addr_p_t[]' a 64-bit value?
>
> <...>
>
> > +
> > + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
>
> Isn't 'fep->bd_addr_p_r[]' a 64-bit address, why doing endianness operation
> only on 32-bit and writing only 32-bit of it to register?
[Apeksha] The FEC supports 32-bit addresses only, so the Tx/Rx descriptor addresses must lie within the 32-bit address range.
Our hardware expects 32-bit addresses, and the kernel UIO driver ensures that it provides addresses within that range.
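For illustration only (this is not in the patch), a defensive helper along the
lines below could document that constraint in the code; it assumes the driver's
own headers and a 64-bit carrier for the physical address:

/* Hypothetical sketch: program a BD ring base only if it fits the
 * 32-bit register the FEC expects.
 */
static int
enetfec_write_bd_base(struct enetfec_private *fep, uint64_t bd_phys,
		uint32_t reg_off)
{
	if (bd_phys > UINT32_MAX) {
		ENETFEC_PMD_ERR("BD base does not fit in 32 bits");
		return -EINVAL;
	}
	rte_write32(rte_cpu_to_le_32((uint32_t)bd_phys),
			(uint8_t *)fep->hw_baseaddr_v + reg_off);
	return 0;
}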
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v10 4/5] net/enetfec: add Rx/Tx support
2021-11-13 4:31 ` [PATCH v10 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-13 17:10 ` Stephen Hemminger
0 siblings, 0 replies; 91+ messages in thread
From: Stephen Hemminger @ 2021-11-13 17:10 UTC (permalink / raw)
To: Apeksha Gupta
Cc: ferruh.yigit, david.marchand, andrew.rybchenko, dev,
sachin.saxena, hemant.agrawal
On Sat, 13 Nov 2021 10:01:40 +0530
Apeksha Gupta <apeksha.gupta@nxp.com> wrote:
> + if (mbuf)
> + rte_pktmbuf_free(mbuf);
Null check is unnecessary. rte_pktmbuf_free(NULL) is defined
to be safe, like free(NULL).
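For clarity, the Rx ring cleanup loop from the patch could then be written as
the fragment below (sketch only, reusing the driver's rxq context):

	for (i = 0; i < rxq->bd.ring_size; i++) {
		/* NULL-safe, so no explicit check is needed */
		rte_pktmbuf_free(rxq->rx_mbuf[i]);
		rxq->rx_mbuf[i] = NULL;
	}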
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v10 3/5] net/enetfec: support queue configuration
2021-11-13 4:31 ` [PATCH v10 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-13 17:11 ` Stephen Hemminger
0 siblings, 0 replies; 91+ messages in thread
From: Stephen Hemminger @ 2021-11-13 17:11 UTC (permalink / raw)
To: Apeksha Gupta
Cc: ferruh.yigit, david.marchand, andrew.rybchenko, dev,
sachin.saxena, hemant.agrawal
On Sat, 13 Nov 2021 10:01:39 +0530
Apeksha Gupta <apeksha.gupta@nxp.com> wrote:
> + /* Tx deferred start is not supported */
> + if (tx_conf->tx_deferred_start) {
> + ENETFEC_PMD_ERR("%p:Tx deferred start not supported",
> + (void *)dev);
Small observations:
1. The cast to void * is unnecessary in a print/log statement.
2. Why not print something human-friendly, like the device name? (See the sketch below.)
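For example (sketch only; it assumes the name stored in the ethdev data is the
friendliest identifier available at this point):

	/* Log the port name instead of casting the device pointer */
	ENETFEC_PMD_ERR("%s: Tx deferred start not supported",
			dev->data->name);
	return -EINVAL;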
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver
2021-11-13 4:31 ` [PATCH v10 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-15 7:19 ` Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 1/5] net/enetfec: introduce " Apeksha Gupta
` (6 more replies)
0 siblings, 7 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-15 7:19 UTC (permalink / raw)
To: stephen, ferruh.yigit
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena,
hemant.agrawal, Apeksha Gupta
This patch series introduces the enetfec driver. ENETFEC (Fast Ethernet
Controller) is a network poll mode driver for the inbuilt NIC found in
the NXP i.MX 8M Mini SoC.
An overview of the enetfec driver with probe and remove is in patch 1.
Patch 2 designs the UIO interface so that user space can communicate directly
with a UIO based hardware device. The UIO interface mmaps the Control and
Status Registers (CSR) & BD memory, which is allocated in the kernel, into
DPDK; this gives access to non-cacheable memory for the BDs.
Patch 3 adds the Rx/Tx queue configuration setup operations.
Patch 4 adds enqueue and dequeue support, along with some basic features
like promiscuous mode and basic stats.
Patch 5 adds checksum and VLAN features.
Apeksha Gupta (5):
net/enetfec: introduce NXP ENETFEC driver
net/enetfec: add UIO support
net/enetfec: support queue configuration
net/enetfec: add Rx/Tx support
net/enetfec: add features
MAINTAINERS | 7 +
doc/guides/nics/enetfec.rst | 137 +++++
doc/guides/nics/features/enetfec.ini | 14 +
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 701 +++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 153 ++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++
drivers/net/enetfec/enet_regs.h | 116 ++++
drivers/net/enetfec/enet_rxtx.c | 273 ++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++
drivers/net/enetfec/meson.build | 13 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
15 files changed, 1803 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v11 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
@ 2021-11-15 7:19 ` Apeksha Gupta
2021-11-15 10:07 ` Ferruh Yigit
2023-03-21 18:03 ` Ferruh Yigit
2021-11-15 7:19 ` [PATCH v11 2/5] net/enetfec: add UIO support Apeksha Gupta
` (5 subsequent siblings)
6 siblings, 2 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-15 7:19 UTC (permalink / raw)
To: stephen, ferruh.yigit
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena,
hemant.agrawal, Apeksha Gupta
ENETFEC (Fast Ethernet Controller) is a network poll mode driver
for the NXP i.MX 8M Mini SoC.
This patch adds the skeleton of the enetfec driver with the probe function.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
MAINTAINERS | 7 ++
doc/guides/nics/enetfec.rst | 133 +++++++++++++++++++++++++
doc/guides/nics/features/enetfec.ini | 9 ++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_21_11.rst | 5 +
drivers/net/enetfec/enet_ethdev.c | 82 +++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 18 ++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++
drivers/net/enetfec/meson.build | 10 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
11 files changed, 300 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e157e12f88..2aa81efe20 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -889,6 +889,13 @@ F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
+NXP enetfec - EXPERIMENTAL
+M: Apeksha Gupta <apeksha.gupta@nxp.com>
+M: Sachin Saxena <sachin.saxena@nxp.com>
+F: drivers/net/enetfec/
+F: doc/guides/nics/enetfec.rst
+F: doc/guides/nics/features/enetfec.ini
+
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 0000000000..6a86295e34
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,133 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into DPDK. The driver is marked **experimental** as it
+depends on a Linux kernel module 'enetfec-uio', which is not upstreamed
+yet.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. The ENETFEC is a hardware
+programmable packet forwarding engine that provides a high performance
+Ethernet interface. It has a single 1 Gb Ethernet interface with an
+RJ45 connector.
+
+The diagram below shows a system level overview of ENETFEC:
+
+ .. code-block:: console
+
+ =====================================================
+ Userspace
+ +-----------------------------------------+
+ | ENETFEC Driver |
+ | +-------------------------+ |
+ | | virtual ethernet device | |
+ +-----------------------------------------+
+ ^ |
+ | |
+ | |
+ RXQ | | TXQ
+ | |
+ | v
+ =====================================================
+ Kernel Space
+ +---------+
+ | fec-uio |
+ ====================+=========+======================
+ Hardware
+ +-----------------------------------------+
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+-------------------+
+ | PHY |
+ +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in
+userspace. 'fec-uio' is the kernel driver. The MAC and PHY are the hardware
+blocks. The ENETFEC PMD uses the standard UIO interface to access the kernel
+for PHY initialisation and for mapping the allocated register & buffer
+descriptor memory into DPDK, which gives access to non-cacheable memory for
+the buffer descriptors. net_enetfec is the logical Ethernet interface
+created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device on the virtual device (vdev) bus.
+- The RTE framework scans and invokes the probe function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up the BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then used for
+ packet processing.
+- Tx is done first, followed by Rx, via the logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- Linux
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+The following are the main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
+
+.. note::
+
+ Branch is 'lf-5.10.y'
+
+3. **Rootfile system**
+
+ Any *aarch64* supporting filesystem can be used. For example, an
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The Ethernet device will be registered as a virtual device, so ENETFEC depends on
+ the **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
+ run a DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **dpdk-testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 0000000000..bdfbdbd9d4
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 784d5d39f6..777fdab4a0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 01923e2deb..6e80805cfd 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -221,6 +221,11 @@ New Features
* Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
* Added socket direct mode bonding support.
+* **Added NXP ENETFEC PMD.**
+
+ Added the new ENETFEC driver for the NXP IMX8MMEVK platform. See the
+ :doc:`../nics/enetfec` NIC driver guide for more details on this new driver.
+
* **Updated Solarflare network PMD.**
Updated the Solarflare ``sfc_efx`` driver with changes including:
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 0000000000..56bd199191
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_pmd_logs.h"
+#include "enet_ethdev.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc;
+
+ name = rte_vdev_device_name(vdev);
+ ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+
+ return 0;
+
+failed_init:
+ ENETFEC_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ int ret;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (eth_dev == NULL)
+ return -ENODEV;
+
+ ret = rte_eth_dev_release_port(eth_dev);
+ if (ret != 0)
+ return -EINVAL;
+
+ ENETFEC_PMD_INFO("Release enetfec sw device");
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_LOG_REGISTER_DEFAULT(enetfec_logtype_pmd, NOTICE);
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 0000000000..7e941da972
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef __ENETFEC_ETHDEV_H__
+#define __ENETFEC_ETHDEV_H__
+
+/*
+ * ENETFEC can support 1 rx and tx queue..
+ */
+
+#define ENETFEC_MAX_Q 1
+
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+};
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+ ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+ ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+ ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+ ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENETFEC_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 0000000000..42ec41502b
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+sources = files(
+ 'enet_ethdev.c')
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 0000000000..b66517b171
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bcf488f203..04be346509 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -19,6 +19,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v11 2/5] net/enetfec: add UIO support
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-15 7:19 ` Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 3/5] net/enetfec: support queue configuration Apeksha Gupta
` (4 subsequent siblings)
6 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-15 7:19 UTC (permalink / raw)
To: stephen, ferruh.yigit
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena,
hemant.agrawal, Apeksha Gupta
The fec-uio driver is implemented in the kernel. The enetfec PMD uses
the UIO interface to interact with the "fec-uio" kernel driver for PHY
initialisation and for mapping the register & BD memory allocated in
the kernel into DPDK, which gives access to non-cacheable memory for
the BDs.
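For reviewers unfamiliar with UIO, a minimal sketch of the standard mapping
flow the PMD relies on (illustration only; the device node, map index and the
way sizes are read from sysfs are assumptions, and error handling is trimmed):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch only: mmap one UIO memory region into the process. */
static void *
map_uio_region(const char *uio_dev, int map_idx, size_t size)
{
	int fd = open(uio_dev, O_RDWR);
	void *addr;

	if (fd < 0)
		return NULL;

	/* Per UIO convention, region N is selected by offset N * page size */
	addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			fd, (off_t)map_idx * getpagesize());
	close(fd);

	return addr == MAP_FAILED ? NULL : addr;
}

/* e.g. map region 0 for the CSR window and region 1 for the BD memory,
 * with the sizes taken from /sys/class/uio/uioX/maps/mapN/size.
 */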
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 209 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 31 ++++
drivers/net/enetfec/enet_regs.h | 106 +++++++++++
drivers/net/enetfec/enet_uio.c | 284 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 64 +++++++
drivers/net/enetfec/meson.build | 3 +-
6 files changed, 696 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 56bd199191..8bed091efe 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -13,14 +13,192 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_pmd_logs.h"
#include "enet_ethdev.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
+/* FEC receive acceleration */
+#define ENETFEC_RACC_IPDIS RTE_BIT32(1)
+#define ENETFEC_RACC_PRODIS RTE_BIT32(2)
+#define ENETFEC_RACC_SHIFT16 RTE_BIT32(7)
+#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \
+ ENETFEC_RACC_PRODIS)
+
+#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1
+#define ENETFEC_PAUSE_FLAG_ENABLE 0x2
+
+/* Pause frame field and FIFO threshold */
+#define ENETFEC_FCE RTE_BIT32(5)
+#define ENETFEC_RSEM_V 0x84
+#define ENETFEC_RSFL_V 16
+#define ENETFEC_RAEM_V 0x8
+#define ENETFEC_RAFL_V 0x8
+#define ENETFEC_OPD_V 0xFFF0
+
+#define NUM_OF_BD_QUEUES 6
+
+/*
+ * This function is called to start or restart the ENETFEC during a link
+ * change, transmit timeout, or to reconfigure the ENETFEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENETFEC_ETHEREN;
+ uint32_t val;
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(rte_cpu_to_le_32(0x04),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ /* align IP header */
+ val |= ENETFEC_RACC_SHIFT16;
+ val &= ~ENETFEC_RACC_OPTIONS;
+ rte_write32(rte_cpu_to_le_32(val),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= RTE_BIT32(6);
+ ecntl |= RTE_BIT32(5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENETFEC_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM);
+ rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_OPD);
+ } else {
+ rcntl &= ~ENETFEC_FCE;
+ }
+
+ rte_write32(rte_cpu_to_le_32(rcntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IAUR);
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) {
+ /* enable ENETFEC endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENETFEC store and forward mode */
+ rte_write32(rte_cpu_to_le_32(1 << 8),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TFWR);
+ }
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENETFEC_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENETFEC_RXC_DLY;
+ /* Enable the MIB statistic event counters */
+ rte_write32(0, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MIBC);
+
+ ecntl |= 0x70000000;
+ fep->enetfec_e_cntl = ecntl;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(rte_cpu_to_le_32(ecntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_configure(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
+
+ return 0;
+}
+
+static int
+enetfec_eth_start(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+
+/* ENETFEC disable function.
+ * @param[in] base ENETFEC base address
+ */
+static void
+enetfec_disable(struct enetfec_private *fep)
+{
+ rte_write32(rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR)
+ & ~(fep->enetfec_e_cntl),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+}
+
+static int
+enetfec_eth_stop(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ dev->data->dev_started = 0;
+ enetfec_disable(fep);
+
+ return 0;
+}
+
+static const struct eth_dev_ops enetfec_ops = {
+ .dev_configure = enetfec_eth_configure,
+ .dev_start = enetfec_eth_start,
+ .dev_stop = enetfec_eth_stop
+};
+
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ fep->full_duplex = FULL_DUPLEX;
+ dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
return 0;
}
@@ -32,6 +210,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
@@ -43,6 +223,35 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENETFEC_MAX_Q;
+ fep->max_tx_queues = ENETFEC_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT
+ | QUIRK_RACC;
+
+ rc = enetfec_configure();
+ if (rc != 0)
+ return -ENOMEM;
+ rc = config_enetfec_uio(fep);
+ if (rc != 0)
+ return -ENOMEM;
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / NUM_OF_BD_QUEUES;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = (uint8_t *)fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 7e941da972..4d671e6d45 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -5,14 +5,45 @@
#ifndef __ENETFEC_ETHDEV_H__
#define __ENETFEC_ETHDEV_H__
+#include <rte_ethdev.h>
+
+/* full duplex */
+#define FULL_DUPLEX 0x00
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+
/*
* ENETFEC can support 1 rx and tx queue..
*/
#define ENETFEC_MAX_Q 1
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
struct enetfec_private {
struct rte_eth_dev *dev;
+ int full_duplex;
+ int flag_pause;
+ uint32_t quirks;
+ uint32_t cbus_size;
+ uint32_t enetfec_e_cntl;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ bool bufdesc_ex;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ void *hw_baseaddr_v;
+ void *bd_addr_v;
+ uint32_t hw_baseaddr_p;
+ uint32_t bd_addr_p;
+ uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
+ uint32_t bd_addr_p_t[ENETFEC_MAX_Q];
+ void *dma_baseaddr_r[ENETFEC_MAX_Q];
+ void *dma_baseaddr_t[ENETFEC_MAX_Q];
};
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 0000000000..5415ed77ea
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#ifndef __ENETFEC_REGS_H
+#define __ENETFEC_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */
+#define RX_BD_INT 0x00800000
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+
+/*
+ * 0 The next BD in consecutive location
+ * 1 The next BD in ENETFECn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of buffer descriptor */
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+#define TX_BD_WRAP ((ushort)0x2000)
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+
+#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_RD_START_2 : ENETFEC_RD_START_0))
+#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \
+ (((X) == 2) ? \
+ ENETFEC_TD_START_2 : ENETFEC_TD_START_0))
+#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0))
+
+#define ENETFEC_ETHEREN ((uint)0x00000002)
+#define ENETFEC_TXC_DLY ((uint)0x00010000)
+#define ENETFEC_RXC_DLY ((uint)0x00020000)
+
+/* ENETFEC MAC is in controller */
+#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* The i.MX8 ENETFEC IP version adds the ability to generate a delayed TXC or
+ * RXC. For this, ENETFEC uses a synchronized 250MHz clock to generate a
+ * delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENETFEC_EIR 0x004 /* Interrupt event register */
+#define ENETFEC_EIMR 0x008 /* Interrupt mask register */
+#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENETFEC_ECR 0x024 /* Ethernet control register */
+#define ENETFEC_MSCR 0x044 /* MII speed control register */
+#define ENETFEC_MIBC 0x064 /* MIB control and status register */
+#define ENETFEC_RCR 0x084 /* Receive control register */
+#define ENETFEC_TCR 0x0c4 /* Transmit Control register */
+#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */
+#define ENETFEC_IALR 0x11c /* hash table 32 bits low */
+#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */
+#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */
+#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */
+#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */
+#define ENETFEC_RD_START_2 0x16c /* Receive descriptor ring2 start reg */
+#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */
+#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */
+#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */
+#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/
+#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */
+
+#endif /*__ENETFEC_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 0000000000..6539cbb354
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,284 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+static int enetfec_count;
+
+/** @brief Checks if a file name contains a certain substring.
+ * This function assumes a filename format of: [text][number].
+ * @param [in] filename File name
+ * @param [in] match String to match in file name
+ *
+ * @retval true if file name matches the criteria
+ * @retval false if file name does not match the criteria
+ */
+static bool
+file_name_match_extract(const char filename[], const char match[])
+{
+ char *substr = NULL;
+
+ substr = strstr(filename, match);
+ if (substr == NULL)
+ return false;
+
+ return true;
+}
+
+/*
+ * @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for success
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd <= 0)
+ ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name);
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ if (ret <= 0) {
+ ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name);
+ return ret;
+ }
+ close(fd);
+
+ /* NULL-ify string */
+ line[ret] = '\0';
+
+ return 0;
+}
+
+/*
+ * @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO map id. UIO allows a maximum of 5 different
+ * mappings for each device; maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ *
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
+ char uio_map_p_addr_str[32];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret < 0) {
+ ENETFEC_PMD_ERR("file_read_first_line() failed");
+ return NULL;
+ }
+ /* Read mapping size and physical address expressed in hexadecimal (base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENETFEC_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return NULL;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (enetfec_count > 0) {
+ ENETFEC_PMD_INFO("Mapped!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ return -1;
+ }
+
+ ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->hw_baseaddr_v == NULL)
+ return -ENOMEM;
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ if (fep->bd_addr_v == NULL)
+ return -ENOMEM;
+ fep->bd_addr_p = (uint32_t)uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ enetfec_count++;
+
+ return 0;
+}
+
+int
+enetfec_configure(void)
+{
+ char uio_name[32];
+ int uio_minor_number = -1;
+ int ret;
+ DIR *d = NULL;
+ struct dirent *dir;
+
+ d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
+ if (d == NULL) {
+ ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
+ return -1;
+ }
+
+ /* Iterate through all subdirs */
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 1) ||
+ !strncmp(dir->d_name, "..", 2))
+ continue;
+
+ if (file_name_match_extract(dir->d_name, "uio")) {
+ /*
+ * The substring <uio> was found in <d_name>, so read the
+ * number that follows the <uio> substring in <d_name>.
+ */
+ ret = sscanf(dir->d_name + strlen("uio"), "%d",
+ &uio_minor_number);
+ if (ret < 0)
+ ENETFEC_PMD_ERR("Error: not find minor number\n");
+ /*
+ * Open file uioX/name and read first line which
+ * contains the name for the device. Based on the
+ * name check if this UIO device is for enetfec.
+ */
+ memset(uio_name, 0, sizeof(uio_name));
+ ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
+ dir->d_name, "name", uio_name);
+ if (ret != 0) {
+ ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ closedir(d);
+ return -1;
+ }
+
+ if (file_name_match_extract(uio_name,
+ FEC_UIO_DEVICE_NAME)) {
+ enetfec_uio_job.uio_minor_number =
+ uio_minor_number;
+ ENETFEC_PMD_INFO("enetfec device uio name: %s",
+ uio_name);
+ }
+ }
+ }
+ closedir(d);
+ return 0;
+}
+
+void
+enetfec_cleanup(struct enetfec_private *fep)
+{
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 0000000000..fec8ba6f95
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+/*
+ * Name of UIO device. User space FEC will have a corresponding
+ * UIO device.
+ * Maximum length is #FEC_UIO_MAX_DEVICE_NAME_LENGTH.
+ *
+ * @note Must be kept in sync with FEC kernel driver
+ * define #FEC_UIO_DEVICE_NAME !
+ */
+#define FEC_UIO_DEVICE_NAME "imx-fec-uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int enetfec_configure(void);
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(struct enetfec_private *fep);
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 42ec41502b..3fb0f73071 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -7,4 +7,5 @@ if not is_linux
endif
sources = files(
- 'enet_ethdev.c')
+ 'enet_ethdev.c',
+ 'enet_uio.c')
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v11 3/5] net/enetfec: support queue configuration
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 2/5] net/enetfec: add UIO support Apeksha Gupta
@ 2021-11-15 7:19 ` Apeksha Gupta
2021-11-15 10:11 ` Ferruh Yigit
2021-11-15 7:19 ` [PATCH v11 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
` (3 subsequent siblings)
6 siblings, 1 reply; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-15 7:19 UTC (permalink / raw)
To: stephen, ferruh.yigit
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena,
hemant.agrawal, Apeksha Gupta
This patch adds the Rx/Tx queue configuration setup operations.
On packet reception, the respective BD ring status bit is set,
which is then used for packet processing.
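Below is a minimal, illustrative sketch (not part of this patch) of how an
application reaches these queue setup operations through the standard ethdev
API; the port id, descriptor count and mempool are placeholders:

	#include <rte_ethdev.h>
	#include <rte_mempool.h>

	/* Assumes port_id refers to the probed net_enetfec vdev and
	 * mbuf_pool is an already created pktmbuf pool.
	 */
	static int
	setup_enetfec_port(uint16_t port_id, struct rte_mempool *mbuf_pool)
	{
		struct rte_eth_conf conf = { 0 };
		int ret;

		/* ENETFEC exposes a single Rx and Tx queue pair */
		ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
		if (ret < 0)
			return ret;

		/* nb_desc is capped by the PMD at MAX_RX/TX_BD_RING_SIZE (512) */
		ret = rte_eth_rx_queue_setup(port_id, 0, 256,
				rte_eth_dev_socket_id(port_id), NULL, mbuf_pool);
		if (ret < 0)
			return ret;

		ret = rte_eth_tx_queue_setup(port_id, 0, 256,
				rte_eth_dev_socket_id(port_id), NULL);
		if (ret < 0)
			return ret;

		return rte_eth_dev_start(port_id);
	}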
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 220 +++++++++++++++++++++++++++++-
drivers/net/enetfec/enet_ethdev.h | 77 +++++++++++
2 files changed, 296 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 8bed091efe..0b8b73615d 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -41,6 +41,11 @@
#define NUM_OF_BD_QUEUES 6
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_VLAN;
+
/*
* This function is called to start or restart the ENETFEC during a link
* change, transmit timeout, or to reconfigure the ENETFEC. The network
@@ -186,10 +191,223 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_pktlen = ENETFEC_MAX_RX_PKT_LEN;
+ dev_info->max_rx_queues = ENETFEC_MAX_Q;
+ dev_info->max_tx_queues = ENETFEC_MAX_Q;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Tx deferred start is not supported */
+ if (tx_conf->tx_deferred_start) {
+ ENETFEC_PMD_ERR("Tx deferred start not supported");
+ return -EINVAL;
+ }
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (txq == NULL) {
+ ENETFEC_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.queue_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(0, &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id __rte_unused,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* Rx deferred start is not supported */
+ if (rx_conf->rx_deferred_start) {
+ ENETFEC_PMD_ERR("Rx deferred start not supported");
+ return -EINVAL;
+ }
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (rxq == NULL) {
+ ENETFEC_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
+ rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.queue_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc = (uint8_t *)fep->hw_baseaddr_v +
+ offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((uintptr_t)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((uintptr_t)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENETFEC_PMD_ERR("mbuf failed");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr) > 0)
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ if (rxq->rx_mbuf[i] != NULL) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ }
+ rte_free(rxq);
+ return errno;
+}
+
static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
- .dev_stop = enetfec_eth_stop
+ .dev_stop = enetfec_eth_stop,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup
};
static int
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 4d671e6d45..27e124c339 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,9 +10,13 @@
/* full duplex */
#define FULL_DUPLEX 0x00
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
#define PKT_MAX_BUF_SIZE 1984
#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ENETFEC_MAX_RX_PKT_LEN 3000
+#define __iomem
/*
* ENETFEC can support 1 rx and tx queue..
*/
@@ -22,6 +26,49 @@
#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
#define readl(p) rte_read32(p)
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int queue_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
struct enetfec_private {
struct rte_eth_dev *dev;
int full_duplex;
@@ -31,6 +78,8 @@ struct enetfec_private {
uint32_t enetfec_e_cntl;
uint16_t max_rx_queues;
uint16_t max_tx_queues;
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
unsigned int reg_size;
unsigned int bd_size;
bool bufdesc_ex;
@@ -44,6 +93,34 @@ struct enetfec_private {
uint32_t bd_addr_p_t[ENETFEC_MAX_Q];
void *dma_baseaddr_r[ENETFEC_MAX_Q];
void *dma_baseaddr_t[ENETFEC_MAX_Q];
+ struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q];
};
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((uintptr_t)bdp) + bd->d_size);
+}
+
+static inline int
+fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((uintptr_t)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
#endif /*__ENETFEC_ETHDEV_H__*/
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v11 4/5] net/enetfec: add Rx/Tx support
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
` (2 preceding siblings ...)
2021-11-15 7:19 ` [PATCH v11 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-15 7:19 ` Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 5/5] net/enetfec: add features Apeksha Gupta
` (2 subsequent siblings)
6 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-15 7:19 UTC (permalink / raw)
To: stephen, ferruh.yigit
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena,
hemant.agrawal, Apeksha Gupta
This patch adds burst enqueue and dequeue operations to the enetfec
PMD. It also adds basic features such as promiscuous mode and basic
statistics.
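As a rough usage sketch (not part of the patch), once the port is started the
new burst handlers are reached through the usual rte_eth_rx_burst() and
rte_eth_tx_burst() calls; the burst size and queue id below are placeholders:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	#define BURST_SIZE 32	/* illustrative burst size */

	static void
	forward_loop(uint16_t port_id)
	{
		struct rte_mbuf *pkts[BURST_SIZE];
		uint16_t nb_rx, nb_tx, i;

		for (;;) {
			/* ends up in enetfec_recv_pkts() for queue 0 */
			nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
			if (nb_rx == 0)
				continue;

			/* ends up in enetfec_xmit_pkts(); note the PMD does
			 * not support multi-segment mbufs
			 */
			nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

			/* free anything the Tx ring could not accept */
			for (i = nb_tx; i < nb_rx; i++)
				rte_pktmbuf_free(pkts[i]);
		}
	}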
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 2 +
drivers/net/enetfec/enet_ethdev.c | 182 ++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 25 +++
drivers/net/enetfec/enet_rxtx.c | 220 +++++++++++++++++++++++++++
drivers/net/enetfec/meson.build | 4 +-
6 files changed, 434 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 6a86295e34..209073e77c 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -84,6 +84,8 @@ driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index bdfbdbd9d4..3d8aa5b627 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,6 +4,8 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Promiscuous mode = Y
+Basic stats = Y
Linux = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 0b8b73615d..9ac7501043 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -39,6 +39,8 @@
#define ENETFEC_RAFL_V 0x8
#define ENETFEC_OPD_V 0xFFF0
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
#define NUM_OF_BD_QUEUES 6
/* Supported Rx offloads */
@@ -152,6 +154,38 @@ enetfec_restart(struct rte_eth_dev *dev)
rte_delay_us(10);
}
+static void
+enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
@@ -165,6 +199,8 @@ static int
enetfec_eth_start(struct rte_eth_dev *dev)
{
enetfec_restart(dev);
+ dev->rx_pkt_burst = &enetfec_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_xmit_pkts;
return 0;
}
@@ -191,6 +227,101 @@ enetfec_eth_stop(struct rte_eth_dev *dev)
return 0;
}
+static int
+enetfec_eth_close(struct rte_eth_dev *dev)
+{
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete __rte_unused)
+{
+ struct rte_eth_link link;
+ unsigned int lstatus = 1;
+
+ memset(&link, 0, sizeof(struct rte_eth_link));
+
+ link.link_status = lstatus;
+ link.link_speed = RTE_ETH_SPEED_NUM_1G;
+
+ ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ "Up");
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+enetfec_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(rte_cpu_to_le_32(tmp),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0xffffffff),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(rte_cpu_to_le_32(0x04400002),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GAUR);
+ rte_write32(rte_cpu_to_le_32(0x10800049),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ (uint8_t *)fep->hw_baseaddr_v + ENETFEC_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+ stats->rx_nombuf = eth_stats->rx_nombuf;
+
+ return 0;
+}
+
static int
enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -202,6 +333,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static void
+enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+}
+
static const unsigned short offset_des_active_rxq[] = {
ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2
};
@@ -405,6 +548,12 @@ static const struct eth_dev_ops enetfec_ops = {
.dev_configure = enetfec_eth_configure,
.dev_start = enetfec_eth_start,
.dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
+ .link_update = enetfec_eth_link_update,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup
@@ -430,6 +579,9 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
int rc;
int i;
unsigned int bdsize;
+ struct rte_ether_addr macaddr = {
+ .addr_bytes = { 0x1, 0x1, 0x1, 0x1, 0x1, 0x1 }
+ };
name = rte_vdev_device_name(vdev);
ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
@@ -470,6 +622,21 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ RTE_ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set default mac address
+ */
+ enetfec_set_mac_address(dev, &macaddr);
+
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
@@ -478,6 +645,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
failed_init:
ENETFEC_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -485,6 +654,8 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
int ret;
/* find the ethdev entry */
@@ -492,11 +663,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
if (eth_dev == NULL)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+ enetfec_eth_stop(eth_dev);
+
ret = rte_eth_dev_release_port(eth_dev);
if (ret != 0)
return -EINVAL;
ENETFEC_PMD_INFO("Release enetfec sw device");
+ enetfec_cleanup(fep);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 27e124c339..06a6c10600 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -7,6 +7,10 @@
#include <rte_ethdev.h>
+#define BD_LEN 49152
+#define ENETFEC_TX_FR_SIZE 2048
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+
/* full duplex */
#define FULL_DUPLEX 0x00
@@ -17,6 +21,21 @@
#define ENETFEC_MAX_RX_PKT_LEN 3000
#define __iomem
+#if defined(RTE_ARCH_ARM)
+#if defined(RTE_ARCH_64)
+#define dcbf(p) { asm volatile("dc cvac, %0" : : "r"(p) : "memory"); }
+#define dcbf_64(p) dcbf(p)
+
+#else /* RTE_ARCH_32 */
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
+#else
+#define dcbf(p) RTE_SET_USED(p)
+#define dcbf_64(p) dcbf(p)
+#endif
+
/*
* ENETFEC can support 1 rx and tx queue..
*/
@@ -71,6 +90,7 @@ struct enetfec_priv_rx_q {
struct enetfec_private {
struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
int full_duplex;
int flag_pause;
uint32_t quirks;
@@ -123,4 +143,9 @@ enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
}
+uint16_t enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..e61a217dcb
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+/* This function performs enetfec Rx queue processing: it dequeues packets
+ * from the Rx queue and, while walking the ring, marks processed
+ * descriptors empty again.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while ((status & RX_BD_EMPTY) == 0) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(new_mbuf == NULL)) {
+ stats->rx_nombuf++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENETFEC_DP_LOG(DEBUG, "rx_fifo_error");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENETFEC_DP_LOG(DEBUG, "rx_length_error");
+ if (status & RX_BD_LAST)
+ ENETFEC_DP_LOG(DEBUG, "rcv is not +last");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENETFEC_DP_LOG(DEBUG, "rx_crc_errors");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENETFEC_DP_LOG(DEBUG, "rx_frame_error");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ uint8_t *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENETFEC_DP_LOG(DEBUG, "SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the updates to bdp and tx_mbuf are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return nb_pkts;
+}
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 3fb0f73071..551cd5358c 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -8,4 +8,6 @@ endif
sources = files(
'enet_ethdev.c',
- 'enet_uio.c')
+ 'enet_uio.c',
+ 'enet_rxtx.c'
+)
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* [PATCH v11 5/5] net/enetfec: add features
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
` (3 preceding siblings ...)
2021-11-15 7:19 ` [PATCH v11 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
@ 2021-11-15 7:19 ` Apeksha Gupta
2021-11-15 9:44 ` [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
2021-11-15 15:05 ` Ferruh Yigit
6 siblings, 0 replies; 91+ messages in thread
From: Apeksha Gupta @ 2021-11-15 7:19 UTC (permalink / raw)
To: stephen, ferruh.yigit
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena,
hemant.agrawal, Apeksha Gupta
This patch adds checksum and VLAN offloads to the enetfec network
poll mode driver.
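A hedged sketch (not part of the patch) of how an application could request
these offloads at configure time and consume the resulting mbuf flags; the
helper name is a placeholder:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Request Rx checksum and VLAN offloads before rte_eth_dev_configure() */
	struct rte_eth_conf conf = {
		.rxmode = {
			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
				    RTE_ETH_RX_OFFLOAD_VLAN,
		},
	};

	/* After rte_eth_rx_burst(), per-packet results are reported in ol_flags */
	static inline void
	handle_rx_mbuf(struct rte_mbuf *m)
	{
		if (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_BAD) {
			/* hardware reported a bad IP checksum */
			rte_pktmbuf_free(m);
			return;
		}
		if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED) {
			/* VLAN tag was stripped; TCI is available in vlan_tci */
			uint16_t tci = m->vlan_tci;
			(void)tci;
		}
		/* ... normal packet processing ... */
	}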
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/enetfec.rst | 2 +
doc/guides/nics/features/enetfec.ini | 3 ++
drivers/net/enetfec/enet_ethdev.c | 12 +++++-
drivers/net/enetfec/enet_ethdev.h | 2 +
drivers/net/enetfec/enet_regs.h | 10 +++++
drivers/net/enetfec/enet_rxtx.c | 55 +++++++++++++++++++++++++++-
6 files changed, 82 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 209073e77c..4014cffde9 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -86,6 +86,8 @@ ENETFEC Features
- Basic stats
- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- Linux
- ARMv8
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 3d8aa5b627..2a34351b43 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -5,6 +5,9 @@
;
[Features]
Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Basic stats = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 9ac7501043..f23bdcc4f1 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -79,7 +79,11 @@ enetfec_restart(struct rte_eth_dev *dev)
val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
/* align IP header */
val |= ENETFEC_RACC_SHIFT16;
- val &= ~ENETFEC_RACC_OPTIONS;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENETFEC_RACC_OPTIONS;
+ else
+ val &= ~ENETFEC_RACC_OPTIONS;
rte_write32(rte_cpu_to_le_32(val),
(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -189,6 +193,11 @@ enet_free_buffers(struct rte_eth_dev *dev)
static int
enetfec_eth_configure(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+
if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload");
@@ -567,6 +576,7 @@ enetfec_eth_init(struct rte_eth_dev *dev)
fep->full_duplex = FULL_DUPLEX;
dev->dev_ops = &enetfec_ops;
rte_eth_dev_probing_finish(dev);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
index 06a6c10600..798b6eee05 100644
--- a/drivers/net/enetfec/enet_ethdev.h
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -10,6 +10,7 @@
#define BD_LEN 49152
#define ENETFEC_TX_FR_SIZE 2048
#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
/* full duplex */
#define FULL_DUPLEX 0x00
@@ -93,6 +94,7 @@ struct enetfec_private {
struct rte_eth_stats stats;
int full_duplex;
int flag_pause;
+ int flag_csum;
uint32_t quirks;
uint32_t cbus_size;
uint32_t enetfec_e_cntl;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN 0x00000004
+
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+
/* Ethernet transmit use control and status of buffer descriptor */
#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
#define QUIRK_HAS_ENETFEC_MAC (1 << 0)
/* GBIT supported in controller */
#define QUIRK_GBIT (1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN (1 << 6)
/* RACC register supported by controller */
#define QUIRK_RACC (1 << 12)
/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index e61a217dcb..8066b1ef07 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -5,6 +5,7 @@
#include <signal.h>
#include <rte_mbuf.h>
#include <rte_io.h>
+#include <ethdev_driver.h>
#include "enet_regs.h"
#include "enet_ethdev.h"
#include "enet_pmd_logs.h"
@@ -22,9 +23,14 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
unsigned short status;
unsigned short pkt_len;
int pkt_received = 0, index = 0;
- void *data;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
pool = rxq->pool;
bdp = rxq->bd.cur;
@@ -77,6 +83,7 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
mbuf = rxq->rx_mbuf[index];
data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
rte_prefetch0(data);
rte_pktmbuf_append((struct rte_mbuf *)mbuf,
pkt_len - 4);
@@ -86,6 +93,48 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
rx_pkts[pkt_received] = mbuf;
pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)
+ ((uint8_t *)data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+ data, RTE_ETHER_ADDR_LEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if ((rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+ /* don't check it */
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ } else {
+ mbuf->ol_flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED
+ | RTE_MBUF_F_RX_VLAN;
+ }
+
rxq->rx_mbuf[index] = new_mbuf;
rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
&bdp->bd_bufaddr);
@@ -186,6 +235,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
if (txq->fep->bufdesc_ex) {
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == RTE_MBUF_F_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
rte_write32(0, &ebdp->bd_bdu);
rte_write32(rte_cpu_to_le_32(estatus),
&ebdp->bd_esc);
--
2.17.1
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
` (4 preceding siblings ...)
2021-11-15 7:19 ` [PATCH v11 5/5] net/enetfec: add features Apeksha Gupta
@ 2021-11-15 9:44 ` Ferruh Yigit
2021-11-15 15:05 ` Ferruh Yigit
6 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 9:44 UTC (permalink / raw)
To: Apeksha Gupta, stephen
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, hemant.agrawal
On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
> This patch series introduce the enetfec driver, ENETFEC (Fast Ethernet
> Controller) is a network poll mode driver for the inbuilt NIC found in
> the NXP i.MX 8M Mini SoC.
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO interface so that user space directly communicate with
> a UIO based hardware device. UIO interface mmap the Control and Status
> Registers (CSR) & BD memory in DPDK which is allocated in kernel and this
> gives access to non-cacheble memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
> Patch 5 adds checksum and VLAN features.
>
> Apeksha Gupta (5):
> net/enetfec: introduce NXP ENETFEC driver
> net/enetfec: add UIO support
> net/enetfec: support queue configuration
> net/enetfec: add Rx/Tx support
> net/enetfec: add features
>
Hi Apeksha,
It would help if you could keep a changelog between versions in the cover
letter in the future.
Also, there is a meson file warning, fyi:
$ ./devtools/check-meson.py
Error: Missing trailing "," in list at drivers/net/enetfec/meson.build:12
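For reference, the warning points at the sources list in meson.build shown in
patch 4; a likely fix is simply a trailing comma on the last entry (sketch,
assuming line 12 is the end of that list):

	sources = files(
	        'enet_ethdev.c',
	        'enet_uio.c',
	        'enet_rxtx.c',
	)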
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v9 3/5] net/enetfec: support queue configuration
2021-11-10 13:54 ` Ferruh Yigit
2021-11-13 5:00 ` [EXT] " Apeksha Gupta
@ 2021-11-15 10:06 ` Ferruh Yigit
2021-11-15 10:23 ` Ferruh Yigit
1 sibling, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 10:06 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/10/2021 1:54 PM, Ferruh Yigit wrote:
> On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
>> This patch adds Rx/Tx queue configuration setup operations.
>> On packet reception the respective BD Ring status bit is set
>> which is then used for packet processing.
>>
>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>
> <...>
>
>> +
>> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
>
> Isn't 'fep->bd_addr_p_t[]' a 64-bit value?
>
> <...>
>
>> +
>> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
>
> Isn't 'fep->bd_addr_p_r[]' a 64-bit address, why doing endianness operation
> only on 32-bit and writing only 32-bit of it to register?
Hi Apeksha,
The above comments seem not to have been addressed in v10 & v11; unfortunately this keeps happening
in this set.
Above lines are causing a build error for gcc12, can you please check:
../drivers/net/enetfec/enet_ethdev.c:482:9: error: array subscript 1 is above array bounds of ‘uint32_t[1]’ {aka ‘unsigned int[1]’} [-Werror=array-bounds]
482 | rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
483 | (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/net/enetfec/enet_ethdev.c:18:
../drivers/net/enetfec/enet_ethdev.h:114:33: note: while referencing ‘bd_addr_p_r’
114 | uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
| ^~~~~~~~~~~
../drivers/net/enetfec/enet_ethdev.c:482:9: error: array subscript 2 is above array bounds of ‘uint32_t[1]’ {aka ‘unsigned int[1]’} [-Werror=array-bounds]
482 | rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
483 | (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/net/enetfec/enet_ethdev.c:18:
../drivers/net/enetfec/enet_ethdev.h:114:33: note: while referencing ‘bd_addr_p_r’
114 | uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
| ^~~~~~~~~~~
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-15 7:19 ` [PATCH v11 1/5] net/enetfec: introduce " Apeksha Gupta
@ 2021-11-15 10:07 ` Ferruh Yigit
2023-03-21 18:03 ` Ferruh Yigit
1 sibling, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 10:07 UTC (permalink / raw)
To: Apeksha Gupta, stephen
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, hemant.agrawal
On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
> ENETFEC (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC i.MX 8M Mini.
>
> This patch adds skeleton for enetfec driver with probe function.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
<...>
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020-2021 NXP
> + */
> +
> +#include <stdio.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <sys/mman.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_vdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_dev.h>
> +#include <rte_ether.h>
> +#include "enet_pmd_logs.h"
> +#include "enet_ethdev.h"
Most probably you don't need all of the above headers, at least not in this patch.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 3/5] net/enetfec: support queue configuration
2021-11-15 7:19 ` [PATCH v11 3/5] net/enetfec: support queue configuration Apeksha Gupta
@ 2021-11-15 10:11 ` Ferruh Yigit
2021-11-15 10:24 ` Ferruh Yigit
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 10:11 UTC (permalink / raw)
To: Apeksha Gupta, stephen
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, hemant.agrawal
On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
> This patch adds Rx/Tx queue configuration setup operations.
> On packet reception the respective BD Ring status bit is set
> which is then used for packet processing.
>
> Signed-off-by: Sachin Saxena<sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta<apeksha.gupta@nxp.com>
> Acked-by: Hemant Agrawal<hemant.agrawal@nxp.com>
I put a comment on v9 about using the 64-bit variables 'fep->bd_addr_p_r[queue_idx]' &
'fep->bd_addr_p_t[queue_idx]' as 32-bit; can you please
check it?
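
For illustration of the concern only, and not the driver's actual register
layout: a device whose ring base address really is 64-bit would typically be
programmed with two 32-bit writes. The register offsets below are
hypothetical:

#include <stdint.h>
#include <rte_io.h>
#include <rte_byteorder.h>

/* Hypothetical register offsets, for illustration only */
#define RING_BASE_LO 0x0
#define RING_BASE_HI 0x4

static void
write_ring_base(uint8_t *regs, uint64_t iova)
{
        /* Program the low and high 32-bit halves of the 64-bit bus address */
        rte_write32(rte_cpu_to_le_32((uint32_t)(iova & 0xffffffffu)),
                    regs + RING_BASE_LO);
        rte_write32(rte_cpu_to_le_32((uint32_t)(iova >> 32)),
                    regs + RING_BASE_HI);
}

In this driver the addresses were later narrowed to 'uint32_t' (in v10 & v11),
so a single 32-bit write is all that remains.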
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v9 3/5] net/enetfec: support queue configuration
2021-11-15 10:06 ` Ferruh Yigit
@ 2021-11-15 10:23 ` Ferruh Yigit
2021-11-15 10:29 ` Ferruh Yigit
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 10:23 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/15/2021 10:06 AM, Ferruh Yigit wrote:
> On 11/10/2021 1:54 PM, Ferruh Yigit wrote:
>> On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
>>> This patch adds Rx/Tx queue configuration setup operations.
>>> On packet reception the respective BD Ring status bit is set
>>> which is then used for packet processing.
>>>
>>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>>> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>>
>> <...>
>>
>>> +
>>> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
>>
>> Isn't 'fep->bd_addr_p_t[]' a 64-bit value?
>>
>> <...>
>>
>>> +
>>> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
>>
>> Isn't 'fep->bd_addr_p_r[]' a 64-bit address, why doing endianness operation
>> only on 32-bit and writing only 32-bit of it to register?
>
> Hi Apeksha,
>
> Above comments seems not addressed in v10 & v11, unfortunately this keep happening
> in this set.
>
My bad, sorry. The variables seem to have been updated to 'uint32_t' in v10 & v11.
So I am not sure about the reason for the build error below; can you help me understand it?
> Above lines are causing a build error for gcc12, can you please check:
> ../drivers/net/enetfec/enet_ethdev.c:482:9: error: array subscript 1 is above array bounds of ‘uint32_t[1]’ {aka ‘unsigned int[1]’} [-Werror=array-bounds]
> 482 | rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 483 | (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> In file included from ../drivers/net/enetfec/enet_ethdev.c:18:
> ../drivers/net/enetfec/enet_ethdev.h:114:33: note: while referencing ‘bd_addr_p_r’
> 114 | uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
> | ^~~~~~~~~~~
> ../drivers/net/enetfec/enet_ethdev.c:482:9: error: array subscript 2 is above array bounds of ‘uint32_t[1]’ {aka ‘unsigned int[1]’} [-Werror=array-bounds]
> 482 | rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 483 | (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> In file included from ../drivers/net/enetfec/enet_ethdev.c:18:
> ../drivers/net/enetfec/enet_ethdev.h:114:33: note: while referencing ‘bd_addr_p_r’
> 114 | uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
> | ^~~~~~~~~~~
>
>
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 3/5] net/enetfec: support queue configuration
2021-11-15 10:11 ` Ferruh Yigit
@ 2021-11-15 10:24 ` Ferruh Yigit
2021-11-15 11:15 ` Ferruh Yigit
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 10:24 UTC (permalink / raw)
To: Apeksha Gupta, stephen
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, hemant.agrawal
On 11/15/2021 10:11 AM, Ferruh Yigit wrote:
> On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
>> This patch adds Rx/Tx queue configuration setup operations.
>> On packet reception the respective BD Ring status bit is set
>> which is then used for packet processing.
>>
>> Signed-off-by: Sachin Saxena<sachin.saxena@nxp.com>
>> Signed-off-by: Apeksha Gupta<apeksha.gupta@nxp.com>
>> Acked-by: Hemant Agrawal<hemant.agrawal@nxp.com>
>
> I put a comment on v9 about using the 64-bit variables 'fep->bd_addr_p_r[queue_idx]' &
> 'fep->bd_addr_p_t[queue_idx]' as 32-bit; can you please
> check it?
Hmm, 'fep->bd_addr_p_r[queue_idx]' & 'fep->bd_addr_p_t[queue_idx]' are no longer
64-bit variables. Not sure why gcc12 complains about it; can we follow up on the
v9 comment?
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v9 3/5] net/enetfec: support queue configuration
2021-11-15 10:23 ` Ferruh Yigit
@ 2021-11-15 10:29 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 10:29 UTC (permalink / raw)
To: Apeksha Gupta, david.marchand, andrew.rybchenko
Cc: dev, sachin.saxena, hemant.agrawal
On 11/15/2021 10:23 AM, Ferruh Yigit wrote:
> On 11/15/2021 10:06 AM, Ferruh Yigit wrote:
>> On 11/10/2021 1:54 PM, Ferruh Yigit wrote:
>>> On 11/10/2021 7:48 AM, Apeksha Gupta wrote:
>>>> This patch adds Rx/Tx queue configuration setup operations.
>>>> On packet reception the respective BD Ring status bit is set
>>>> which is then used for packet processing.
>>>>
>>>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>>>> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>>>
>>> <...>
>>>
>>>> +
>>>> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]),
>>>
>>> Isn't 'fep->bd_addr_p_t[]' a 64-bit value?
>>>
>>> <...>
>>>
>>>> +
>>>> + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
>>>
>>> Isn't 'fep->bd_addr_p_r[]' a 64-bit address, why doing endianness operation
>>> only on 32-bit and writing only 32-bit of it to register?
>>
>> Hi Apeksha,
>>
>> The above comments seem not to have been addressed in v10 & v11; unfortunately this keeps happening
>> in this set.
>>
>
> My bad, sorry. The variables seem to have been updated to 'uint32_t' in v10 & v11.
>
> So I am not sure about the reason for the build error below; can you help me understand it?
>
>> Above lines are causing a build error for gcc12, can you please check:
>> ../drivers/net/enetfec/enet_ethdev.c:482:9: error: array subscript 1 is above array bounds of ‘uint32_t[1]’ {aka ‘unsigned int[1]’} [-Werror=array-bounds]
>> 482 | rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
>> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> 483 | (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
>> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> In file included from ../drivers/net/enetfec/enet_ethdev.c:18:
>> ../drivers/net/enetfec/enet_ethdev.h:114:33: note: while referencing ‘bd_addr_p_r’
>> 114 | uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
>> | ^~~~~~~~~~~
>> ../drivers/net/enetfec/enet_ethdev.c:482:9: error: array subscript 2 is above array bounds of ‘uint32_t[1]’ {aka ‘unsigned int[1]’} [-Werror=array-bounds]
>> 482 | rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]),
>> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> 483 | (uint8_t *)fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx));
>> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> In file included from ../drivers/net/enetfec/enet_ethdev.c:18:
>> ../drivers/net/enetfec/enet_ethdev.h:114:33: note: while referencing ‘bd_addr_p_r’
>> 114 | uint32_t bd_addr_p_r[ENETFEC_MAX_Q];
>> | ^~~~~~~~~~~
>>
>>
>
The warning talks about 'array subscript 1' & 'array subscript 2'; not sure why it thinks
'queue_idx' can be '1' or '2'.
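
Whether this is what happens in the driver is not confirmed here, but one
common way gcc 12 produces this kind of "array subscript N is above array
bounds" report is by inlining the access into a caller whose loop bound
exceeds the array size. A minimal standalone sketch (all names hypothetical):

#include <stdint.h>

#define MAX_Q 1 /* mirrors an array sized for a single queue */

struct adapter {
        uint32_t ring_base[MAX_Q];
};

volatile uint32_t sink;

static void
queue_setup(struct adapter *a, int queue_idx)
{
        /* The access gcc flags once it proves queue_idx can be 1 or 2 */
        sink = a->ring_base[queue_idx];
}

void
setup_all(struct adapter *a)
{
        /* With -O2, gcc 12 can inline and unroll this loop and then report
         * "array subscript 1/2 is above array bounds of uint32_t[1]".
         */
        for (int q = 0; q < 3; q++)
                queue_setup(a, q);
}

A typical remedy is to size the array for the number of queues actually
iterated, or to reject 'queue_idx >= MAX_Q' before the access.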
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 3/5] net/enetfec: support queue configuration
2021-11-15 10:24 ` Ferruh Yigit
@ 2021-11-15 11:15 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 11:15 UTC (permalink / raw)
To: Apeksha Gupta, stephen
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, hemant.agrawal
On 11/15/2021 10:24 AM, Ferruh Yigit wrote:
> On 11/15/2021 10:11 AM, Ferruh Yigit wrote:
>> On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
>>> This patch adds Rx/Tx queue configuration setup operations.
>>> On packet reception the respective BD Ring status bit is set
>>> which is then used for packet processing.
>>>
>>> Signed-off-by: Sachin Saxena<sachin.saxena@nxp.com>
>>> Signed-off-by: Apeksha Gupta<apeksha.gupta@nxp.com>
>>> Acked-by: Hemant Agrawal<hemant.agrawal@nxp.com>
>>
>> I put a comment on v9 about using the 64-bit variables 'fep->bd_addr_p_r[queue_idx]' &
>> 'fep->bd_addr_p_t[queue_idx]' as 32-bit; can you please
>> check it?
>
> Hmm, 'fep->bd_addr_p_r[queue_idx]' & 'fep->bd_addr_p_t[queue_idx]' are no longer
> 64-bit variables. Not sure why gcc12 complains about it; can we follow up on the
> v9 comment?
Since the 64-bit variable issue is addressed, it is not clear from the build error
whether there is a real issue in the code, and gcc12 is still experimental, I will
continue with the patch and ignore the build error for now; we can address it later
if the issue is still there when gcc12 is released.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
` (5 preceding siblings ...)
2021-11-15 9:44 ` [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
@ 2021-11-15 15:05 ` Ferruh Yigit
2021-11-25 16:52 ` Ferruh Yigit
6 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-15 15:05 UTC (permalink / raw)
To: Apeksha Gupta, stephen
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, hemant.agrawal
On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
> This patch series introduce the enetfec driver, ENETFEC (Fast Ethernet
> Controller) is a network poll mode driver for the inbuilt NIC found in
> the NXP i.MX 8M Mini SoC.
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO interface so that user space directly communicate with
> a UIO based hardware device. UIO interface mmap the Control and Status
> Registers (CSR) & BD memory in DPDK which is allocated in kernel and this
> gives access to non-cacheble memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
> Patch 5 adds checksum and VLAN features.
>
> Apeksha Gupta (5):
> net/enetfec: introduce NXP ENETFEC driver
> net/enetfec: add UIO support
> net/enetfec: support queue configuration
> net/enetfec: add Rx/Tx support
> net/enetfec: add features
>
For series:
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Series applied to dpdk-next-net/main, thanks.
The meson warning was fixed and the unnecessary headers removed while merging.
Ignoring the gcc12 build error for now; please remember to revisit it later.
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver
2021-11-15 15:05 ` Ferruh Yigit
@ 2021-11-25 16:52 ` Ferruh Yigit
0 siblings, 0 replies; 91+ messages in thread
From: Ferruh Yigit @ 2021-11-25 16:52 UTC (permalink / raw)
To: Apeksha Gupta, hemant.agrawal
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena, stephen,
Thomas Monjalon
On 11/15/2021 3:05 PM, Ferruh Yigit wrote:
> On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
>> This patch series introduce the enetfec driver, ENETFEC (Fast Ethernet
>> Controller) is a network poll mode driver for the inbuilt NIC found in
>> the NXP i.MX 8M Mini SoC.
>>
>> An overview of the enetfec driver with probe and remove are in patch 1.
>> Patch 2 design UIO interface so that user space directly communicate with
>> a UIO based hardware device. UIO interface mmap the Control and Status
>> Registers (CSR) & BD memory in DPDK which is allocated in kernel and this
>> gives access to non-cacheble memory for BD.
>>
>> Patch 3 adds the RX/TX queue configuration setup operations.
>> Patch 4 adds enqueue and dequeue support. Also adds some basic features
>> like promiscuous enable, basic stats.
>> Patch 5 adds checksum and VLAN features.
>>
>> Apeksha Gupta (5):
>> net/enetfec: introduce NXP ENETFEC driver
>> net/enetfec: add UIO support
>> net/enetfec: support queue configuration
>> net/enetfec: add Rx/Tx support
>> net/enetfec: add features
>>
>
> For series:
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> Series applied to dpdk-next-net/main, thanks.
>
>
> meson warning fixed, unnecessary headers removed while merging.
> Ignoring the gcc12 build error for now, please remember to visit it later.
>
Hi Apeksha, Hemant,
Is it possible to send a test report for this new PMD [1]?
Since this is a new PMD and only the vendor can test it, it would be good to
record which features were tested on which platform and what the results were.
Otherwise we are completely blind about the status of this new piece of code.
Some sample test results to help you:
http://inbox.dpdk.org/dev/f5980c76-d456-03fc-e1b7-cb5f24c658c9@linux.vnet.ibm.com/
http://inbox.dpdk.org/dev/CAH-L+nOyuL4HdZVmT6GLr370vo9PV8QptCKeHUsupVUpNZ8SPw@mail.gmail.com/
http://inbox.dpdk.org/dev/CAMp7Qk=ROf4ptrWuaWdCV8GnYr1jR3ADAcGHxbNkHYSHzVUtEg@mail.gmail.com/
http://inbox.dpdk.org/dev/b2c7ed47d95a4d09b41308a826bfe0e5@intel.com/
And if possible a tested-platforms patch would be great,
something like the following:
https://patches.dpdk.org/project/dpdk/patch/20211119181833.121383-1-yanx.xia@intel.com/
[1]
http://core.dpdk.org/testing/#release-validation
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 1/5] net/enetfec: introduce NXP ENETFEC driver
2021-11-15 7:19 ` [PATCH v11 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-15 10:07 ` Ferruh Yigit
@ 2023-03-21 18:03 ` Ferruh Yigit
2023-03-23 6:00 ` Sachin Saxena (OSS)
1 sibling, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2023-03-21 18:03 UTC (permalink / raw)
To: Apeksha Gupta, stephen, hemant.agrawal
Cc: david.marchand, andrew.rybchenko, dev, sachin.saxena
On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK. Driver is taken as **experimental** as driver
> +depends on a Linux kernel module 'enetfec-uio', which is not upstreamed
> +yet.
Hi Apeksha, Hemant,
I wonder what was the fate of the 'enetfec-uio' kernel module, is it
upstreamed?
Is there any change in the status of the driver?
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 1/5] net/enetfec: introduce NXP ENETFEC driver
2023-03-21 18:03 ` Ferruh Yigit
@ 2023-03-23 6:00 ` Sachin Saxena (OSS)
2023-03-23 11:07 ` Ferruh Yigit
0 siblings, 1 reply; 91+ messages in thread
From: Sachin Saxena (OSS) @ 2023-03-23 6:00 UTC (permalink / raw)
To: Ferruh Yigit, Apeksha Gupta, stephen, Hemant Agrawal
Cc: david.marchand, andrew.rybchenko, dev
On 3/21/2023 11:33 PM, Ferruh Yigit wrote:
> On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
>> +ENETFEC
>> +-------
>> +
>> +This section provides an overview of the NXP ENETFEC and how it is
>> +integrated into the DPDK. Driver is taken as **experimental** as driver
>> +depends on a Linux kernel module 'enetfec-uio', which is not upstreamed
>> +yet.
>
>
> Hi Apeksha, Hemant,
>
> I wonder what was the fate of the 'enetfec-uio' kernel module, is it
> upstreamed?
> Is there any change in the status of the driver?
>
It is not upstreamed yet, as we got some review comments from the internal
Linux team. The re-use of PHY handling in the enetfec-uio driver is the point
of objection.
So we are currently working on changing the design, and most probably we will
design a new driver without UIO support.
---
Thanks,
Sachin Saxena
(NXP)
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 1/5] net/enetfec: introduce NXP ENETFEC driver
2023-03-23 6:00 ` Sachin Saxena (OSS)
@ 2023-03-23 11:07 ` Ferruh Yigit
2023-03-23 11:09 ` Sachin Saxena (OSS)
0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2023-03-23 11:07 UTC (permalink / raw)
To: Sachin Saxena (OSS), Apeksha Gupta, stephen, Hemant Agrawal
Cc: david.marchand, andrew.rybchenko, dev
On 3/23/2023 6:00 AM, Sachin Saxena (OSS) wrote:
> On 3/21/2023 11:33 PM, Ferruh Yigit wrote:
>> On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
>>> +ENETFEC
>>> +-------
>>> +
>>> +This section provides an overview of the NXP ENETFEC and how it is
>>> +integrated into the DPDK. Driver is taken as **experimental** as driver
>>> +depends on a Linux kernel module 'enetfec-uio', which is not upstreamed
>>> +yet.
>>
>>
>> Hi Apeksha, Hemant,
>>
>> I wonder what was the fate of the 'enetfec-uio' kernel module, is it
>> upstreamed?
>> Is there any change in the status of the driver?
>>
>
> It is not upstreamed yet, as we got some review comments from the internal
> Linux team. The re-use of PHY handling in the enetfec-uio driver is the point
> of objection.
> So we are currently working on changing the design, and most probably we will
> design a new driver without UIO support.
>
Got it, thanks for the update.
I assume users can still get this kernel module via the SDK, right?
^ permalink raw reply [flat|nested] 91+ messages in thread
* Re: [PATCH v11 1/5] net/enetfec: introduce NXP ENETFEC driver
2023-03-23 11:07 ` Ferruh Yigit
@ 2023-03-23 11:09 ` Sachin Saxena (OSS)
0 siblings, 0 replies; 91+ messages in thread
From: Sachin Saxena (OSS) @ 2023-03-23 11:09 UTC (permalink / raw)
To: Ferruh Yigit, Apeksha Gupta, stephen, Hemant Agrawal
Cc: david.marchand, andrew.rybchenko, dev
On 3/23/2023 4:37 PM, Ferruh Yigit wrote:
> On 3/23/2023 6:00 AM, Sachin Saxena (OSS) wrote:
>> On 3/21/2023 11:33 PM, Ferruh Yigit wrote:
>>> On 11/15/2021 7:19 AM, Apeksha Gupta wrote:
>>>> +ENETFEC
>>>> +-------
>>>> +
>>>> +This section provides an overview of the NXP ENETFEC and how it is
>>>> +integrated into the DPDK. Driver is taken as **experimental** as driver
>>>> +depends on a Linux kernel module 'enetfec-uio', which is not upstreamed
>>>> +yet.
>>>
>>>
>>> Hi Apeksha, Hemant,
>>>
>>> I wonder what was the fate of the 'enetfec-uio' kernel module, is it
>>> upstreamed?
>>> Is there any change in the status of the driver?
>>>
>>
>> It is not upstreamed yet, as we got some review comments from the internal
>> Linux team. The re-use of PHY handling in the enetfec-uio driver is the point
>> of objection.
>> So we are currently working on changing the design, and most probably we will
>> design a new driver without UIO support.
>>
>
> Got it, thanks for the update.
>
> I assume users can still get this kernel module via the SDK, right?
>
Yes.
--
Thanks,
Sachin Saxena
(NXP)
^ permalink raw reply [flat|nested] 91+ messages in thread
end of thread, other threads:[~2023-03-23 11:09 UTC | newest]
Thread overview: 91+ messages in thread
2021-10-01 11:42 [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 0/5] drivers/net: add " Apeksha Gupta
2021-10-19 18:39 ` [dpdk-dev] [PATCH v5 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 1/5] net/enetfec: introduce " Apeksha Gupta
2021-10-21 5:24 ` Hemant Agrawal
2021-10-27 14:18 ` Ferruh Yigit
2021-11-08 18:42 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-03 23:27 ` Ferruh Yigit
2021-11-04 18:24 ` Ferruh Yigit
2021-11-08 19:13 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 0/5] drivers/net: add " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-09 11:34 ` [dpdk-dev] [PATCH v8 5/5] net/enetfec: add features Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-10 13:53 ` Ferruh Yigit
2021-11-13 4:31 ` [PATCH v10 0/5] drivers/net: add " Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 0/5] drivers/net: add " Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 1/5] net/enetfec: introduce " Apeksha Gupta
2021-11-15 10:07 ` Ferruh Yigit
2023-03-21 18:03 ` Ferruh Yigit
2023-03-23 6:00 ` Sachin Saxena (OSS)
2023-03-23 11:07 ` Ferruh Yigit
2023-03-23 11:09 ` Sachin Saxena (OSS)
2021-11-15 7:19 ` [PATCH v11 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-11-15 10:11 ` Ferruh Yigit
2021-11-15 10:24 ` Ferruh Yigit
2021-11-15 11:15 ` Ferruh Yigit
2021-11-15 7:19 ` [PATCH v11 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-15 7:19 ` [PATCH v11 5/5] net/enetfec: add features Apeksha Gupta
2021-11-15 9:44 ` [PATCH v11 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
2021-11-15 15:05 ` Ferruh Yigit
2021-11-25 16:52 ` Ferruh Yigit
2021-11-13 4:31 ` [PATCH v10 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-11-13 4:31 ` [PATCH v10 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-11-13 17:11 ` Stephen Hemminger
2021-11-13 4:31 ` [PATCH v10 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-13 17:10 ` Stephen Hemminger
2021-11-13 4:31 ` [PATCH v10 5/5] net/enetfec: add features Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-11-10 13:54 ` Ferruh Yigit
2021-11-13 5:00 ` [EXT] " Apeksha Gupta
2021-11-15 10:06 ` Ferruh Yigit
2021-11-15 10:23 ` Ferruh Yigit
2021-11-15 10:29 ` Ferruh Yigit
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-10 13:56 ` Ferruh Yigit
2021-11-10 7:48 ` [dpdk-dev] [PATCH v9 5/5] net/enetfec: add features Apeksha Gupta
2021-11-10 13:57 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-11-04 18:25 ` Ferruh Yigit
2021-11-08 20:24 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-11-08 21:51 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-11-04 18:26 ` Ferruh Yigit
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 4/5] net/enetfec: add Rx/Tx support Apeksha Gupta
2021-11-04 18:28 ` Ferruh Yigit
2021-11-09 16:20 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-11-03 19:20 ` [dpdk-dev] [PATCH v7 5/5] net/enetfec: add features Apeksha Gupta
2021-11-04 18:31 ` [dpdk-dev] [PATCH v7 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-10-27 14:21 ` Ferruh Yigit
2021-11-08 18:44 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-10-27 14:23 ` Ferruh Yigit
2021-11-08 18:45 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-10-21 4:46 ` [dpdk-dev] [PATCH v6 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-10-27 14:25 ` Ferruh Yigit
2021-11-08 18:47 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-10-21 4:47 ` [dpdk-dev] [PATCH v6 5/5] net/enetfec: add features Apeksha Gupta
2021-10-27 14:26 ` Ferruh Yigit
2021-10-27 14:15 ` [dpdk-dev] [PATCH v6 0/5] drivers/net: add NXP ENETFEC driver Ferruh Yigit
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-10-19 18:40 ` [dpdk-dev] [PATCH v5 5/5] net/enetfec: add features Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-10-01 11:42 ` [dpdk-dev] [PATCH v4 5/5] net/enetfec: add features Apeksha Gupta
2021-10-17 10:49 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: add NXP ENETFEC driver Apeksha Gupta