* [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver
@ 2021-04-30 4:34 Apeksha Gupta
2021-04-30 4:34 ` [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce " Apeksha Gupta
` (5 more replies)
0 siblings, 6 replies; 17+ messages in thread
From: Apeksha Gupta @ 2021-04-30 4:34 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena, Apeksha Gupta
This patch series introduce the enetfec ethernet driver,
ENET fec (Fast Ethernet Controller) is a network poll mode driver for
the inbuilt NIC found in the NXP imx8mmevk Soc.
An overview of the enetfec driver with probe and remove are in patch 1.
Patch 2 design UIO so that user space directly communicate with a
hardware device. UIO interface mmap the Register & BD memory in DPDK
which is allocated in kernel and this gives access to non-cacheble
memory for BD.
Patch 3 adds the RX/TX queue configuration setup operations.
Patch 4 adds enqueue and dequeue support. Also adds some basic features
like promiscuous enable, basic stats.
Apeksha Gupta (4):
drivers/net/enetfec: Introduce NXP ENETFEC driver
drivers/net/enetfec: UIO support added
drivers/net/enetfec: queue configuration
drivers/net/enetfec: add enqueue and dequeue support
doc/guides/nics/enetfec.rst | 125 +++++
doc/guides/nics/features/enetfec.ini | 13 +
doc/guides/nics/index.rst | 1 +
drivers/net/enetfec/enet_ethdev.c | 726 +++++++++++++++++++++++++++
drivers/net/enetfec/enet_ethdev.h | 203 ++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++
drivers/net/enetfec/enet_regs.h | 179 +++++++
drivers/net/enetfec/enet_rxtx.c | 499 ++++++++++++++++++
drivers/net/enetfec/enet_uio.c | 192 +++++++
drivers/net/enetfec/enet_uio.h | 54 ++
drivers/net/enetfec/meson.build | 16 +
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
13 files changed, 2043 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_rxtx.c
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
--
2.17.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce NXP ENETFEC driver
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
@ 2021-04-30 4:34 ` Apeksha Gupta
2021-06-08 13:10 ` Andrew Rybchenko
2021-07-04 2:57 ` Sachin Saxena (OSS)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added Apeksha Gupta
` (4 subsequent siblings)
5 siblings, 2 replies; 17+ messages in thread
From: Apeksha Gupta @ 2021-04-30 4:34 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena, Apeksha Gupta
ENET fec (Fast Ethernet Controller) is a network poll mode driver
for NXP SoC imx8mmevk.
This patch add skeleton for enetfec driver with probe and
uintialisation functions
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 121 ++++++++++++++++
doc/guides/nics/features/enetfec.ini | 8 ++
doc/guides/nics/index.rst | 1 +
drivers/net/enetfec/enet_ethdev.c | 89 ++++++++++++
drivers/net/enetfec/enet_ethdev.h | 203 +++++++++++++++++++++++++++
drivers/net/enetfec/enet_pmd_logs.h | 31 ++++
drivers/net/enetfec/meson.build | 15 ++
drivers/net/enetfec/version.map | 3 +
drivers/net/meson.build | 1 +
9 files changed, 472 insertions(+)
create mode 100644 doc/guides/nics/enetfec.rst
create mode 100644 doc/guides/nics/features/enetfec.ini
create mode 100644 drivers/net/enetfec/enet_ethdev.c
create mode 100644 drivers/net/enetfec/enet_ethdev.h
create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
create mode 100644 drivers/net/enetfec/meson.build
create mode 100644 drivers/net/enetfec/version.map
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
new file mode 100644
index 000000000..10f495fb9
--- /dev/null
+++ b/doc/guides/nics/enetfec.rst
@@ -0,0 +1,121 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
+
+More information can be found at NXP Official Website
+<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into the DPDK.
+
+Contents summary
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+The i.MX 8M Mini Media Applications Processor is built to achieve both high
+performance and low power consumption. ENETFEC is a hardware programmable
+packet forwarding engine to provide high performance Ethernet interface.
+The diagram below shows a system level overview of ENETFEC:
+
+ ====================================================+===============
+ US +-----------------------------------------+ | Kernel Space
+ | | |
+ | ENETFEC Ethernet Driver | |
+ +-----------------------------------------+ |
+ ^ | |
+ ENETFEC RXQ | | TXQ |
+ PMD | | |
+ | v | +----------+
+ +-------------+ | | fec-uio |
+ | net_enetfec | | +----------+
+ +-------------+ |
+ ^ | |
+ TXQ | | RXQ |
+ | | |
+ | v |
+ ===================================================+===============
+ +----------------------------------------+
+ | | HW
+ | i.MX 8M MINI EVK |
+ | +-----+ |
+ | | MAC | |
+ +---------------+-----+------------------+
+ | PHY |
+ +-----+
+
+ENETFEC ethernet driver is traditional DPDK PMD driver running in the userspace.
+The MAC and PHY are the hardware blocks. 'fec-uio' is the uio driver, enetfec PMD
+uses uio interface to interact with kernel for PHY initialisation and for mapping
+the allocated memory of register & BD in kernel with DPDK which gives access to
+non-cacheble memory for BD. net_enetfec is logical ethernet interface, created by
+ENETFEC driver.
+
+- ENETFEC driver registers the device in virtual device driver.
+- RTE framework scans and will invoke the probe function of ENETFEC driver.
+- The probe function will set the basic device registers and also setups BD rings.
+- On packet Rx the respective BD Ring status bit is set which is then used for
+ packet processing.
+- Then Tx is done first followed by Rx via logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~~
+
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+There are three main pre-requisites for executing ENETfec PMD on a i.MX
+compatible board:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's bitbucket: <https://bitbucket.sw.nxp.com/projects/LFAC/repos/linux-nxp/commits?until=refs%2Fheads%2Fnet%2Ffec-uio&merges=include>`_.
+
+3. **Rootfile system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
+
+4. The ethernet device will be registered as virtual device, so enetfec has dependency on
+ **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
+ run DPDK application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow instructions available in the document
+:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+to launch **testpmd**
+
+Limitations
+~~~~~~~~~~~
+
+- Multi queue is not supported.
+- Link status is down always.
+- Single ethernet interface.
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
new file mode 100644
index 000000000..570069798
--- /dev/null
+++ b/doc/guides/nics/features/enetfec.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'enetfec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 799697caf..93b68e701 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -25,6 +25,7 @@ Network Interface Controller Drivers
e1000em
ena
enetc
+ enetfec
enic
fm10k
hinic
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
new file mode 100644
index 000000000..5fd2dbc2d
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+#include <stdio.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <rte_kvargs.h>
+#include <ethdev_vdev.h>
+#include <rte_bus_vdev.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+#include "enet_pmd_logs.h"
+#include "enet_ethdev.h"
+
+#define ENETFEC_NAME_PMD net_enetfec
+#define ENET_VDEV_GEM_ID_ARG "intf"
+#define ENET_CDEV_INVALID_FD -1
+
+int enetfec_logtype_pmd;
+
+static int
+enetfec_eth_init(struct rte_eth_dev *dev)
+{
+ rte_eth_dev_probing_finish(dev);
+ return 0;
+}
+
+static int
+pmd_enetfec_probe(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct enetfec_private *fep;
+ const char *name;
+ int rc = -1;
+
+ name = rte_vdev_device_name(vdev);
+ if (name == NULL)
+ return -EINVAL;
+ ENET_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
+
+ dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
+ if (dev == NULL)
+ return -ENOMEM;
+
+ /* setup board info structure */
+ fep = dev->data->dev_private;
+ fep->dev = dev;
+ rc = enetfec_eth_init(dev);
+ if (rc)
+ goto failed_init;
+ return 0;
+failed_init:
+ ENET_PMD_ERR("Failed to init");
+ return rc;
+}
+
+static int
+pmd_enetfec_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+
+ /* find the ethdev entry */
+ eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (!eth_dev)
+ return -ENODEV;
+
+ rte_eth_dev_release_port(eth_dev);
+
+ ENET_PMD_INFO("Closing sw device\n");
+ return 0;
+}
+
+static
+struct rte_vdev_driver pmd_enetfec_drv = {
+ .probe = pmd_enetfec_probe,
+ .remove = pmd_enetfec_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
+RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENET_VDEV_GEM_ID_ARG "=<int>");
+
+RTE_INIT(enetfec_pmd_init_log)
+{
+ enetfec_logtype_pmd = rte_log_register("pmd.net.enetfec");
+ if (enetfec_logtype_pmd >= 0)
+ rte_log_set_level(enetfec_logtype_pmd, RTE_LOG_NOTICE);
+}
diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
new file mode 100644
index 000000000..3833a70fc
--- /dev/null
+++ b/drivers/net/enetfec/enet_ethdev.h
@@ -0,0 +1,203 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef __ENET_ETHDEV_H__
+#define __ENET_ETHDEV_H__
+
+#include <compat.h>
+#include <rte_ethdev.h>
+
+/* ENET with AVB IP can support maximum 3 rx and tx queues.
+ */
+#define ENET_MAX_Q 3
+
+#define BD_LEN 49152
+#define ENET_TX_FR_SIZE 2048
+#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
+#define MAX_RX_BD_RING_SIZE 512
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX 0x00
+#define FULL_DUPLEX 0x01
+#define UNKNOWN_DUPLEX 0xff
+
+#define PKT_MAX_BUF_SIZE 1984
+#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
+#define ETH_ALEN RTE_ETHER_ADDR_LEN
+#define ETH_HLEN RTE_ETHER_HDR_LEN
+#define VLAN_HLEN 4
+
+
+struct bufdesc {
+ uint16_t bd_datlen; /* buffer data length */
+ uint16_t bd_sc; /* buffer control & status */
+ uint32_t bd_bufaddr; /* buffer address */
+};
+
+struct bufdesc_ex {
+ struct bufdesc desc;
+ uint32_t bd_esc;
+ uint32_t bd_prot;
+ uint32_t bd_bdu;
+ uint32_t ts;
+ uint16_t res0[4];
+};
+
+struct bufdesc_prop {
+ int que_id;
+ /* Addresses of Tx and Rx buffers */
+ struct bufdesc *base;
+ struct bufdesc *last;
+ struct bufdesc *cur;
+ void __iomem *active_reg_desc;
+ uint64_t descr_baseaddr_p;
+ unsigned short ring_size;
+ unsigned char d_size;
+ unsigned char d_size_log2;
+};
+
+struct enetfec_priv_tx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
+ struct bufdesc *dirty_tx;
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+struct enetfec_priv_rx_q {
+ struct bufdesc_prop bd;
+ struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
+ struct rte_mempool *pool;
+ struct enetfec_private *fep;
+};
+
+/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
+ * descriptor base is x_bd_base. Currently available buffer are x_cur
+ * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
+ * that is sent by the controller.
+ * The tx_cur and dirty_tx are same in completely full and empty
+ * conditions. Actual condition is determine by empty & ready bits.
+ */
+struct enetfec_private {
+ struct rte_eth_dev *dev;
+ struct rte_eth_stats stats;
+ struct rte_mempool *pool;
+
+ struct enetfec_priv_rx_q *rx_queues[ENET_MAX_Q];
+ struct enetfec_priv_tx_q *tx_queues[ENET_MAX_Q];
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+
+ unsigned int total_tx_ring_size;
+ unsigned int total_rx_ring_size;
+
+ bool bufdesc_ex;
+ unsigned int tx_align;
+ unsigned int rx_align;
+ int full_duplex;
+ unsigned int phy_speed;
+ u_int32_t quirks;
+ int flag_csum;
+ int flag_pause;
+ int flag_wol;
+ bool rgmii_txc_delay;
+ bool rgmii_rxc_delay;
+ int link;
+ void *hw_baseaddr_v;
+ uint64_t hw_baseaddr_p;
+ void *bd_addr_v;
+ uint64_t bd_addr_p;
+ uint64_t bd_addr_p_r[ENET_MAX_Q];
+ uint64_t bd_addr_p_t[ENET_MAX_Q];
+ void *dma_baseaddr_r[ENET_MAX_Q];
+ void *dma_baseaddr_t[ENET_MAX_Q];
+ uint64_t cbus_size;
+ unsigned int reg_size;
+ unsigned int bd_size;
+ int hw_ts_rx_en;
+ int hw_ts_tx_en;
+};
+
+#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
+#define readl(p) rte_read32(p)
+
+static __always_inline
+void __read_once_size(volatile void *p, void *res, int size)
+{
+ switch (size) {
+ case 1:
+ *(__u8 *)res = *(volatile __u8 *)p;
+ break;
+ case 2:
+ *(__u16 *)res = *(volatile __u16 *)p;
+ break;
+ case 4:
+ *(__u32 *)res = *(volatile __u32 *)p;
+ break;
+ case 8:
+ *(__u64 *)res = *(volatile __u64 *)p;
+ break;
+ default:
+ break;
+ }
+}
+
+#define __READ_ONCE(x)\
+({\
+ union { typeof(x) __val; char __c[1]; } __u;\
+ __read_once_size(&(x), __u.__c, sizeof(x));\
+ __u.__val;\
+})
+#ifndef READ_ONCE
+#define READ_ONCE(x) __READ_ONCE(x)
+#endif
+
+static inline struct
+bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+
+ struct bufdesc_prop *bd)
+{
+ return (bdp >= bd->last) ? bd->base
+ : (struct bufdesc *)(((void *)bdp) + bd->d_size);
+}
+
+static inline struct
+bufdesc *enet_get_prevdesc(struct bufdesc *bdp,
+ struct bufdesc_prop *bd)
+{
+ return (bdp <= bd->base) ? bd->last
+ : (struct bufdesc *)(((void *)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp,
+ struct bufdesc_prop *bd)
+{
+ return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
+static inline phys_addr_t enetfec_mem_vtop(uint64_t vaddr)
+{
+ const struct rte_memseg *memseg;
+ memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
+ if (memseg)
+ return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
+ return (size_t)NULL;
+}
+
+static inline int fls64(unsigned long word)
+{
+ return (64 - __builtin_clzl(word)) - 1;
+}
+
+uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+ struct bufdesc_prop *bd);
+int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
+ struct rte_mbuf *mbuf);
+
+#endif /*__FEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 000000000..ff8daa359
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef _ENET_LOGS_H_
+#define _ENET_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENET_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "fec_net: %s()" \
+ fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
+
+#define ENET_PMD_DEBUG(fmt, args...) \
+ ENET_PMD_LOG(DEBUG, fmt, ## args)
+#define ENET_PMD_ERR(fmt, args...) \
+ ENET_PMD_LOG(ERR, fmt, ## args)
+#define ENET_PMD_INFO(fmt, args...) \
+ ENET_PMD_LOG(INFO, fmt, ## args)
+
+#define ENET_PMD_WARN(fmt, args...) \
+ ENET_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENET_DP_LOG(level, fmt, args...) \
+ RTE_LOG_DP(level, PMD, fmt, ## args)
+
+#endif /* _ENET_LOGS_H_ */
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
new file mode 100644
index 000000000..252bf8330
--- /dev/null
+++ b/drivers/net/enetfec/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2021 NXP
+
+if not is_linux
+ build = false
+ reason = 'only supported on linux'
+endif
+
+deps += ['common_dpaax']
+
+sources = files('enet_ethdev.c')
+
+if cc.has_argument('-Wno-pointer-arith')
+ cflags += '-Wno-pointer-arith'
+endif
diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
new file mode 100644
index 000000000..6e4fb220a
--- /dev/null
+++ b/drivers/net/enetfec/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+ local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index c8b5ce298..c1307a3a6 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -18,6 +18,7 @@ drivers = [
'e1000',
'ena',
'enetc',
+ 'enetfec',
'enic',
'failsafe',
'fm10k',
--
2.17.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-04-30 4:34 ` [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce " Apeksha Gupta
@ 2021-04-30 4:34 ` Apeksha Gupta
2021-06-08 13:21 ` Andrew Rybchenko
2021-07-04 4:27 ` Sachin Saxena (OSS)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration Apeksha Gupta
` (3 subsequent siblings)
5 siblings, 2 replies; 17+ messages in thread
From: Apeksha Gupta @ 2021-04-30 4:34 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena, Apeksha Gupta
Implemented the fec-uio driver in kernel. enetfec PMD uses
UIO interface to interact with kernel for PHY initialisation
and for mapping the allocated memory of register & BD from
kernel to DPDK which gives access to non-cacheble memory for BD.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 204 ++++++++++++++++++++++++++++++
drivers/net/enetfec/enet_regs.h | 179 ++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.c | 192 ++++++++++++++++++++++++++++
drivers/net/enetfec/enet_uio.h | 54 ++++++++
drivers/net/enetfec/meson.build | 3 +-
5 files changed, 631 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_regs.h
create mode 100644 drivers/net/enetfec/enet_uio.c
create mode 100644 drivers/net/enetfec/enet_uio.h
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 5fd2dbc2d..5f4f2cf9e 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -11,18 +11,195 @@
#include <rte_bus_vdev.h>
#include <rte_dev.h>
#include <rte_ether.h>
+#include <rte_io.h>
#include "enet_pmd_logs.h"
#include "enet_ethdev.h"
+#include "enet_regs.h"
+#include "enet_uio.h"
#define ENETFEC_NAME_PMD net_enetfec
#define ENET_VDEV_GEM_ID_ARG "intf"
#define ENET_CDEV_INVALID_FD -1
+#define BIT(nr) (1 << (nr))
+/* FEC receive acceleration */
+#define ENET_RACC_IPDIS (1 << 1)
+#define ENET_RACC_PRODIS (1 << 2)
+#define ENET_RACC_SHIFT16 BIT(7)
+#define ENET_RACC_OPTIONS (ENET_RACC_IPDIS | ENET_RACC_PRODIS)
+
+/* Transmitter timeout */
+#define TX_TIMEOUT (2 * HZ)
+
+#define ENET_PAUSE_FLAG_AUTONEG 0x1
+#define ENET_PAUSE_FLAG_ENABLE 0x2
+#define ENET_WOL_HAS_MAGIC_PACKET (0x1 << 0)
+#define ENET_WOL_FLAG_ENABLE (0x1 << 1)
+#define ENET_WOL_FLAG_SLEEP_ON (0x1 << 2)
+
+/* Pause frame feild and FIFO threshold */
+#define ENET_ENET_FCE (1 << 5)
+#define ENET_ENET_RSEM_V 0x84
+#define ENET_ENET_RSFL_V 16
+#define ENET_ENET_RAEM_V 0x8
+#define ENET_ENET_RAFL_V 0x8
+#define ENET_ENET_OPD_V 0xFFF0
+#define ENET_MDIO_PM_TIMEOUT 100 /* ms */
+
int enetfec_logtype_pmd;
+/*
+ * This function is called to start or restart the FEC during a link
+ * change, transmit timeout or to reconfigure the FEC. The network
+ * packet processing for this device must be stopped before this call.
+ */
+static void
+enetfec_restart(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t temp_mac[2];
+ uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
+ uint32_t ecntl = ENET_ETHEREN; /* ETHEREN */
+ /* TODO get eth addr from eth dev */
+ struct rte_ether_addr addr = {
+ .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
+ uint32_t val;
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+ * so need to reconfigure it.
+ */
+ memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
+ rte_write32(rte_cpu_to_be_32(temp_mac[0]),
+ fep->hw_baseaddr_v + ENET_PALR);
+ rte_write32(rte_cpu_to_be_32(temp_mac[1]),
+ fep->hw_baseaddr_v + ENET_PAUR);
+
+ /* Clear any outstanding interrupt. */
+ writel(0xffffffff, fep->hw_baseaddr_v + ENET_EIR);
+
+ /* Enable MII mode */
+ if (fep->full_duplex == FULL_DUPLEX) {
+ /* FD enable */
+ rte_write32(0x04, fep->hw_baseaddr_v + ENET_TCR);
+ } else {
+ /* No Rcv on Xmit */
+ rcntl |= 0x02;
+ rte_write32(0x0, fep->hw_baseaddr_v + ENET_TCR);
+ }
+
+ if (fep->quirks & QUIRK_RACC) {
+ val = rte_read32(fep->hw_baseaddr_v + ENET_RACC);
+ /* align IP header */
+ val |= ENET_RACC_SHIFT16;
+ if (fep->flag_csum & RX_FLAG_CSUM_EN)
+ /* set RX checksum */
+ val |= ENET_RACC_OPTIONS;
+ else
+ val &= ~ENET_RACC_OPTIONS;
+ rte_write32(val, fep->hw_baseaddr_v + ENET_RACC);
+ rte_write32(PKT_MAX_BUF_SIZE,
+ fep->hw_baseaddr_v + ENET_FRAME_TRL);
+ }
+
+ /*
+ * The phy interface and speed need to get configured
+ * differently on enet-mac.
+ */
+ if (fep->quirks & QUIRK_HAS_ENET_MAC) {
+ /* Enable flow control and length check */
+ rcntl |= 0x40000000 | 0x00000020;
+
+ /* RGMII, RMII or MII */
+ rcntl |= (1 << 6);
+ ecntl |= (1 << 5);
+ }
+
+ /* enable pause frame*/
+ if ((fep->flag_pause & ENET_PAUSE_FLAG_ENABLE) ||
+ ((fep->flag_pause & ENET_PAUSE_FLAG_AUTONEG)
+ /*&& ndev->phydev && ndev->phydev->pause*/)) {
+ rcntl |= ENET_ENET_FCE;
+
+ /* set FIFO threshold parameter to reduce overrun */
+ rte_write32(ENET_ENET_RSEM_V,
+ fep->hw_baseaddr_v + ENET_R_FIFO_SEM);
+ rte_write32(ENET_ENET_RSFL_V,
+ fep->hw_baseaddr_v + ENET_R_FIFO_SFL);
+ rte_write32(ENET_ENET_RAEM_V,
+ fep->hw_baseaddr_v + ENET_R_FIFO_AEM);
+ rte_write32(ENET_ENET_RAFL_V,
+ fep->hw_baseaddr_v + ENET_R_FIFO_AFL);
+
+ /* OPD */
+ rte_write32(ENET_ENET_OPD_V, fep->hw_baseaddr_v + ENET_OPD);
+ } else {
+ rcntl &= ~ENET_ENET_FCE;
+ }
+
+ rte_write32(rcntl, fep->hw_baseaddr_v + ENET_RCR);
+
+ rte_write32(0, fep->hw_baseaddr_v + ENET_IAUR);
+ rte_write32(0, fep->hw_baseaddr_v + ENET_IALR);
+
+ if (fep->quirks & QUIRK_HAS_ENET_MAC) {
+ /* enable ENET endian swap */
+ ecntl |= (1 << 8);
+ /* enable ENET store and forward mode */
+ rte_write32(1 << 8, fep->hw_baseaddr_v + ENET_TFWR);
+ }
+
+ if (fep->bufdesc_ex)
+ ecntl |= (1 << 4);
+
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_txc_delay)
+ ecntl |= ENET_TXC_DLY;
+ if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
+ fep->rgmii_rxc_delay)
+ ecntl |= ENET_RXC_DLY;
+
+ /* Enable the MIB statistic event counters */
+ rte_write32(0 << 31, fep->hw_baseaddr_v + ENET_MIBC);
+
+ ecntl |= 0x70000000;
+ /* And last, enable the transmit and receive processing */
+ rte_write32(ecntl, fep->hw_baseaddr_v + ENET_ECR);
+ rte_delay_us(10);
+}
+
+static int
+enetfec_eth_open(struct rte_eth_dev *dev)
+{
+ enetfec_restart(dev);
+
+ return 0;
+}
+
+static const struct eth_dev_ops ops = {
+ .dev_start = enetfec_eth_open,
+};
+
static int
enetfec_eth_init(struct rte_eth_dev *dev)
{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
+
+ fep->full_duplex = FULL_DUPLEX;
+
+ dev->dev_ops = &ops;
+ if (fep->quirks & QUIRK_VLAN)
+ /* enable hw VLAN support */
+ rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+ if (fep->quirks & QUIRK_CSUM) {
+ /* enable hw accelerator */
+ rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ fep->flag_csum |= RX_FLAG_CSUM_EN;
+ }
+
rte_eth_dev_probing_finish(dev);
return 0;
}
@@ -34,6 +211,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc = -1;
+ int i;
+ unsigned int bdsize;
name = rte_vdev_device_name(vdev);
if (name == NULL)
@@ -47,6 +226,31 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
/* setup board info structure */
fep = dev->data->dev_private;
fep->dev = dev;
+
+ fep->max_rx_queues = ENET_MAX_Q;
+ fep->max_tx_queues = ENET_MAX_Q;
+ fep->quirks = QUIRK_HAS_ENET_MAC | QUIRK_GBIT | QUIRK_BUFDESC_EX
+ | QUIRK_CSUM | QUIRK_VLAN | QUIRK_ERR007885
+ | QUIRK_RACC | QUIRK_COALESCE | QUIRK_EEE;
+
+ config_enetfec_uio(fep);
+
+ /* Get the BD size for distributing among six queues */
+ bdsize = (fep->bd_size) / 6;
+
+ for (i = 0; i < fep->max_tx_queues; i++) {
+ fep->dma_baseaddr_t[i] = fep->bd_addr_v;
+ fep->bd_addr_p_t[i] = fep->bd_addr_p;
+ fep->bd_addr_v = fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+ for (i = 0; i < fep->max_rx_queues; i++) {
+ fep->dma_baseaddr_r[i] = fep->bd_addr_v;
+ fep->bd_addr_p_r[i] = fep->bd_addr_p;
+ fep->bd_addr_v = fep->bd_addr_v + bdsize;
+ fep->bd_addr_p = fep->bd_addr_p + bdsize;
+ }
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
new file mode 100644
index 000000000..d037aafae
--- /dev/null
+++ b/drivers/net/enetfec/enet_regs.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+#ifndef __ENET_REGS_H
+#define __ENET_REGS_H
+
+/* Ethernet receive use control and status of buffer descriptor
+ */
+#define RX_BD_TR ((ushort)0x0001) /* Truncated */
+#define RX_BD_OV ((ushort)0x0002) /* Over-run */
+#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
+#define RX_BD_SH ((ushort)0x0008) /* Reserved */
+#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
+#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length voilation */
+#define RX_BD_MC ((ushort)0x0040) /* Rcvd Multicast */
+#define RX_BD_BC ((ushort)0x0080) /* Rcvd Broadcast */
+#define RX_BD_MISS ((ushort)0x0100) /* Miss: promisc mode frame */
+#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
+#define RX_BD_LAST ((ushort)0x0800) /* Buffer is the last in the frame */
+#define RX_BD_INTR ((ushort)0x1000) /* Software specified field */
+/* 0 The next BD in consecutive location
+ * 1 The next BD in ENETn_RDSR.
+ */
+#define RX_BD_WRAP ((ushort)0x2000)
+#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
+#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
+
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENET_RX_VLAN 0x00000004
+
+/* Ethernet transmit use control and status of buffer descriptor.
+ */
+#define TX_BD_CSL ((ushort)0x0001)
+#define TX_BD_UN ((ushort)0x0002)
+#define TX_BD_RCMASK ((ushort)0x003c)
+#define TX_BD_RL ((ushort)0x0040)
+#define TX_BD_LC ((ushort)0x0080)
+#define TX_BD_HB ((ushort)0x0100)
+#define TX_BD_DEF ((ushort)0x0200)
+#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
+#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
+#define TX_BD_INTR ((ushort)0x1000)
+#define TX_BD_WRAP ((ushort)0x2000)
+#define TX_BD_PAD ((ushort)0x4000)
+#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
+
+#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
+
+/* Ethernet transmit use control and status of enhanced buffer descriptor */
+#define TX_BD_IINS 0x08000000
+#define TX_BD_PINS 0x10000000
+#define TX_BD_TS 0x20000000
+#define TX_BD_INT 0x40000000
+
+#define ENET_RD_START(X) (((X) == 1) ? ENET_RD_START_1 : \
+ (((X) == 2) ? \
+ ENET_RD_START_2 : ENET_RD_START_0))
+#define ENET_TD_START(X) (((X) == 1) ? ENET_TD_START_1 : \
+ (((X) == 2) ? \
+ ENET_TD_START_2 : ENET_TD_START_0))
+#define ENET_MRB_SIZE(X) (((X) == 1) ? ENET_MRB_SIZE_1 : \
+ (((X) == 2) ? \
+ ENET_MRB_SIZE_2 : ENET_MRB_SIZE_0))
+
+#define ENET_DMACFG(X) (((X) == 2) ? ENET_DMA2CFG : ENET_DMA1CFG)
+
+#define ENABLE_DMA_CLASS (1 << 16)
+#define ENET_RCM(X) (((X) == 2) ? ENET_RCM2 : ENET_RCM1)
+#define SLOPE_IDLE_MASK 0xffff
+#define SLOPE_IDLE_1 0x200 /* BW_fraction: 0.5 */
+#define SLOPE_IDLE_2 0x200 /* BW_fraction: 0.5 */
+#define SLOPE_IDLE(X) (((X) == 1) ? \
+ (SLOPE_IDLE_1 & SLOPE_IDLE_MASK) : \
+ (SLOPE_IDLE_2 & SLOPE_IDLE_MASK))
+#define RCM_MATCHEN (0x1 << 16)
+#define CFG_RCMR_CMP(v, n) (((v) & 0x7) << ((n) << 2))
+#define RCMR_CMP1 (CFG_RCMR_CMP(0, 0) | CFG_RCMR_CMP(1, 1) | \
+ CFG_RCMR_CMP(2, 2) | CFG_RCMR_CMP(3, 3))
+#define RCMR_CMP2 (CFG_RCMR_CMP(4, 0) | CFG_RCMR_CMP(5, 1) | \
+ CFG_RCMR_CMP(6, 2) | CFG_RCMR_CMP(7, 3))
+#define RCM_CMP(X) (((X) == 1) ? RCMR_CMP1 : RCMR_CMP2)
+#define BD_TX_FTYPE(X) (((X) & 0xf) << 20)
+
+#define RX_BD_INT 0x00800000
+#define RX_BD_PTP ((ushort)0x0400)
+#define RX_BD_ICE 0x00000020
+#define RX_BD_PCR 0x00000010
+#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
+#define ENET_MII ((uint)0x00800000) /*MII_interrupt*/
+
+#define ENET_ETHEREN ((uint)0x00000002)
+#define ENET_TXC_DLY ((uint)0x00010000)
+#define ENET_RXC_DLY ((uint)0x00020000)
+
+/* ENET MAC is in controller */
+#define QUIRK_HAS_ENET_MAC (1 << 0)
+/* gasket is used in controller */
+#define QUIRK_GASKET (1 << 2)
+/* GBIT supported in controller */
+#define QUIRK_GBIT (1 << 3)
+/* Controller has extended descriptor buffer */
+#define QUIRK_BUFDESC_EX (1 << 4)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM (1 << 5)
+/* Controller support hardware vlan*/
+#define QUIRK_VLAN (1 << 6)
+/* ENET IP hardware AVB
+ * i.MX8MM ENET IP supports the AVB (Audio Video Bridging) feature.
+ */
+#define QUIRK_AVB (1 << 8)
+#define QUIRK_ERR007885 (1 << 9)
+/* RACC register supported by controller */
+#define QUIRK_RACC (1 << 12)
+/* interrupt coalesc supported by controller*/
+#define QUIRK_COALESCE (1 << 13)
+/* To support IEEE 802.3az EEE std, new feature is added by i.MX8MQ ENET IP
+ * version.
+ */
+#define QUIRK_EEE (1 << 17)
+/* i.MX8QM ENET IP version added the feature to generate the delayed TXC or
+ * RXC. For its implementation, ENET uses synchronized clocks (250MHz) for
+ * generating delay of 2ns.
+ */
+#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
+
+#define ENET_EIR 0x004 /* Interrupt event register */
+#define ENET_EIMR 0x008 /* Interrupt mask register */
+#define ENET_RDAR_0 0x010 /* Receive descriptor active register ring0 */
+#define ENET_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
+#define ENET_ECR 0x024 /* Ethernet control register */
+#define ENET_MSCR 0x044 /* MII speed control register */
+#define ENET_MIBC 0x064 /* MIB control and status register */
+#define ENET_RCR 0x084 /* Receive control register */
+#define ENET_TCR 0x0c4 /* Transmit Control register */
+#define ENET_PALR 0x0e4 /* MAC address low 32 bits */
+#define ENET_PAUR 0x0e8 /* MAC address high 16 bits */
+#define ENET_OPD 0x0ec /* Opcode/Pause duration register */
+#define ENET_IAUR 0x118 /* hash table 32 bits high */
+#define ENET_IALR 0x11c /* hash table 32 bits low */
+#define ENET_GAUR 0x120 /* grp hash table 32 bits high */
+#define ENET_GALR 0x124 /* grp hash table 32 bits low */
+#define ENET_TFWR 0x144 /* transmit FIFO water_mark */
+#define ENET_RD_START_1 0x160 /* Receive descriptor ring1 start register */
+#define ENET_TD_START_1 0x164 /* Transmit descriptor ring1 start register */
+#define ENET_MRB_SIZE_1 0x168 /* Maximum receive buffer size register ring1 */
+#define ENET_RD_START_2 0x16c /* Receive descriptor ring2 start register */
+#define ENET_TD_START_2 0x170 /* Transmit descriptor ring2 start register */
+#define ENET_MRB_SIZE_2 0x174 /* Maximum receive buffer size register ring2 */
+#define ENET_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
+#define ENET_TD_START_0 0x184 /* Transmit buffer descriptor ring0 start reg */
+#define ENET_MRB_SIZE_0 0x188 /* Maximum receive buffer size register ring0*/
+#define ENET_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
+#define ENET_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
+#define ENET_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
+#define ENET_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
+#define ENET_FRAME_TRL 0x1b0 /* Frame truncation length */
+#define ENET_RACC 0x1c4 /* Receive Accelerator function configuration*/
+#define ENET_RCM1 0x1c8 /* Receive classification match register ring1 */
+#define ENET_RCM2 0x1cc /* Receive classification match register ring2 */
+#define ENET_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
+#define ENET_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
+#define ENET_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
+#define ENET_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
+#define ENET_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
+#define ENET_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
+#define ENET_MII_GSK_CFGR 0x300 /* MII_GSK Configuration register */
+#define ENET_MII_GSK_ENR 0x308 /* MII_GSK Enable register*/
+
+#define BM_MII_GSK_CFGR_MII 0x00
+#define BM_MII_GSK_CFGR_RMII 0x01
+#define BM_MII_GSK_CFGR_FRCONT_10M 0x40
+
+/* full duplex or half duplex */
+#define HALF_DUPLEX 0x00
+#define FULL_DUPLEX 0x01
+#define UNKNOWN_DUPLEX 0xff
+
+#endif /*__ENET_REGS_H */
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
new file mode 100644
index 000000000..b64dc522e
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.c
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <fcntl.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include "enet_pmd_logs.h"
+#include "enet_uio.h"
+
+static struct uio_job enetfec_uio_job;
+int count;
+
+/** @brief Reads first line from a file.
+ * Composes file name as: root/subdir/filename
+ *
+ * @param [in] root Root path
+ * @param [in] subdir Subdirectory name
+ * @param [in] filename File name
+ * @param [out] line The first line read from file.
+ *
+ * @retval 0 for succes
+ * @retval other value for error
+ */
+static int
+file_read_first_line(const char root[], const char subdir[],
+ const char filename[], char *line)
+{
+ char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
+ int fd = 0, ret = 0;
+
+ /*compose the file name: root/subdir/filename */
+ memset(absolute_file_name, 0, sizeof(absolute_file_name));
+ snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
+ "%s/%s/%s", root, subdir, filename);
+
+ fd = open(absolute_file_name, O_RDONLY);
+ if (fd <= 0)
+ ENET_PMD_ERR("Error opening file %s", absolute_file_name);
+
+ /* read UIO device name from first line in file */
+ ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
+ close(fd);
+
+ /* NULL-ify string */
+ line[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH - 1] = '\0';
+
+ if (ret <= 0) {
+ ENET_PMD_ERR("Error reading from file %s", absolute_file_name);
+ return ret;
+ }
+
+ return 0;
+}
+
+/** @brief Maps rx-tx bd range assigned for a bd ring.
+ *
+ * @param [in] uio_device_fd UIO device file descriptor
+ * @param [in] uio_device_id UIO device id
+ * @param [in] uio_map_id UIO allows maximum 5 different mapping for
+ each device. Maps start with id 0.
+ * @param [out] map_size Map size.
+ * @param [out] map_addr Map physical address
+ * @retval NULL if failed to map registers
+ * @retval Virtual address for mapped register address range
+ */
+static void *
+uio_map_mem(int uio_device_fd, int uio_device_id,
+ int uio_map_id, int *map_size, uint64_t *map_addr)
+{
+ void *mapped_address = NULL;
+ unsigned int uio_map_size = 0;
+ unsigned int uio_map_p_addr = 0;
+ char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
+ char uio_map_size_str[32];
+ char uio_map_p_addr_str[64];
+ int ret = 0;
+
+ /* compose the file name: root/subdir/filename */
+ memset(uio_sys_root, 0, sizeof(uio_sys_root));
+ memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
+ memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
+ memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
+
+ /* Compose string: /sys/class/uio/uioX */
+ snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
+ FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
+ /* Compose string: maps/mapY */
+ snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
+ FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
+
+ /* Read first (and only) line from file
+ * /sys/class/uio/uioX/maps/mapY/size
+ */
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "size", uio_map_size_str);
+ if (ret)
+ ENET_PMD_ERR("file_read_first_line() failed");
+
+ ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
+ "addr", uio_map_p_addr_str);
+ if (ret)
+ ENET_PMD_ERR("file_read_first_line() failed");
+
+ /* Read mapping size and physical address expressed in hexa(base 16) */
+ uio_map_size = strtol(uio_map_size_str, NULL, 16);
+ uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
+
+ if (uio_map_id == 0) {
+ /* Map the register address in user space when map_id is 0 */
+ mapped_address = mmap(0 /*dynamically choose virtual address */,
+ uio_map_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, 0);
+ } else {
+ /* Map the BD memory in user space */
+ mapped_address = mmap(NULL, uio_map_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
+ }
+
+ if (mapped_address == MAP_FAILED) {
+ ENET_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
+ "uio device id = %d, uio map id = %d", errno,
+ uio_device_fd, uio_device_id, uio_map_id);
+ return 0;
+ }
+
+ /* Save the map size to use it later on for munmap-ing */
+ *map_size = uio_map_size;
+ *map_addr = uio_map_p_addr;
+ ENET_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
+ uio_device_id, uio_map_id, uio_map_size, mapped_address);
+
+ return mapped_address;
+}
+
+int
+config_enetfec_uio(struct enetfec_private *fep)
+{
+ char uio_device_file_name[32];
+ struct uio_job *uio_job = NULL;
+
+ /* Mapping is done only one time */
+ if (count) {
+ printf("Mapping already done, can't map again!\n");
+ return 0;
+ }
+
+ uio_job = &enetfec_uio_job;
+
+ /* Find UIO device created by ENETFEC-UIO kernel driver */
+ memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
+ snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
+ FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
+
+ /* Open device file */
+ uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
+ if (uio_job->uio_fd < 0) {
+ printf("US_UIO: Open Failed\n");
+ exit(1);
+ }
+
+ ENET_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
+ uio_device_file_name, uio_job->uio_fd);
+
+ fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ fep->hw_baseaddr_p = uio_job->map_addr;
+ fep->reg_size = uio_job->map_size;
+
+ fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
+ uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
+ &uio_job->map_size, &uio_job->map_addr);
+ fep->bd_addr_p = uio_job->map_addr;
+ fep->bd_size = uio_job->map_size;
+
+ count++;
+
+ return 0;
+}
diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
new file mode 100644
index 000000000..b220cae9d
--- /dev/null
+++ b/drivers/net/enetfec/enet_uio.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include "enet_ethdev.h"
+
+/* Prefix path to sysfs directory where UIO device attributes are exported.
+ * Path for UIO device X is /sys/class/uio/uioX
+ */
+#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
+
+/* Subfolder in sysfs where mapping attributes are exported
+ * for each UIO device. Path for mapping Y for device X is:
+ * /sys/class/uio/uioX/maps/mapY
+ */
+#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
+
+/* Name of UIO device file prefix. Each UIO device will have a device file
+ * /dev/uioX, where X is the minor device number.
+ */
+#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
+
+/* Maximum length for the name of an UIO device file.
+ * Device file name format is: /dev/uioX.
+ */
+#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
+
+/* Maximum length for the name of an attribute file for an UIO device.
+ * Attribute files are exported in sysfs and have the name formatted as:
+ * /sys/class/uio/uioX/<attribute_file_name>
+ */
+#define FEC_UIO_MAX_ATTR_FILE_NAME 100
+
+/* The id for the mapping used to export ENETFEC registers and BD memory to
+ * user space through UIO device.
+ */
+#define FEC_UIO_REG_MAP_ID 0
+#define FEC_UIO_BD_MAP_ID 1
+
+#define MAP_PAGE_SIZE 4096
+
+struct uio_job {
+ uint32_t fec_id;
+ int uio_fd;
+ void *bd_start_addr;
+ void *register_base_addr;
+ int map_size;
+ uint64_t map_addr;
+ int uio_minor_number;
+};
+
+int config_enetfec_uio(struct enetfec_private *fep);
+void enetfec_uio_init(void);
+void enetfec_cleanup(void);
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 252bf8330..05183bd44 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -8,7 +8,8 @@ endif
deps += ['common_dpaax']
-sources = files('enet_ethdev.c')
+sources = files('enet_ethdev.c',
+ 'enet_uio.c')
if cc.has_argument('-Wno-pointer-arith')
cflags += '-Wno-pointer-arith'
--
2.17.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-04-30 4:34 ` [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce " Apeksha Gupta
2021-04-30 4:34 ` [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added Apeksha Gupta
@ 2021-04-30 4:34 ` Apeksha Gupta
2021-06-08 13:38 ` Andrew Rybchenko
2021-07-04 6:46 ` Sachin Saxena (OSS)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support Apeksha Gupta
` (2 subsequent siblings)
5 siblings, 2 replies; 17+ messages in thread
From: Apeksha Gupta @ 2021-04-30 4:34 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena, Apeksha Gupta
This patch added RX/TX queue configuration setup operations.
On packet Rx the respective BD Ring status bit is set which is then
used for packet processing.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
drivers/net/enetfec/enet_ethdev.c | 223 ++++++++++++++++++++++++++++++
1 file changed, 223 insertions(+)
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 5f4f2cf9e..b4816179a 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -48,6 +48,19 @@
int enetfec_logtype_pmd;
+/* Supported Rx offloads */
+static uint64_t dev_rx_offloads_sup =
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_CHECKSUM;
+
+static uint64_t dev_tx_offloads_sup =
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM;
+
/*
* This function is called to start or restart the FEC during a link
* change, transmit timeout or to reconfigure the FEC. The network
@@ -176,8 +189,218 @@ enetfec_eth_open(struct rte_eth_dev *dev)
return 0;
}
+
+static int
+enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
+{
+ ENET_PMD_INFO("%s: returning zero ", __func__);
+ return 0;
+}
+
+static int
+enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = ENET_MAX_Q;
+ dev_info->max_tx_queues = ENET_MAX_Q;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->rx_offload_capa = dev_rx_offloads_sup;
+ dev_info->tx_offload_capa = dev_tx_offloads_sup;
+
+ return 0;
+}
+
+static const unsigned short offset_des_active_rxq[] = {
+ ENET_RDAR_0, ENET_RDAR_1, ENET_RDAR_2
+};
+
+static const unsigned short offset_des_active_txq[] = {
+ ENET_TDAR_0, ENET_TDAR_1, ENET_TDAR_2
+};
+
+static int
+enetfec_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ __rte_unused unsigned int socket_id,
+ __rte_unused const struct rte_eth_txconf *tx_conf)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bdp, *bd_base;
+ struct enetfec_priv_tx_q *txq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* allocate transmit queue */
+ txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
+ if (!txq) {
+ ENET_PMD_ERR("transmit queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_desc > MAX_TX_BD_RING_SIZE) {
+ nb_desc = MAX_TX_BD_RING_SIZE;
+ ENET_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
+ }
+ txq->bd.ring_size = nb_desc;
+ fep->total_tx_ring_size += txq->bd.ring_size;
+ fep->tx_queues[queue_idx] = txq;
+
+ rte_write32(fep->bd_addr_p_t[queue_idx],
+ fep->hw_baseaddr_v + ENET_TD_START(queue_idx));
+
+ /* Set transmit descriptor base. */
+ txq = fep->tx_queues[queue_idx];
+ txq->fep = fep;
+ size = dsize * txq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
+ txq->bd.que_id = queue_idx;
+ txq->bd.base = bd_base;
+ txq->bd.cur = bd_base;
+ txq->bd.d_size = dsize;
+ txq->bd.d_size_log2 = dsize_log2;
+ txq->bd.active_reg_desc =
+ fep->hw_baseaddr_v + offset_des_active_txq[queue_idx];
+ bd_base = (struct bufdesc *)(((void *)bd_base) + size);
+ txq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
+ bdp = txq->bd.base;
+ bdp = txq->bd.cur;
+
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+ if (txq->tx_mbuf[i]) {
+ rte_pktmbuf_free(txq->tx_mbuf[i]);
+ txq->tx_mbuf[i] = NULL;
+ }
+ rte_write32(rte_cpu_to_le_32(0), &bdp->bd_bufaddr);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &txq->bd);
+ rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ txq->dirty_tx = bdp;
+ dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
+ return 0;
+}
+
+static int
+enetfec_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_rx_desc,
+ __rte_unused unsigned int socket_id,
+ __rte_unused const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+ struct bufdesc *bd_base;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ unsigned int size;
+ unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
+ sizeof(struct bufdesc);
+ unsigned int dsize_log2 = fls64(dsize);
+
+ /* allocate receive queue */
+ rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
+ if (!rxq) {
+ ENET_PMD_ERR("receive queue allocation failed");
+ return -ENOMEM;
+ }
+
+ if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
+ nb_rx_desc = MAX_RX_BD_RING_SIZE;
+ ENET_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
+ }
+
+ rxq->bd.ring_size = nb_rx_desc;
+ fep->total_rx_ring_size += rxq->bd.ring_size;
+ fep->rx_queues[queue_idx] = rxq;
+
+ rte_write32(fep->bd_addr_p_r[queue_idx],
+ fep->hw_baseaddr_v + ENET_RD_START(queue_idx));
+ rte_write32(PKT_MAX_BUF_SIZE,
+ fep->hw_baseaddr_v + ENET_MRB_SIZE(queue_idx));
+
+ /* Set receive descriptor base. */
+ rxq = fep->rx_queues[queue_idx];
+ rxq->pool = mb_pool;
+ size = dsize * rxq->bd.ring_size;
+ bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
+ rxq->bd.que_id = queue_idx;
+ rxq->bd.base = bd_base;
+ rxq->bd.cur = bd_base;
+ rxq->bd.d_size = dsize;
+ rxq->bd.d_size_log2 = dsize_log2;
+ rxq->bd.active_reg_desc =
+ fep->hw_baseaddr_v + offset_des_active_rxq[queue_idx];
+ bd_base = (struct bufdesc *)(((void *)bd_base) + size);
+ rxq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
+
+ rxq->fep = fep;
+ bdp = rxq->bd.base;
+ rxq->bd.cur = bdp;
+
+ for (i = 0; i < nb_rx_desc; i++) {
+ /* Initialize Rx buffers from pktmbuf pool */
+ struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
+ if (mbuf == NULL) {
+ ENET_PMD_ERR("mbuf failed\n");
+ goto err_alloc;
+ }
+
+ /* Get the virtual address & physical address */
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+
+ rxq->rx_mbuf[i] = mbuf;
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Initialize the receive buffer descriptors. */
+ bdp = rxq->bd.cur;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ /* Initialize the BD for every fragment in the page. */
+ if (rte_read32(&bdp->bd_bufaddr))
+ rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
+ &bdp->bd_sc);
+ else
+ rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
+
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+
+ /* Set the last buffer to wrap */
+ bdp = enet_get_prevdesc(bdp, &rxq->bd);
+ rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
+ rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
+ dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
+ rte_write32(0x0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
+ return 0;
+
+err_alloc:
+ for (i = 0; i < nb_rx_desc; i++) {
+ rte_pktmbuf_free(rxq->rx_mbuf[i]);
+ rxq->rx_mbuf[i] = NULL;
+ }
+ rte_free(rxq);
+ return -1;
+}
+
static const struct eth_dev_ops ops = {
.dev_start = enetfec_eth_open,
+ .dev_configure = enetfec_eth_configure,
+ .dev_infos_get = enetfec_eth_info,
+ .rx_queue_setup = enetfec_rx_queue_setup,
+ .tx_queue_setup = enetfec_tx_queue_setup,
};
static int
--
2.17.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (2 preceding siblings ...)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration Apeksha Gupta
@ 2021-04-30 4:34 ` Apeksha Gupta
2021-06-08 13:42 ` Andrew Rybchenko
2021-07-05 8:48 ` [dpdk-dev] " Sachin Saxena (OSS)
2021-05-04 15:40 ` [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Ferruh Yigit
2021-07-04 2:55 ` Sachin Saxena (OSS)
5 siblings, 2 replies; 17+ messages in thread
From: Apeksha Gupta @ 2021-04-30 4:34 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena, Apeksha Gupta
This patch supported checksum offloads and add burst enqueue and
dequeue operations to the enetfec PMD.
Loopback mode is added, compile time flag 'ENETFEC_LOOPBACK' is
used to enable this feature. By default loopback mode is disabled.
Basic features added like promiscuous enable, basic stats.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
---
doc/guides/nics/enetfec.rst | 4 +
doc/guides/nics/features/enetfec.ini | 5 +
drivers/net/enetfec/enet_ethdev.c | 212 +++++++++++-
drivers/net/enetfec/enet_rxtx.c | 499 +++++++++++++++++++++++++++
4 files changed, 719 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/enetfec/enet_rxtx.c
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 10f495fb9..adbb52392 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -75,6 +75,10 @@ ENETFEC driver.
ENETFEC Features
~~~~~~~~~~~~~~~~~
+- Basic stats
+- Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
- ARMv8
Supported ENETFEC SoCs
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 570069798..fcc217773 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -4,5 +4,10 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Basic stats = Y
+Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
ARMv8 = Y
Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index b4816179a..ca2cf929f 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -46,6 +46,9 @@
#define ENET_ENET_OPD_V 0xFFF0
#define ENET_MDIO_PM_TIMEOUT 100 /* ms */
+/* Extended buffer descriptor */
+#define ENETFEC_EXTENDED_BD 0
+
int enetfec_logtype_pmd;
/* Supported Rx offloads */
@@ -61,6 +64,50 @@ static uint64_t dev_tx_offloads_sup =
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM;
+static void enet_free_buffers(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i, q;
+ struct rte_mbuf *mbuf;
+ struct bufdesc *bdp;
+ struct enetfec_priv_rx_q *rxq;
+ struct enetfec_priv_tx_q *txq;
+
+ for (q = 0; q < dev->data->nb_rx_queues; q++) {
+ rxq = fep->rx_queues[q];
+ bdp = rxq->bd.base;
+ for (i = 0; i < rxq->bd.ring_size; i++) {
+ mbuf = rxq->rx_mbuf[i];
+ rxq->rx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ }
+ }
+
+ for (q = 0; q < dev->data->nb_tx_queues; q++) {
+ txq = fep->tx_queues[q];
+ bdp = txq->bd.base;
+ for (i = 0; i < txq->bd.ring_size; i++) {
+ mbuf = txq->tx_mbuf[i];
+ txq->tx_mbuf[i] = NULL;
+ if (mbuf)
+ rte_pktmbuf_free(mbuf);
+ }
+ }
+}
+
+static void enet_free_queue(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ unsigned int i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_free(fep->rx_queues[i]);
+}
+
/*
* This function is called to start or restart the FEC during a link
* change, transmit timeout or to reconfigure the FEC. The network
@@ -189,7 +236,6 @@ enetfec_eth_open(struct rte_eth_dev *dev)
return 0;
}
-
static int
enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
{
@@ -395,12 +441,137 @@ enetfec_rx_queue_setup(struct rte_eth_dev *dev,
return -1;
}
+static int
+enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ uint32_t tmp;
+
+ tmp = rte_read32(fep->hw_baseaddr_v + ENET_RCR);
+ tmp |= 0x8;
+ tmp &= ~0x2;
+ rte_write32(tmp, fep->hw_baseaddr_v + ENET_RCR);
+
+ return 0;
+}
+
+static int
+enetfec_eth_link_update(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete)
+{
+ return 0;
+}
+
+static int
+enetfec_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+ struct rte_eth_stats *eth_stats = &fep->stats;
+
+ if (stats == NULL)
+ return -1;
+
+ memset(stats, 0, sizeof(struct rte_eth_stats));
+
+ stats->ipackets = eth_stats->ipackets;
+ stats->ibytes = eth_stats->ibytes;
+ stats->ierrors = eth_stats->ierrors;
+ stats->opackets = eth_stats->opackets;
+ stats->obytes = eth_stats->obytes;
+ stats->oerrors = eth_stats->oerrors;
+
+ return 0;
+}
+
+static void
+enetfec_stop(__rte_unused struct rte_eth_dev *dev)
+{
+/*TODO*/
+}
+
+static int
+enetfec_eth_close(__rte_unused struct rte_eth_dev *dev)
+{
+ /* phy_stop(ndev->phydev); */
+ enetfec_stop(dev);
+ /* phy_disconnect(ndev->phydev); */
+
+ enet_free_buffers(dev);
+ return 0;
+}
+
+static uint16_t
+enetfec_dummy_xmit_pkts(__rte_unused void *tx_queue,
+ __rte_unused struct rte_mbuf **tx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ return 0;
+}
+
+static uint16_t
+enetfec_dummy_recv_pkts(__rte_unused void *rxq,
+ __rte_unused struct rte_mbuf **rx_pkts,
+ __rte_unused uint16_t nb_pkts)
+{
+ return 0;
+}
+
+static int
+enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
+{
+ dev->rx_pkt_burst = &enetfec_dummy_recv_pkts;
+ dev->tx_pkt_burst = &enetfec_dummy_xmit_pkts;
+
+ return 0;
+}
+
+static int
+enetfec_multicast_enable(struct rte_eth_dev *dev)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ rte_write32(0xffffffff, fep->hw_baseaddr_v + ENET_GAUR);
+ rte_write32(0xffffffff, fep->hw_baseaddr_v + ENET_GALR);
+ dev->data->all_multicast = 1;
+
+ rte_write32(0x04400002, fep->hw_baseaddr_v + ENET_GAUR);
+ rte_write32(0x10800049, fep->hw_baseaddr_v + ENET_GALR);
+
+ return 0;
+}
+
+/* Set a MAC change in hardware. */
+static int
+enetfec_set_mac_address(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct enetfec_private *fep = dev->data->dev_private;
+
+ writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
+ (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
+ fep->hw_baseaddr_v + ENET_PALR);
+ writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
+ fep->hw_baseaddr_v + ENET_PAUR);
+
+ rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
+
+ return 0;
+}
+
static const struct eth_dev_ops ops = {
.dev_start = enetfec_eth_open,
+ .dev_stop = enetfec_eth_stop,
+ .dev_close = enetfec_eth_close,
.dev_configure = enetfec_eth_configure,
.dev_infos_get = enetfec_eth_info,
.rx_queue_setup = enetfec_rx_queue_setup,
.tx_queue_setup = enetfec_tx_queue_setup,
+ .link_update = enetfec_eth_link_update,
+ .mac_addr_set = enetfec_set_mac_address,
+ .stats_get = enetfec_stats_get,
+ .promiscuous_enable = enetfec_promiscuous_enable,
+ .allmulticast_enable = enetfec_multicast_enable
};
static int
@@ -434,6 +605,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
struct enetfec_private *fep;
const char *name;
int rc = -1;
+ struct rte_ether_addr macaddr;
int i;
unsigned int bdsize;
@@ -474,12 +646,37 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
fep->bd_addr_p = fep->bd_addr_p + bdsize;
}
+ /* Copy the station address into the dev structure, */
+ dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ENET_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
+ ETHER_ADDR_LEN);
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ /* TODO get mac address from device tree or get random addr.
+ * Currently setting default as 1,1,1,1,1,1
+ */
+ macaddr.addr_bytes[0] = 1;
+ macaddr.addr_bytes[1] = 1;
+ macaddr.addr_bytes[2] = 1;
+ macaddr.addr_bytes[3] = 1;
+ macaddr.addr_bytes[4] = 1;
+ macaddr.addr_bytes[5] = 1;
+
+ enetfec_set_mac_address(dev, &macaddr);
+ /* enable the extended buffer mode */
+ fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
+
rc = enetfec_eth_init(dev);
if (rc)
goto failed_init;
return 0;
failed_init:
ENET_PMD_ERR("Failed to init");
+err:
+ rte_eth_dev_release_port(dev);
return rc;
}
@@ -487,15 +684,28 @@ static int
pmd_enetfec_remove(struct rte_vdev_device *vdev)
{
struct rte_eth_dev *eth_dev = NULL;
+ struct enetfec_private *fep;
+ struct enetfec_priv_rx_q *rxq;
/* find the ethdev entry */
eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
if (!eth_dev)
return -ENODEV;
+ fep = eth_dev->data->dev_private;
+ /* Free descriptor base of first RX queue as it was configured
+ * first in enetfec_eth_init().
+ */
+ rxq = fep->rx_queues[0];
+ rte_free(rxq->bd.base);
+ enet_free_queue(eth_dev);
+
+ enetfec_eth_stop(eth_dev);
rte_eth_dev_release_port(eth_dev);
ENET_PMD_INFO("Closing sw device\n");
+ munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
return 0;
}
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 000000000..1b9b86c35
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,499 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK 0
+#define ENETFEC_DUMP 0
+
+static volatile bool lb_quit;
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENET_PMD_DEBUG("TX ring dump\n");
+ ENET_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = txq->bd.base;
+ do {
+ ENET_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == txq->bd.cur ? 'S' : ' ',
+ bdp == txq->dirty_tx ? 'H' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ txq->tx_mbuf[index]);
+ bdp = enet_get_nextdesc(bdp, &txq->bd);
+ index++;
+ } while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+ struct bufdesc *bdp;
+ int index = 0;
+
+ ENET_PMD_DEBUG("RX ring dump\n");
+ ENET_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+ bdp = rxq->bd.base;
+ do {
+ ENET_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+ index,
+ bdp == rxq->bd.cur ? 'S' : ' ',
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
+ rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
+ rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
+ rxq->rx_mbuf[index]);
+ rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+ rxq->rx_mbuf[index]->pkt_len);
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+ index++;
+ } while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static void fec_signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+ printf("\n\n %s: Signal %d received, preparing to exit...\n",
+ __func__, signum);
+ lb_quit = true;
+ }
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+ struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len = 0;
+ int index_r = 0, index_t = 0;
+ u8 *data;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ unsigned int i;
+ struct enetfec_private *fep;
+ struct enetfec_priv_tx_q *txq;
+ fep = rxq->fep->dev->data->dev_private;
+ txq = fep->tx_queues[0];
+
+ pool = rxq->pool;
+ rx_bdp = rxq->bd.cur;
+ tx_bdp = txq->bd.cur;
+
+ signal(SIGTSTP, fec_signal_handler);
+ while (!lb_quit) {
+chk_again:
+ status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+ if (status & RX_BD_EMPTY) {
+ if (!lb_quit)
+ goto chk_again;
+ rxq->bd.cur = rx_bdp;
+ txq->bd.cur = tx_bdp;
+ return;
+ }
+
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ ENET_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENET_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENET_PMD_ERR("rcv is not +last\n");
+ }
+ /* CRC Error */
+ if (status & RX_BD_CR)
+ ENET_PMD_ERR("rx_crc_errors\n");
+
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENET_PMD_ERR("rx_frame_error\n");
+ mbuf = NULL;
+ goto rx_processing_done;
+ }
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(!new_mbuf)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Process the incoming frame. */
+ pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+ /* shows data with respect to the data_off field. */
+ index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index_r];
+
+ /* adjust pkt_len */
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+ if (rxq->fep->quirks & QUIRK_RACC)
+ rte_pktmbuf_adj(mbuf, 2);
+
+ /* Replace Buffer in BD */
+ rxq->rx_mbuf[index_r] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+
+ /* TX begins: First clean the ring then process packet */
+ index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+ if (status & TX_BD_READY)
+ stats->oerrors++;
+ break;
+ if (txq->tx_mbuf[index_t]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+ txq->tx_mbuf[index_t] = NULL;
+ }
+
+ if (mbuf == NULL)
+ continue;
+
+ /* Fill in a Tx ring entry */
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ pkt_len = rte_pktmbuf_pkt_len(mbuf);
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+
+ for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &tx_bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+
+ /* Save mbuf pointer to clean later */
+ txq->tx_mbuf[index_t] = mbuf;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+ }
+}
+#endif
+
+/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
+ * When update through the ring, just set the empty indicator.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mempool *pool;
+ struct bufdesc *bdp;
+ struct rte_mbuf *mbuf, *new_mbuf = NULL;
+ unsigned short status;
+ unsigned short pkt_len;
+ int pkt_received = 0, index = 0;
+ void *data, *mbuf_data;
+ uint16_t vlan_tag;
+ struct bufdesc_ex *ebdp = NULL;
+ bool vlan_packet_rcvd = false;
+ struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+ struct rte_eth_stats *stats = &rxq->fep->stats;
+ struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+ uint64_t rx_offloads = eth_conf->rxmode.offloads;
+ pool = rxq->pool;
+ bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+ enetfec_lb_rxtx(rxq1);
+#endif
+ /* Process the incoming packet */
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ while (!(status & RX_BD_EMPTY)) {
+ if (pkt_received >= nb_pkts)
+ break;
+
+ new_mbuf = rte_pktmbuf_alloc(pool);
+ if (unlikely(!new_mbuf)) {
+ stats->ierrors++;
+ break;
+ }
+ /* Check for errors. */
+ status ^= RX_BD_LAST;
+ if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+ RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+ RX_BD_TR)) {
+ stats->ierrors++;
+ if (status & RX_BD_OV) {
+ /* FIFO overrun */
+ /* enet_dump_rx(rxq); */
+ ENET_PMD_ERR("rx_fifo_error\n");
+ goto rx_processing_done;
+ }
+ if (status & (RX_BD_LG | RX_BD_SH
+ | RX_BD_LAST)) {
+ /* Frame too long or too short. */
+ ENET_PMD_ERR("rx_length_error\n");
+ if (status & RX_BD_LAST)
+ ENET_PMD_ERR("rcv is not +last\n");
+ }
+ if (status & RX_BD_CR) { /* CRC Error */
+ ENET_PMD_ERR("rx_crc_errors\n");
+ }
+ /* Report late collisions as a frame error. */
+ if (status & (RX_BD_NO | RX_BD_TR))
+ ENET_PMD_ERR("rx_frame_error\n");
+ goto rx_processing_done;
+ }
+
+ /* Process the incoming frame. */
+ stats->ipackets++;
+ pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+ stats->ibytes += pkt_len;
+
+ /* shows data with respect to the data_off field. */
+ index = enet_get_bd_index(bdp, &rxq->bd);
+ mbuf = rxq->rx_mbuf[index];
+
+ data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+ mbuf_data = data;
+ rte_prefetch0(data);
+ rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+ pkt_len - 4);
+
+ if (rxq->fep->quirks & QUIRK_RACC)
+ data = rte_pktmbuf_adj(mbuf, 2);
+
+ rx_pkts[pkt_received] = mbuf;
+ pkt_received++;
+
+ /* Extract the enhanced buffer descriptor */
+ ebdp = NULL;
+ if (rxq->fep->bufdesc_ex)
+ ebdp = (struct bufdesc_ex *)bdp;
+
+ /* If this is a VLAN packet remove the VLAN Tag */
+ vlan_packet_rcvd = false;
+ if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+ rxq->fep->bufdesc_ex &&
+ (rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(BD_ENET_RX_VLAN))) {
+ /* Push and remove the vlan tag */
+ struct rte_vlan_hdr *vlan_header =
+ (struct rte_vlan_hdr *)(data + ETH_HLEN);
+ vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+ vlan_packet_rcvd = true;
+ memmove(mbuf_data + VLAN_HLEN, data, ETH_ALEN * 2);
+ rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+ }
+
+ /* Get receive timestamp from the mbuf */
+ if (rxq->fep->hw_ts_rx_en && rxq->fep->bufdesc_ex)
+ mbuf->timestamp =
+ rte_le_to_cpu_32(rte_read32(&ebdp->ts));
+
+ if (rxq->fep->bufdesc_ex &&
+ (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+ if (!(rte_read32(&ebdp->bd_esc) &
+ rte_cpu_to_le_32(RX_FLAG_CSUM_ERR))) {
+ /* don't check it */
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+ } else {
+ mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+ }
+ }
+
+ /* Handle received VLAN packets */
+ if (vlan_packet_rcvd) {
+ mbuf->vlan_tci = vlan_tag;
+ mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ }
+
+ rxq->rx_mbuf[index] = new_mbuf;
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+ &bdp->bd_bufaddr);
+rx_processing_done:
+ /* when rx_processing_done clear the status flags
+ * for this buffer
+ */
+ status &= ~RX_BD_STATS;
+
+ /* Mark the buffer empty */
+ status |= RX_BD_EMPTY;
+
+ if (rxq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+ rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+ &ebdp->bd_esc);
+ rte_write32(0, &ebdp->bd_prot);
+ rte_write32(0, &ebdp->bd_bdu);
+ }
+
+ /* Make sure the updates to rest of the descriptor are
+ * performed before transferring ownership.
+ */
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Update BD pointer to next entry */
+ bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+ /* Doing this here will keep the FEC running while we process
+ * incoming frames.
+ */
+ rte_write32(0, rxq->bd.active_reg_desc);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+ }
+ rxq->bd.cur = bdp;
+ return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct enetfec_priv_tx_q *txq =
+ (struct enetfec_priv_tx_q *)tx_queue;
+ struct rte_eth_stats *stats = &txq->fep->stats;
+ struct bufdesc *bdp, *last_bdp;
+ struct rte_mbuf *mbuf;
+ unsigned short status;
+ unsigned short buflen;
+ unsigned int index, estatus = 0;
+ unsigned int i, pkt_transmitted = 0;
+ u8 *data;
+ int tx_st = 1;
+
+ while (tx_st) {
+ if (pkt_transmitted >= nb_pkts) {
+ tx_st = 0;
+ break;
+ }
+ bdp = txq->bd.cur;
+ /* First clean the ring */
+ index = enet_get_bd_index(bdp, &txq->bd);
+ status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+ if (status & TX_BD_READY) {
+ stats->oerrors++;
+ break;
+ }
+ if (txq->tx_mbuf[index]) {
+ rte_pktmbuf_free(txq->tx_mbuf[index]);
+ txq->tx_mbuf[index] = NULL;
+ }
+
+ mbuf = *(tx_pkts);
+ tx_pkts++;
+
+ /* Fill in a Tx ring entry */
+ last_bdp = bdp;
+ status &= ~TX_BD_STATS;
+
+ /* Set buffer length and buffer pointer */
+ buflen = rte_pktmbuf_pkt_len(mbuf);
+ stats->opackets++;
+ stats->obytes += buflen;
+
+ if (mbuf->nb_segs > 1) {
+ ENET_PMD_DEBUG("SG not supported");
+ return -1;
+ }
+ status |= (TX_BD_LAST);
+ data = rte_pktmbuf_mtod(mbuf, void *);
+ for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+ dcbf(data + i);
+
+ rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+ &bdp->bd_bufaddr);
+ rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+ if (txq->fep->bufdesc_ex) {
+ struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+ if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
+ estatus |= TX_BD_PINS | TX_BD_IINS;
+
+ rte_write32(0, &ebdp->bd_bdu);
+ rte_write32(rte_cpu_to_le_32(estatus),
+ &ebdp->bd_esc);
+ }
+
+ index = enet_get_bd_index(last_bdp, &txq->bd);
+ /* Save mbuf pointer */
+ txq->tx_mbuf[index] = mbuf;
+
+ /* Make sure the updates to rest of the descriptor are performed
+ * before transferring ownership.
+ */
+ status |= (TX_BD_READY | TX_BD_TC);
+ rte_wmb();
+ rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+ /* Trigger transmission start */
+ rte_write32(0, txq->bd.active_reg_desc);
+ pkt_transmitted++;
+
+ /* If this was the last BD in the ring, start at the
+ * beginning again.
+ */
+ bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+ /* Make sure the update to bdp and tx_skbuff are performed
+ * before txq->bd.cur.
+ */
+ txq->bd.cur = bdp;
+ }
+ return nb_pkts;
+}
--
2.17.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (3 preceding siblings ...)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support Apeksha Gupta
@ 2021-05-04 15:40 ` Ferruh Yigit
2021-07-04 2:55 ` Sachin Saxena (OSS)
5 siblings, 0 replies; 17+ messages in thread
From: Ferruh Yigit @ 2021-05-04 15:40 UTC (permalink / raw)
To: Apeksha Gupta; +Cc: dev, hemant.agrawal, sachin.saxena
On 4/30/2021 5:34 AM, Apeksha Gupta wrote:
> This patch series introduce the enetfec ethernet driver,
> ENET fec (Fast Ethernet Controller) is a network poll mode driver for
> the inbuilt NIC found in the NXP imx8mmevk Soc.
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO so that user space directly communicate with a
> hardware device. UIO interface mmap the Register & BD memory in DPDK
> which is allocated in kernel and this gives access to non-cacheble
> memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
>
>
> Apeksha Gupta (4):
> drivers/net/enetfec: Introduce NXP ENETFEC driver
> drivers/net/enetfec: UIO support added
> drivers/net/enetfec: queue configuration
> drivers/net/enetfec: add enqueue and dequeue support
>
Hi Apeksha, Hemant,
It is a little late for v21.05, lets consider this driver for next release.
Meanwhile reviews can continue on it.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce NXP ENETFEC driver
2021-04-30 4:34 ` [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce " Apeksha Gupta
@ 2021-06-08 13:10 ` Andrew Rybchenko
2021-07-02 13:55 ` David Marchand
2021-07-04 2:57 ` Sachin Saxena (OSS)
1 sibling, 1 reply; 17+ messages in thread
From: Andrew Rybchenko @ 2021-06-08 13:10 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena
On 4/30/21 7:34 AM, Apeksha Gupta wrote:
> ENET fec (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC imx8mmevk.
>
> This patch add skeleton for enetfec driver with probe and
> uintialisation functions
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> doc/guides/nics/enetfec.rst | 121 ++++++++++++++++
> doc/guides/nics/features/enetfec.ini | 8 ++
> doc/guides/nics/index.rst | 1 +
> drivers/net/enetfec/enet_ethdev.c | 89 ++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 203 +++++++++++++++++++++++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 ++++
> drivers/net/enetfec/meson.build | 15 ++
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 1 +
> 9 files changed, 472 insertions(+)
> create mode 100644 doc/guides/nics/enetfec.rst
> create mode 100644 doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c
> create mode 100644 drivers/net/enetfec/enet_ethdev.h
> create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/meson.build
> create mode 100644 drivers/net/enetfec/version.map
>
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> new file mode 100644
> index 000000000..10f495fb9
> --- /dev/null
> +++ b/doc/guides/nics/enetfec.rst
> @@ -0,0 +1,121 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 NXP
> +
> +ENETFEC Poll Mode Driver
> +========================
> +
> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
> +
> +More information can be found at NXP Official Website
> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> +
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK.
> +
> +Contents summary
> +
> +- ENETFEC overview
> +- ENETFEC features
> +- Supported ENETFEC SoCs
> +- Prerequisites
> +- Driver compilation and testing
> +- Limitations
> +
> +ENETFEC Overview
> +~~~~~~~~~~~~~~~~
> +The i.MX 8M Mini Media Applications Processor is built to achieve both high
> +performance and low power consumption. ENETFEC is a hardware programmable
> +packet forwarding engine to provide high performance Ethernet interface.
> +The diagram below shows a system level overview of ENETFEC:
> +
> + ====================================================+===============
> + US +-----------------------------------------+ | Kernel Space
> + | | |
> + | ENETFEC Ethernet Driver | |
> + +-----------------------------------------+ |
> + ^ | |
> + ENETFEC RXQ | | TXQ |
> + PMD | | |
> + | v | +----------+
> + +-------------+ | | fec-uio |
> + | net_enetfec | | +----------+
> + +-------------+ |
> + ^ | |
> + TXQ | | RXQ |
> + | | |
> + | v |
> + ===================================================+===============
> + +----------------------------------------+
> + | | HW
> + | i.MX 8M MINI EVK |
> + | +-----+ |
> + | | MAC | |
> + +---------------+-----+------------------+
> + | PHY |
> + +-----+
> +
> +ENETFEC ethernet driver is traditional DPDK PMD driver running in the userspace.
ethernet -> Ethernet
> +The MAC and PHY are the hardware blocks. 'fec-uio' is the uio driver, enetfec PMD
> +uses uio interface to interact with kernel for PHY initialisation and for mapping
uio -> UIO ?
> +the allocated memory of register & BD in kernel with DPDK which gives access to
> +non-cacheble memory for BD. net_enetfec is logical ethernet interface, created by
cacheble -> cacheable
ethernet -> Ethernet
> +ENETFEC driver.
> +
> +- ENETFEC driver registers the device in virtual device driver.
> +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> +- The probe function will set the basic device registers and also setups BD rings.
> +- On packet Rx the respective BD Ring status bit is set which is then used for
> + packet processing.
> +- Then Tx is done first followed by Rx via logical interfaces.
> +
> +ENETFEC Features
> +~~~~~~~~~~~~~~~~~
> +
> +- ARMv8
> +
> +Supported ENETFEC SoCs
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- i.MX 8M Mini
> +
> +Prerequisites
> +~~~~~~~~~~~~~
> +
> +There are three main pre-requisites for executing ENETfec PMD on a i.MX
I see "enetfec", "ENETfec" and "ENETFEC" referencing driver or
PMD. Please, be consistent.
> +compatible board:
> +
> +1. **ARM 64 Tool Chain**
> +
> + For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> +
> +2. **Linux Kernel**
> +
> + It can be obtained from `NXP's bitbucket: <https://bitbucket.sw.nxp.com/projects/LFAC/repos/linux-nxp/commits?until=refs%2Fheads%2Fnet%2Ffec-uio&merges=include>`_.
> +
> +3. **Rootfile system**
> +
> + Any *aarch64* supporting filesystem can be used. For example,
> + Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be obtained
> + from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> +
> +4. The ethernet device will be registered as virtual device, so enetfec has dependency on
ethernet -> Ethernet
> + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
> + run DPDK application.
> +
> +Driver compilation and testing
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Follow instructions available in the document
> +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +to launch **testpmd**
> +
> +Limitations
> +~~~~~~~~~~~
> +
> +- Multi queue is not supported.
> +- Link status is down always.
> +- Single ethernet interface.
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> new file mode 100644
> index 000000000..570069798
> --- /dev/null
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -0,0 +1,8 @@
> +;
> +; Supported features of the 'enetfec' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +ARMv8 = Y
> +Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 799697caf..93b68e701 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -25,6 +25,7 @@ Network Interface Controller Drivers
> e1000em
> ena
> enetc
> + enetfec
> enic
> fm10k
> hinic
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> new file mode 100644
> index 000000000..5fd2dbc2d
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -0,0 +1,89 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +#include <stdio.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_vdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_dev.h>
> +#include <rte_ether.h>
> +#include "enet_pmd_logs.h"
> +#include "enet_ethdev.h"
> +
> +#define ENETFEC_NAME_PMD net_enetfec
> +#define ENET_VDEV_GEM_ID_ARG "intf"
> +#define ENET_CDEV_INVALID_FD -1
> +
> +int enetfec_logtype_pmd;
> +
> +static int
> +enetfec_eth_init(struct rte_eth_dev *dev)
> +{
> + rte_eth_dev_probing_finish(dev);
> + return 0;
> +}
> +
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc = -1;
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
> + ENET_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> +
> + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> + if (dev == NULL)
> + return -ENOMEM;
> +
> + /* setup board info structure */
> + fep = dev->data->dev_private;
> + fep->dev = dev;
> + rc = enetfec_eth_init(dev);
> + if (rc)
DPDK coding style requires excplicit check vs 0 and NULL [1].
[1] https://doc.dpdk.org/guides/contributing/coding_style.html#null-pointers
> + goto failed_init;
> + return 0;
> +failed_init:
> + ENET_PMD_ERR("Failed to init");
> + return rc;
> +}
> +
> +static int
> +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *eth_dev = NULL;
> +
> + /* find the ethdev entry */
> + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> + if (!eth_dev)
eth_dev == NULL
> + return -ENODEV;
> +
> + rte_eth_dev_release_port(eth_dev);
> +
> + ENET_PMD_INFO("Closing sw device\n");
Unnecessary \n since it is added by the macro.
> + return 0;
> +}
> +
> +static
> +struct rte_vdev_driver pmd_enetfec_drv = {
> + .probe = pmd_enetfec_probe,
> + .remove = pmd_enetfec_remove,
> +};
> +
> +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> +RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENET_VDEV_GEM_ID_ARG "=<int>");
> +
> +RTE_INIT(enetfec_pmd_init_log)
> +{
> + enetfec_logtype_pmd = rte_log_register("pmd.net.enetfec");
> + if (enetfec_logtype_pmd >= 0)
> + rte_log_set_level(enetfec_logtype_pmd, RTE_LOG_NOTICE);
rte_log_register_type_and_pick_level() should be used.
> +}
> diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
> new file mode 100644
> index 000000000..3833a70fc
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.h
> @@ -0,0 +1,203 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#ifndef __ENET_ETHDEV_H__
> +#define __ENET_ETHDEV_H__
> +
> +#include <compat.h>
> +#include <rte_ethdev.h>
> +
> +/* ENET with AVB IP can support maximum 3 rx and tx queues.
> + */
> +#define ENET_MAX_Q 3
> +
> +#define BD_LEN 49152
> +#define ENET_TX_FR_SIZE 2048
> +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> +#define MAX_RX_BD_RING_SIZE 512
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
> +
> +#define PKT_MAX_BUF_SIZE 1984
> +#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
> +#define ETH_ALEN RTE_ETHER_ADDR_LEN
> +#define ETH_HLEN RTE_ETHER_HDR_LEN
> +#define VLAN_HLEN 4
> +
> +
> +struct bufdesc {
> + uint16_t bd_datlen; /* buffer data length */
> + uint16_t bd_sc; /* buffer control & status */
> + uint32_t bd_bufaddr; /* buffer address */
> +};
> +
> +struct bufdesc_ex {
> + struct bufdesc desc;
> + uint32_t bd_esc;
> + uint32_t bd_prot;
> + uint32_t bd_bdu;
> + uint32_t ts;
> + uint16_t res0[4];
> +};
> +
> +struct bufdesc_prop {
> + int que_id;
> + /* Addresses of Tx and Rx buffers */
> + struct bufdesc *base;
> + struct bufdesc *last;
> + struct bufdesc *cur;
> + void __iomem *active_reg_desc;
> + uint64_t descr_baseaddr_p;
> + unsigned short ring_size;
> + unsigned char d_size;
> + unsigned char d_size_log2;
> +};
> +
> +struct enetfec_priv_tx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
> + struct bufdesc *dirty_tx;
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +struct enetfec_priv_rx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
Standalone FEC is used in DPDK ask Forward Error correction.
I think it is always better to use in the driver together
with ENET prefix.
> + * descriptor base is x_bd_base. Currently available buffer are x_cur
> + * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
> + * that is sent by the controller.
> + * The tx_cur and dirty_tx are same in completely full and empty
> + * conditions. Actual condition is determine by empty & ready bits.
> + */
> +struct enetfec_private {
> + struct rte_eth_dev *dev;
> + struct rte_eth_stats stats;
> + struct rte_mempool *pool;
> +
> + struct enetfec_priv_rx_q *rx_queues[ENET_MAX_Q];
> + struct enetfec_priv_tx_q *tx_queues[ENET_MAX_Q];
> + uint16_t max_rx_queues;
> + uint16_t max_tx_queues;
> +
> + unsigned int total_tx_ring_size;
> + unsigned int total_rx_ring_size;
> +
> + bool bufdesc_ex;
> + unsigned int tx_align;
> + unsigned int rx_align;
> + int full_duplex;
> + unsigned int phy_speed;
> + u_int32_t quirks;
> + int flag_csum;
> + int flag_pause;
> + int flag_wol;
> + bool rgmii_txc_delay;
> + bool rgmii_rxc_delay;
> + int link;
> + void *hw_baseaddr_v;
> + uint64_t hw_baseaddr_p;
> + void *bd_addr_v;
> + uint64_t bd_addr_p;
> + uint64_t bd_addr_p_r[ENET_MAX_Q];
> + uint64_t bd_addr_p_t[ENET_MAX_Q];
> + void *dma_baseaddr_r[ENET_MAX_Q];
> + void *dma_baseaddr_t[ENET_MAX_Q];
> + uint64_t cbus_size;
> + unsigned int reg_size;
> + unsigned int bd_size;
> + int hw_ts_rx_en;
> + int hw_ts_tx_en;
> +};
> +
> +#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
> +#define readl(p) rte_read32(p)
> +
> +static __always_inline
> +void __read_once_size(volatile void *p, void *res, int size)
Please, move void to the previous line to be consistent
with return type placement when function is implemented.
> +{
> + switch (size) {
> + case 1:
> + *(__u8 *)res = *(volatile __u8 *)p;
> + break;
> + case 2:
> + *(__u16 *)res = *(volatile __u16 *)p;
> + break;
> + case 4:
> + *(__u32 *)res = *(volatile __u32 *)p;
> + break;
> + case 8:
> + *(__u64 *)res = *(volatile __u64 *)p;
> + break;
> + default:
> + break;
> + }
> +}
> +
> +#define __READ_ONCE(x)\
> +({\
> + union { typeof(x) __val; char __c[1]; } __u;\
> + __read_once_size(&(x), __u.__c, sizeof(x));\
> + __u.__val;\
> +})
> +#ifndef READ_ONCE
> +#define READ_ONCE(x) __READ_ONCE(x)
> +#endif
> +
> +static inline struct
> +bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
Really strange line split. May be:
static inline struct bufdesc *
enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd)
> +
Extra empty line?
> + struct bufdesc_prop *bd)
> +{
> + return (bdp >= bd->last) ? bd->base
> + : (struct bufdesc *)(((void *)bdp) + bd->d_size);
> +}
> +
> +static inline struct
> +bufdesc *enet_get_prevdesc(struct bufdesc *bdp,
Really strange line split (same as above)
> + struct bufdesc_prop *bd)
> +{
> + return (bdp <= bd->base) ? bd->last
> + : (struct bufdesc *)(((void *)bdp) - bd->d_size);
> +}
> +
> +static inline int
> +enet_get_bd_index(struct bufdesc *bdp,
> + struct bufdesc_prop *bd)
This one LGTM, but I guess bd could fit into previous line.
> +{
> + return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> +}
> +
> +static inline phys_addr_t enetfec_mem_vtop(uint64_t vaddr)
when function is implemented, function name is at postion 0
in above function. Please, be consistnet and follow it here
and in the next function.
> +{
> + const struct rte_memseg *memseg;
> + memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
> + if (memseg)
> + return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
> + return (size_t)NULL;
> +}
> +
> +static inline int fls64(unsigned long word)
> +{
> + return (64 - __builtin_clzl(word)) - 1;
> +}
> +
> +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts);
> +uint16_t
> +enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
function name is on the same line as return time above,
I think it would be good to be consistent here as well.
> +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> + struct bufdesc_prop *bd);
> +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> + struct rte_mbuf *mbuf);
> +
> +#endif /*__FEC_ETHDEV_H__*/
> diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
> new file mode 100644
> index 000000000..ff8daa359
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_pmd_logs.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#ifndef _ENET_LOGS_H_
> +#define _ENET_LOGS_H_
> +
> +extern int enetfec_logtype_pmd;
> +
> +/* PMD related logs */
> +#define ENET_PMD_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "fec_net: %s()" \
It looks like prefix "fec_net" is inconsistent with PMD name
> + fmt "\n", __func__, ##args)
There is no space betwen ## and args below. Please, be
consistent.
> +
> +#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
> +
> +#define ENET_PMD_DEBUG(fmt, args...) \
> + ENET_PMD_LOG(DEBUG, fmt, ## args)
> +#define ENET_PMD_ERR(fmt, args...) \
> + ENET_PMD_LOG(ERR, fmt, ## args)
> +#define ENET_PMD_INFO(fmt, args...) \
> + ENET_PMD_LOG(INFO, fmt, ## args)
> +
> +#define ENET_PMD_WARN(fmt, args...) \
> + ENET_PMD_LOG(WARNING, fmt, ## args)
> +
> +/* DP Logs, toggled out at compile time if level lower than current level */
> +#define ENET_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, fmt, ## args)
> +
> +#endif /* _ENET_LOGS_H_ */
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> new file mode 100644
> index 000000000..252bf8330
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
> +endif
> +
> +deps += ['common_dpaax']
> +
> +sources = files('enet_ethdev.c')
> +
> +if cc.has_argument('-Wno-pointer-arith')
> + cflags += '-Wno-pointer-arith'
> +endif
> diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
> new file mode 100644
> index 000000000..6e4fb220a
> --- /dev/null
> +++ b/drivers/net/enetfec/version.map
> @@ -0,0 +1,3 @@
> +DPDK_21 {
> + local: *;
> +};
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index c8b5ce298..c1307a3a6 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -18,6 +18,7 @@ drivers = [
> 'e1000',
> 'ena',
> 'enetc',
> + 'enetfec',
> 'enic',
> 'failsafe',
> 'fm10k',
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added
2021-04-30 4:34 ` [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added Apeksha Gupta
@ 2021-06-08 13:21 ` Andrew Rybchenko
2021-07-04 4:27 ` Sachin Saxena (OSS)
1 sibling, 0 replies; 17+ messages in thread
From: Andrew Rybchenko @ 2021-06-08 13:21 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena
Summary should be "add UIO support".
On 4/30/21 7:34 AM, Apeksha Gupta wrote:
> Implemented the fec-uio driver in kernel. enetfec PMD uses
> UIO interface to interact with kernel for PHY initialisation
> and for mapping the allocated memory of register & BD from
> kernel to DPDK which gives access to non-cacheble memory for BD.
cacheble -> cacheable ?
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> drivers/net/enetfec/enet_ethdev.c | 204 ++++++++++++++++++++++++++++++
> drivers/net/enetfec/enet_regs.h | 179 ++++++++++++++++++++++++++
> drivers/net/enetfec/enet_uio.c | 192 ++++++++++++++++++++++++++++
> drivers/net/enetfec/enet_uio.h | 54 ++++++++
> drivers/net/enetfec/meson.build | 3 +-
> 5 files changed, 631 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/enetfec/enet_regs.h
> create mode 100644 drivers/net/enetfec/enet_uio.c
> create mode 100644 drivers/net/enetfec/enet_uio.h
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index 5fd2dbc2d..5f4f2cf9e 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -11,18 +11,195 @@
> #include <rte_bus_vdev.h>
> #include <rte_dev.h>
> #include <rte_ether.h>
> +#include <rte_io.h>
> #include "enet_pmd_logs.h"
> #include "enet_ethdev.h"
> +#include "enet_regs.h"
> +#include "enet_uio.h"
>
> #define ENETFEC_NAME_PMD net_enetfec
> #define ENET_VDEV_GEM_ID_ARG "intf"
> #define ENET_CDEV_INVALID_FD -1
>
> +#define BIT(nr) (1 << (nr))
Shouldn't it be 1U (or 1u) to be able to compose most
significant bit?
If you add the define, it is better to be consistent
and use it whereever it is applicable.
> +/* FEC receive acceleration */
> +#define ENET_RACC_IPDIS (1 << 1)
> +#define ENET_RACC_PRODIS (1 << 2)
Why is BIT not used above?
> +#define ENET_RACC_SHIFT16 BIT(7)
> +#define ENET_RACC_OPTIONS (ENET_RACC_IPDIS | ENET_RACC_PRODIS)
> +
> +/* Transmitter timeout */
> +#define TX_TIMEOUT (2 * HZ)
> +
> +#define ENET_PAUSE_FLAG_AUTONEG 0x1
> +#define ENET_PAUSE_FLAG_ENABLE 0x2
> +#define ENET_WOL_HAS_MAGIC_PACKET (0x1 << 0)
> +#define ENET_WOL_FLAG_ENABLE (0x1 << 1)
> +#define ENET_WOL_FLAG_SLEEP_ON (0x1 << 2)
Why is BIT not used above?
> +
> +/* Pause frame feild and FIFO threshold */
> +#define ENET_ENET_FCE (1 << 5)
Why is BIT not used above?
> +#define ENET_ENET_RSEM_V 0x84
> +#define ENET_ENET_RSFL_V 16
> +#define ENET_ENET_RAEM_V 0x8
> +#define ENET_ENET_RAFL_V 0x8
> +#define ENET_ENET_OPD_V 0xFFF0
> +#define ENET_MDIO_PM_TIMEOUT 100 /* ms */
> +
> int enetfec_logtype_pmd;
>
> +/*
> + * This function is called to start or restart the FEC during a link
> + * change, transmit timeout or to reconfigure the FEC. The network
> + * packet processing for this device must be stopped before this call.
> + */
> +static void
> +enetfec_restart(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + uint32_t temp_mac[2];
> + uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
> + uint32_t ecntl = ENET_ETHEREN; /* ETHEREN */
> + /* TODO get eth addr from eth dev */
> + struct rte_ether_addr addr = {
> + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
> + uint32_t val;
> +
> + /*
> + * enet-mac reset will reset mac address registers too,
> + * so need to reconfigure it.
> + */
> + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
> + rte_write32(rte_cpu_to_be_32(temp_mac[0]),
> + fep->hw_baseaddr_v + ENET_PALR);
> + rte_write32(rte_cpu_to_be_32(temp_mac[1]),
> + fep->hw_baseaddr_v + ENET_PAUR);
> +
> + /* Clear any outstanding interrupt. */
> + writel(0xffffffff, fep->hw_baseaddr_v + ENET_EIR);
> +
> + /* Enable MII mode */
> + if (fep->full_duplex == FULL_DUPLEX) {
> + /* FD enable */
> + rte_write32(0x04, fep->hw_baseaddr_v + ENET_TCR);
> + } else {
> + /* No Rcv on Xmit */
> + rcntl |= 0x02;
> + rte_write32(0x0, fep->hw_baseaddr_v + ENET_TCR);
> + }
> +
> + if (fep->quirks & QUIRK_RACC) {
> + val = rte_read32(fep->hw_baseaddr_v + ENET_RACC);
> + /* align IP header */
> + val |= ENET_RACC_SHIFT16;
> + if (fep->flag_csum & RX_FLAG_CSUM_EN)
> + /* set RX checksum */
> + val |= ENET_RACC_OPTIONS;
> + else
> + val &= ~ENET_RACC_OPTIONS;
> + rte_write32(val, fep->hw_baseaddr_v + ENET_RACC);
> + rte_write32(PKT_MAX_BUF_SIZE,
> + fep->hw_baseaddr_v + ENET_FRAME_TRL);
> + }
> +
> + /*
> + * The phy interface and speed need to get configured
> + * differently on enet-mac.
> + */
> + if (fep->quirks & QUIRK_HAS_ENET_MAC) {
> + /* Enable flow control and length check */
> + rcntl |= 0x40000000 | 0x00000020;
> +
> + /* RGMII, RMII or MII */
> + rcntl |= (1 << 6);
> + ecntl |= (1 << 5);
BIT?
> + }
> +
> + /* enable pause frame*/
> + if ((fep->flag_pause & ENET_PAUSE_FLAG_ENABLE) ||
> + ((fep->flag_pause & ENET_PAUSE_FLAG_AUTONEG)
> + /*&& ndev->phydev && ndev->phydev->pause*/)) {
> + rcntl |= ENET_ENET_FCE;
> +
> + /* set FIFO threshold parameter to reduce overrun */
> + rte_write32(ENET_ENET_RSEM_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_SEM);
> + rte_write32(ENET_ENET_RSFL_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_SFL);
> + rte_write32(ENET_ENET_RAEM_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_AEM);
> + rte_write32(ENET_ENET_RAFL_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_AFL);
> +
> + /* OPD */
> + rte_write32(ENET_ENET_OPD_V, fep->hw_baseaddr_v + ENET_OPD);
> + } else {
> + rcntl &= ~ENET_ENET_FCE;
> + }
> +
> + rte_write32(rcntl, fep->hw_baseaddr_v + ENET_RCR);
> +
> + rte_write32(0, fep->hw_baseaddr_v + ENET_IAUR);
> + rte_write32(0, fep->hw_baseaddr_v + ENET_IALR);
> +
> + if (fep->quirks & QUIRK_HAS_ENET_MAC) {
> + /* enable ENET endian swap */
> + ecntl |= (1 << 8);
> + /* enable ENET store and forward mode */
> + rte_write32(1 << 8, fep->hw_baseaddr_v + ENET_TFWR);
> + }
> +
> + if (fep->bufdesc_ex)
> + ecntl |= (1 << 4);
> +
> + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> + fep->rgmii_txc_delay)
> + ecntl |= ENET_TXC_DLY;
> + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> + fep->rgmii_rxc_delay)
> + ecntl |= ENET_RXC_DLY;
> +
> + /* Enable the MIB statistic event counters */
> + rte_write32(0 << 31, fep->hw_baseaddr_v + ENET_MIBC);
0 << 31 looks confusing
> +
> + ecntl |= 0x70000000;
> + /* And last, enable the transmit and receive processing */
> + rte_write32(ecntl, fep->hw_baseaddr_v + ENET_ECR);
> + rte_delay_us(10);
> +}
> +
> +static int
> +enetfec_eth_open(struct rte_eth_dev *dev)
> +{
> + enetfec_restart(dev);
> +
> + return 0;
> +}
> +
> +static const struct eth_dev_ops ops = {
> + .dev_start = enetfec_eth_open,
I guess it is unreable w/o dev_configure. So, I guess order
of patches is incorrect.
Also for consistency dev_stop should be implemented in
the patch as well.
> +};
> +
> static int
> enetfec_eth_init(struct rte_eth_dev *dev)
> {
> + struct enetfec_private *fep = dev->data->dev_private;
> + struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
> + uint64_t rx_offloads = eth_conf->rxmode.offloads;
> +
> + fep->full_duplex = FULL_DUPLEX;
> +
> + dev->dev_ops = &ops;
> + if (fep->quirks & QUIRK_VLAN)
> + /* enable hw VLAN support */
> + rx_offloads |= DEV_RX_OFFLOAD_VLAN;
> +
> + if (fep->quirks & QUIRK_CSUM) {
> + /* enable hw accelerator */
> + rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> + fep->flag_csum |= RX_FLAG_CSUM_EN;
> + }
> +
> rte_eth_dev_probing_finish(dev);
> return 0;
> }
> @@ -34,6 +211,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> struct enetfec_private *fep;
> const char *name;
> int rc = -1;
> + int i;
> + unsigned int bdsize;
>
> name = rte_vdev_device_name(vdev);
> if (name == NULL)
> @@ -47,6 +226,31 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> /* setup board info structure */
> fep = dev->data->dev_private;
> fep->dev = dev;
> +
> + fep->max_rx_queues = ENET_MAX_Q;
> + fep->max_tx_queues = ENET_MAX_Q;
> + fep->quirks = QUIRK_HAS_ENET_MAC | QUIRK_GBIT | QUIRK_BUFDESC_EX
> + | QUIRK_CSUM | QUIRK_VLAN | QUIRK_ERR007885
> + | QUIRK_RACC | QUIRK_COALESCE | QUIRK_EEE;
> +
> + config_enetfec_uio(fep);
> +
> + /* Get the BD size for distributing among six queues */
> + bdsize = (fep->bd_size) / 6;
> +
> + for (i = 0; i < fep->max_tx_queues; i++) {
> + fep->dma_baseaddr_t[i] = fep->bd_addr_v;
> + fep->bd_addr_p_t[i] = fep->bd_addr_p;
> + fep->bd_addr_v = fep->bd_addr_v + bdsize;
> + fep->bd_addr_p = fep->bd_addr_p + bdsize;
> + }
> + for (i = 0; i < fep->max_rx_queues; i++) {
> + fep->dma_baseaddr_r[i] = fep->bd_addr_v;
> + fep->bd_addr_p_r[i] = fep->bd_addr_p;
> + fep->bd_addr_v = fep->bd_addr_v + bdsize;
> + fep->bd_addr_p = fep->bd_addr_p + bdsize;
> + }
> +
> rc = enetfec_eth_init(dev);
> if (rc)
> goto failed_init;
> diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
> new file mode 100644
> index 000000000..d037aafae
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_regs.h
> @@ -0,0 +1,179 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +#ifndef __ENET_REGS_H
> +#define __ENET_REGS_H
> +
> +/* Ethernet receive use control and status of buffer descriptor
> + */
> +#define RX_BD_TR ((ushort)0x0001) /* Truncated */
> +#define RX_BD_OV ((ushort)0x0002) /* Over-run */
> +#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
> +#define RX_BD_SH ((ushort)0x0008) /* Reserved */
> +#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
> +#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length voilation */
> +#define RX_BD_MC ((ushort)0x0040) /* Rcvd Multicast */
> +#define RX_BD_BC ((ushort)0x0080) /* Rcvd Broadcast */
> +#define RX_BD_MISS ((ushort)0x0100) /* Miss: promisc mode frame */
> +#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
> +#define RX_BD_LAST ((ushort)0x0800) /* Buffer is the last in the frame */
> +#define RX_BD_INTR ((ushort)0x1000) /* Software specified field */
> +/* 0 The next BD in consecutive location
> + * 1 The next BD in ENETn_RDSR.
> + */
> +#define RX_BD_WRAP ((ushort)0x2000)
> +#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
> +#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
> +
> +/* Ethernet receive use control and status of enhanced buffer descriptor */
> +#define BD_ENET_RX_VLAN 0x00000004
> +
> +/* Ethernet transmit use control and status of buffer descriptor.
> + */
> +#define TX_BD_CSL ((ushort)0x0001)
> +#define TX_BD_UN ((ushort)0x0002)
> +#define TX_BD_RCMASK ((ushort)0x003c)
> +#define TX_BD_RL ((ushort)0x0040)
> +#define TX_BD_LC ((ushort)0x0080)
> +#define TX_BD_HB ((ushort)0x0100)
> +#define TX_BD_DEF ((ushort)0x0200)
> +#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
> +#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
> +#define TX_BD_INTR ((ushort)0x1000)
> +#define TX_BD_WRAP ((ushort)0x2000)
> +#define TX_BD_PAD ((ushort)0x4000)
> +#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
> +
> +#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
> +
> +/* Ethernet transmit use control and status of enhanced buffer descriptor */
> +#define TX_BD_IINS 0x08000000
> +#define TX_BD_PINS 0x10000000
> +#define TX_BD_TS 0x20000000
> +#define TX_BD_INT 0x40000000
> +
> +#define ENET_RD_START(X) (((X) == 1) ? ENET_RD_START_1 : \
> + (((X) == 2) ? \
> + ENET_RD_START_2 : ENET_RD_START_0))
> +#define ENET_TD_START(X) (((X) == 1) ? ENET_TD_START_1 : \
> + (((X) == 2) ? \
> + ENET_TD_START_2 : ENET_TD_START_0))
> +#define ENET_MRB_SIZE(X) (((X) == 1) ? ENET_MRB_SIZE_1 : \
> + (((X) == 2) ? \
> + ENET_MRB_SIZE_2 : ENET_MRB_SIZE_0))
> +
> +#define ENET_DMACFG(X) (((X) == 2) ? ENET_DMA2CFG : ENET_DMA1CFG)
> +
> +#define ENABLE_DMA_CLASS (1 << 16)
> +#define ENET_RCM(X) (((X) == 2) ? ENET_RCM2 : ENET_RCM1)
> +#define SLOPE_IDLE_MASK 0xffff
> +#define SLOPE_IDLE_1 0x200 /* BW_fraction: 0.5 */
> +#define SLOPE_IDLE_2 0x200 /* BW_fraction: 0.5 */
> +#define SLOPE_IDLE(X) (((X) == 1) ? \
> + (SLOPE_IDLE_1 & SLOPE_IDLE_MASK) : \
> + (SLOPE_IDLE_2 & SLOPE_IDLE_MASK))
> +#define RCM_MATCHEN (0x1 << 16)
> +#define CFG_RCMR_CMP(v, n) (((v) & 0x7) << ((n) << 2))
> +#define RCMR_CMP1 (CFG_RCMR_CMP(0, 0) | CFG_RCMR_CMP(1, 1) | \
> + CFG_RCMR_CMP(2, 2) | CFG_RCMR_CMP(3, 3))
> +#define RCMR_CMP2 (CFG_RCMR_CMP(4, 0) | CFG_RCMR_CMP(5, 1) | \
> + CFG_RCMR_CMP(6, 2) | CFG_RCMR_CMP(7, 3))
> +#define RCM_CMP(X) (((X) == 1) ? RCMR_CMP1 : RCMR_CMP2)
> +#define BD_TX_FTYPE(X) (((X) & 0xf) << 20)
> +
> +#define RX_BD_INT 0x00800000
> +#define RX_BD_PTP ((ushort)0x0400)
> +#define RX_BD_ICE 0x00000020
> +#define RX_BD_PCR 0x00000010
> +#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
> +#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
> +#define ENET_MII ((uint)0x00800000) /*MII_interrupt*/
> +
> +#define ENET_ETHEREN ((uint)0x00000002)
> +#define ENET_TXC_DLY ((uint)0x00010000)
> +#define ENET_RXC_DLY ((uint)0x00020000)
> +
> +/* ENET MAC is in controller */
> +#define QUIRK_HAS_ENET_MAC (1 << 0)
> +/* gasket is used in controller */
> +#define QUIRK_GASKET (1 << 2)
> +/* GBIT supported in controller */
> +#define QUIRK_GBIT (1 << 3)
> +/* Controller has extended descriptor buffer */
> +#define QUIRK_BUFDESC_EX (1 << 4)
> +/* Controller support hardware checksum */
> +#define QUIRK_CSUM (1 << 5)
> +/* Controller support hardware vlan*/
> +#define QUIRK_VLAN (1 << 6)
> +/* ENET IP hardware AVB
> + * i.MX8MM ENET IP supports the AVB (Audio Video Bridging) feature.
> + */
> +#define QUIRK_AVB (1 << 8)
> +#define QUIRK_ERR007885 (1 << 9)
> +/* RACC register supported by controller */
> +#define QUIRK_RACC (1 << 12)
> +/* interrupt coalesc supported by controller*/
> +#define QUIRK_COALESCE (1 << 13)
> +/* To support IEEE 802.3az EEE std, new feature is added by i.MX8MQ ENET IP
> + * version.
> + */
> +#define QUIRK_EEE (1 << 17)
> +/* i.MX8QM ENET IP version added the feature to generate the delayed TXC or
> + * RXC. For its implementation, ENET uses synchronized clocks (250MHz) for
> + * generating delay of 2ns.
> + */
> +#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
> +
> +#define ENET_EIR 0x004 /* Interrupt event register */
> +#define ENET_EIMR 0x008 /* Interrupt mask register */
> +#define ENET_RDAR_0 0x010 /* Receive descriptor active register ring0 */
> +#define ENET_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
> +#define ENET_ECR 0x024 /* Ethernet control register */
> +#define ENET_MSCR 0x044 /* MII speed control register */
> +#define ENET_MIBC 0x064 /* MIB control and status register */
> +#define ENET_RCR 0x084 /* Receive control register */
> +#define ENET_TCR 0x0c4 /* Transmit Control register */
> +#define ENET_PALR 0x0e4 /* MAC address low 32 bits */
> +#define ENET_PAUR 0x0e8 /* MAC address high 16 bits */
> +#define ENET_OPD 0x0ec /* Opcode/Pause duration register */
> +#define ENET_IAUR 0x118 /* hash table 32 bits high */
> +#define ENET_IALR 0x11c /* hash table 32 bits low */
> +#define ENET_GAUR 0x120 /* grp hash table 32 bits high */
> +#define ENET_GALR 0x124 /* grp hash table 32 bits low */
> +#define ENET_TFWR 0x144 /* transmit FIFO water_mark */
> +#define ENET_RD_START_1 0x160 /* Receive descriptor ring1 start register */
> +#define ENET_TD_START_1 0x164 /* Transmit descriptor ring1 start register */
> +#define ENET_MRB_SIZE_1 0x168 /* Maximum receive buffer size register ring1 */
> +#define ENET_RD_START_2 0x16c /* Receive descriptor ring2 start register */
> +#define ENET_TD_START_2 0x170 /* Transmit descriptor ring2 start register */
> +#define ENET_MRB_SIZE_2 0x174 /* Maximum receive buffer size register ring2 */
> +#define ENET_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
> +#define ENET_TD_START_0 0x184 /* Transmit buffer descriptor ring0 start reg */
> +#define ENET_MRB_SIZE_0 0x188 /* Maximum receive buffer size register ring0*/
> +#define ENET_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
> +#define ENET_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
> +#define ENET_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
> +#define ENET_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
> +#define ENET_FRAME_TRL 0x1b0 /* Frame truncation length */
> +#define ENET_RACC 0x1c4 /* Receive Accelerator function configuration*/
> +#define ENET_RCM1 0x1c8 /* Receive classification match register ring1 */
> +#define ENET_RCM2 0x1cc /* Receive classification match register ring2 */
> +#define ENET_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
> +#define ENET_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
> +#define ENET_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
> +#define ENET_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
> +#define ENET_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
> +#define ENET_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
> +#define ENET_MII_GSK_CFGR 0x300 /* MII_GSK Configuration register */
> +#define ENET_MII_GSK_ENR 0x308 /* MII_GSK Enable register*/
> +
> +#define BM_MII_GSK_CFGR_MII 0x00
> +#define BM_MII_GSK_CFGR_RMII 0x01
> +#define BM_MII_GSK_CFGR_FRCONT_10M 0x40
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
> +
> +#endif /*__ENET_REGS_H */
> diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
> new file mode 100644
> index 000000000..b64dc522e
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_uio.c
> @@ -0,0 +1,192 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <unistd.h>
> +#include <stdlib.h>
> +#include <dirent.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +
> +#include <rte_common.h>
> +#include <rte_malloc.h>
> +#include "enet_pmd_logs.h"
> +#include "enet_uio.h"
> +
> +static struct uio_job enetfec_uio_job;
> +int count;
> +
> +/** @brief Reads first line from a file.
> + * Composes file name as: root/subdir/filename
> + *
> + * @param [in] root Root path
> + * @param [in] subdir Subdirectory name
> + * @param [in] filename File name
> + * @param [out] line The first line read from file.
> + *
> + * @retval 0 for succes
> + * @retval other value for error
> + */
> +static int
> +file_read_first_line(const char root[], const char subdir[],
> + const char filename[], char *line)
> +{
> + char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
> + int fd = 0, ret = 0;
> +
> + /*compose the file name: root/subdir/filename */
> + memset(absolute_file_name, 0, sizeof(absolute_file_name));
> + snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
> + "%s/%s/%s", root, subdir, filename);
> +
> + fd = open(absolute_file_name, O_RDONLY);
> + if (fd <= 0)
> + ENET_PMD_ERR("Error opening file %s", absolute_file_name);
> +
> + /* read UIO device name from first line in file */
> + ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
> + close(fd);
> +
> + /* NULL-ify string */
> + line[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH - 1] = '\0';
> +
> + if (ret <= 0) {
> + ENET_PMD_ERR("Error reading from file %s", absolute_file_name);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +/** @brief Maps rx-tx bd range assigned for a bd ring.
> + *
> + * @param [in] uio_device_fd UIO device file descriptor
> + * @param [in] uio_device_id UIO device id
> + * @param [in] uio_map_id UIO allows maximum 5 different mapping for
> + each device. Maps start with id 0.
> + * @param [out] map_size Map size.
> + * @param [out] map_addr Map physical address
> + * @retval NULL if failed to map registers
> + * @retval Virtual address for mapped register address range
> + */
> +static void *
> +uio_map_mem(int uio_device_fd, int uio_device_id,
> + int uio_map_id, int *map_size, uint64_t *map_addr)
> +{
> + void *mapped_address = NULL;
> + unsigned int uio_map_size = 0;
> + unsigned int uio_map_p_addr = 0;
> + char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
> + char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
> + char uio_map_size_str[32];
> + char uio_map_p_addr_str[64];
> + int ret = 0;
> +
> + /* compose the file name: root/subdir/filename */
> + memset(uio_sys_root, 0, sizeof(uio_sys_root));
> + memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
> + memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
> + memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
> +
> + /* Compose string: /sys/class/uio/uioX */
> + snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
> + FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
> + /* Compose string: maps/mapY */
> + snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
> + FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
> +
> + /* Read first (and only) line from file
> + * /sys/class/uio/uioX/maps/mapY/size
> + */
> + ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
> + "size", uio_map_size_str);
> + if (ret)
Compare vs 0
> + ENET_PMD_ERR("file_read_first_line() failed");
> +
> + ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
> + "addr", uio_map_p_addr_str);
> + if (ret)
Compare vs 0
> + ENET_PMD_ERR("file_read_first_line() failed");
> +
> + /* Read mapping size and physical address expressed in hexa(base 16) */
> + uio_map_size = strtol(uio_map_size_str, NULL, 16);
> + uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
> +
> + if (uio_map_id == 0) {
> + /* Map the register address in user space when map_id is 0 */
> + mapped_address = mmap(0 /*dynamically choose virtual address */,
> + uio_map_size, PROT_READ | PROT_WRITE,
> + MAP_SHARED, uio_device_fd, 0);
> + } else {
> + /* Map the BD memory in user space */
> + mapped_address = mmap(NULL, uio_map_size,
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
> + }
> +
> + if (mapped_address == MAP_FAILED) {
> + ENET_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
> + "uio device id = %d, uio map id = %d", errno,
> + uio_device_fd, uio_device_id, uio_map_id);
> + return 0;
> + }
> +
> + /* Save the map size to use it later on for munmap-ing */
> + *map_size = uio_map_size;
> + *map_addr = uio_map_p_addr;
> + ENET_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
> + uio_device_id, uio_map_id, uio_map_size, mapped_address);
> +
> + return mapped_address;
> +}
> +
> +int
> +config_enetfec_uio(struct enetfec_private *fep)
> +{
> + char uio_device_file_name[32];
> + struct uio_job *uio_job = NULL;
> +
> + /* Mapping is done only one time */
> + if (count) {
Compare vs 0
> + printf("Mapping already done, can't map again!\n");
> + return 0;
> + }
> +
> + uio_job = &enetfec_uio_job;
> +
> + /* Find UIO device created by ENETFEC-UIO kernel driver */
> + memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
> + snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
> + FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
> +
> + /* Open device file */
> + uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
> + if (uio_job->uio_fd < 0) {
> + printf("US_UIO: Open Failed\n");
> + exit(1);
> + }
> +
> + ENET_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
> + uio_device_file_name, uio_job->uio_fd);
> +
> + fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
> + uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
> + &uio_job->map_size, &uio_job->map_addr);
> + fep->hw_baseaddr_p = uio_job->map_addr;
> + fep->reg_size = uio_job->map_size;
> +
> + fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
> + uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
> + &uio_job->map_size, &uio_job->map_addr);
> + fep->bd_addr_p = uio_job->map_addr;
> + fep->bd_size = uio_job->map_size;
> +
> + count++;
> +
> + return 0;
> +}
> diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
> new file mode 100644
> index 000000000..b220cae9d
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_uio.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#include "enet_ethdev.h"
> +
> +/* Prefix path to sysfs directory where UIO device attributes are exported.
> + * Path for UIO device X is /sys/class/uio/uioX
> + */
> +#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
> +
> +/* Subfolder in sysfs where mapping attributes are exported
> + * for each UIO device. Path for mapping Y for device X is:
> + * /sys/class/uio/uioX/maps/mapY
> + */
> +#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
> +
> +/* Name of UIO device file prefix. Each UIO device will have a device file
> + * /dev/uioX, where X is the minor device number.
> + */
> +#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
> +
> +/* Maximum length for the name of an UIO device file.
> + * Device file name format is: /dev/uioX.
> + */
> +#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
> +
> +/* Maximum length for the name of an attribute file for an UIO device.
> + * Attribute files are exported in sysfs and have the name formatted as:
> + * /sys/class/uio/uioX/<attribute_file_name>
> + */
> +#define FEC_UIO_MAX_ATTR_FILE_NAME 100
> +
> +/* The id for the mapping used to export ENETFEC registers and BD memory to
> + * user space through UIO device.
> + */
> +#define FEC_UIO_REG_MAP_ID 0
> +#define FEC_UIO_BD_MAP_ID 1
> +
> +#define MAP_PAGE_SIZE 4096
> +
> +struct uio_job {
> + uint32_t fec_id;
> + int uio_fd;
> + void *bd_start_addr;
> + void *register_base_addr;
> + int map_size;
> + uint64_t map_addr;
> + int uio_minor_number;
> +};
> +
> +int config_enetfec_uio(struct enetfec_private *fep);
> +void enetfec_uio_init(void);
> +void enetfec_cleanup(void);
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> index 252bf8330..05183bd44 100644
> --- a/drivers/net/enetfec/meson.build
> +++ b/drivers/net/enetfec/meson.build
> @@ -8,7 +8,8 @@ endif
>
> deps += ['common_dpaax']
>
> -sources = files('enet_ethdev.c')
> +sources = files('enet_ethdev.c',
> + 'enet_uio.c')
>
> if cc.has_argument('-Wno-pointer-arith')
> cflags += '-Wno-pointer-arith'
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration
2021-04-30 4:34 ` [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration Apeksha Gupta
@ 2021-06-08 13:38 ` Andrew Rybchenko
2021-07-04 6:46 ` Sachin Saxena (OSS)
1 sibling, 0 replies; 17+ messages in thread
From: Andrew Rybchenko @ 2021-06-08 13:38 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena
Summary is incorrect.
"net/enetfec: support queue configuration" ?
On 4/30/21 7:34 AM, Apeksha Gupta wrote:
> This patch added RX/TX queue configuration setup operations.
RX -> Rx, TX -> Tx
> On packet Rx the respective BD Ring status bit is set which is then
> used for packet processing.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> drivers/net/enetfec/enet_ethdev.c | 223 ++++++++++++++++++++++++++++++
> 1 file changed, 223 insertions(+)
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index 5f4f2cf9e..b4816179a 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -48,6 +48,19 @@
>
> int enetfec_logtype_pmd;
>
> +/* Supported Rx offloads */
> +static uint64_t dev_rx_offloads_sup =
> + DEV_RX_OFFLOAD_IPV4_CKSUM |
> + DEV_RX_OFFLOAD_UDP_CKSUM |
> + DEV_RX_OFFLOAD_TCP_CKSUM |
> + DEV_RX_OFFLOAD_VLAN_STRIP |
> + DEV_RX_OFFLOAD_CHECKSUM;
> +
> +static uint64_t dev_tx_offloads_sup =
> + DEV_TX_OFFLOAD_IPV4_CKSUM |
> + DEV_TX_OFFLOAD_UDP_CKSUM |
> + DEV_TX_OFFLOAD_TCP_CKSUM;
> +
> /*
> * This function is called to start or restart the FEC during a link
> * change, transmit timeout or to reconfigure the FEC. The network
> @@ -176,8 +189,218 @@ enetfec_eth_open(struct rte_eth_dev *dev)
> return 0;
> }
>
> +
> +static int
> +enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
> +{
> + ENET_PMD_INFO("%s: returning zero ", __func__);
> + return 0;
> +}
> +
> +static int
> +enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
> + struct rte_eth_dev_info *dev_info)
> +{
> + dev_info->max_rx_queues = ENET_MAX_Q;
> + dev_info->max_tx_queues = ENET_MAX_Q;
> + dev_info->min_mtu = RTE_ETHER_MIN_MTU;
max_mtu?
> + dev_info->rx_offload_capa = dev_rx_offloads_sup;
> + dev_info->tx_offload_capa = dev_tx_offloads_sup;
> +
> + return 0;
> +}
> +
> +static const unsigned short offset_des_active_rxq[] = {
> + ENET_RDAR_0, ENET_RDAR_1, ENET_RDAR_2
> +};
> +
> +static const unsigned short offset_des_active_txq[] = {
> + ENET_TDAR_0, ENET_TDAR_1, ENET_TDAR_2
> +};
> +
> +static int
> +enetfec_tx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_desc,
> + __rte_unused unsigned int socket_id,
> + __rte_unused const struct rte_eth_txconf *tx_conf)
It is incorrect to silently ignore tx_conf. Not everything in
it has corresponding capabilties to be advertised by the driver
and checked by ethdev. So, you need to check that not supported
configuration is not requested. E.g. deferred start.
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> + struct bufdesc *bdp, *bd_base;
> + struct enetfec_priv_tx_q *txq;
> + unsigned int size;
> + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> + sizeof(struct bufdesc);
> + unsigned int dsize_log2 = fls64(dsize);
> +
> + /* allocate transmit queue */
> + txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
> + if (!txq) {
Compare vs NULL
> + ENET_PMD_ERR("transmit queue allocation failed");
> + return -ENOMEM;
> + }
> +
> + if (nb_desc > MAX_TX_BD_RING_SIZE) {
> + nb_desc = MAX_TX_BD_RING_SIZE;
> + ENET_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
> + }
> + txq->bd.ring_size = nb_desc;
> + fep->total_tx_ring_size += txq->bd.ring_size;
> + fep->tx_queues[queue_idx] = txq;
> +
> + rte_write32(fep->bd_addr_p_t[queue_idx],
> + fep->hw_baseaddr_v + ENET_TD_START(queue_idx));
> +
> + /* Set transmit descriptor base. */
> + txq = fep->tx_queues[queue_idx];
> + txq->fep = fep;
> + size = dsize * txq->bd.ring_size;
> + bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
> + txq->bd.que_id = queue_idx;
> + txq->bd.base = bd_base;
> + txq->bd.cur = bd_base;
> + txq->bd.d_size = dsize;
> + txq->bd.d_size_log2 = dsize_log2;
> + txq->bd.active_reg_desc =
> + fep->hw_baseaddr_v + offset_des_active_txq[queue_idx];
> + bd_base = (struct bufdesc *)(((void *)bd_base) + size);
> + txq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
> + bdp = txq->bd.base;
> + bdp = txq->bd.cur;
> +
> + for (i = 0; i < txq->bd.ring_size; i++) {
> + /* Initialize the BD for every fragment in the page. */
> + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
> + if (txq->tx_mbuf[i]) {
Compare vs NULL
> + rte_pktmbuf_free(txq->tx_mbuf[i]);
> + txq->tx_mbuf[i] = NULL;
> + }
> + rte_write32(rte_cpu_to_le_32(0), &bdp->bd_bufaddr);
> + bdp = enet_get_nextdesc(bdp, &txq->bd);
> + }
> +
> + /* Set the last buffer to wrap */
> + bdp = enet_get_prevdesc(bdp, &txq->bd);
> + rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
> + rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> + txq->dirty_tx = bdp;
> + dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
> + return 0;
> +}
> +
> +static int
> +enetfec_rx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_rx_desc,
> + __rte_unused unsigned int socket_id,
> + __rte_unused const struct rte_eth_rxconf *rx_conf,
> + struct rte_mempool *mb_pool)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> + struct bufdesc *bd_base;
> + struct bufdesc *bdp;
> + struct enetfec_priv_rx_q *rxq;
> + unsigned int size;
> + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> + sizeof(struct bufdesc);
> + unsigned int dsize_log2 = fls64(dsize);
> +
> + /* allocate receive queue */
> + rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
> + if (!rxq) {
Compare vs NULL
> + ENET_PMD_ERR("receive queue allocation failed");
> + return -ENOMEM;
> + }
> +
> + if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
> + nb_rx_desc = MAX_RX_BD_RING_SIZE;
> + ENET_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
> + }
> +
> + rxq->bd.ring_size = nb_rx_desc;
> + fep->total_rx_ring_size += rxq->bd.ring_size;
> + fep->rx_queues[queue_idx] = rxq;
> +
> + rte_write32(fep->bd_addr_p_r[queue_idx],
> + fep->hw_baseaddr_v + ENET_RD_START(queue_idx));
> + rte_write32(PKT_MAX_BUF_SIZE,
> + fep->hw_baseaddr_v + ENET_MRB_SIZE(queue_idx));
> +
> + /* Set receive descriptor base. */
> + rxq = fep->rx_queues[queue_idx];
> + rxq->pool = mb_pool;
> + size = dsize * rxq->bd.ring_size;
> + bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
> + rxq->bd.que_id = queue_idx;
> + rxq->bd.base = bd_base;
> + rxq->bd.cur = bd_base;
> + rxq->bd.d_size = dsize;
> + rxq->bd.d_size_log2 = dsize_log2;
> + rxq->bd.active_reg_desc =
> + fep->hw_baseaddr_v + offset_des_active_rxq[queue_idx];
> + bd_base = (struct bufdesc *)(((void *)bd_base) + size);
> + rxq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
> +
> + rxq->fep = fep;
> + bdp = rxq->bd.base;
> + rxq->bd.cur = bdp;
> +
> + for (i = 0; i < nb_rx_desc; i++) {
> + /* Initialize Rx buffers from pktmbuf pool */
> + struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
> + if (mbuf == NULL) {
> + ENET_PMD_ERR("mbuf failed\n");
> + goto err_alloc;
> + }
> +
> + /* Get the virtual address & physical address */
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> + &bdp->bd_bufaddr);
> +
> + rxq->rx_mbuf[i] = mbuf;
> + rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
> +
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + }
> +
> + /* Initialize the receive buffer descriptors. */
> + bdp = rxq->bd.cur;
> + for (i = 0; i < rxq->bd.ring_size; i++) {
> + /* Initialize the BD for every fragment in the page. */
> + if (rte_read32(&bdp->bd_bufaddr))
Compare vs 0
> + rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
> + &bdp->bd_sc);
> + else
> + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
> +
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + }
> +
> + /* Set the last buffer to wrap */
> + bdp = enet_get_prevdesc(bdp, &rxq->bd);
> + rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
> + rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> + dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
> + rte_write32(0x0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
> + return 0;
> +
> +err_alloc:
> + for (i = 0; i < nb_rx_desc; i++) {
> + rte_pktmbuf_free(rxq->rx_mbuf[i]);
> + rxq->rx_mbuf[i] = NULL;
> + }
> + rte_free(rxq);
> + return -1;
Callback returns negative errno, not -1
> +}
> +
> static const struct eth_dev_ops ops = {
> .dev_start = enetfec_eth_open,
> + .dev_configure = enetfec_eth_configure,
> + .dev_infos_get = enetfec_eth_info,
> + .rx_queue_setup = enetfec_rx_queue_setup,
> + .tx_queue_setup = enetfec_tx_queue_setup,
Order in the structure should be in the same order as
order in the eth_dev_ops for consistency.
> };
>
> static int
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support
2021-04-30 4:34 ` [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support Apeksha Gupta
@ 2021-06-08 13:42 ` Andrew Rybchenko
2021-06-21 9:14 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-07-05 8:48 ` [dpdk-dev] " Sachin Saxena (OSS)
1 sibling, 1 reply; 17+ messages in thread
From: Andrew Rybchenko @ 2021-06-08 13:42 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena
On 4/30/21 7:34 AM, Apeksha Gupta wrote:
> This patch supported checksum offloads and add burst enqueue and
> dequeue operations to the enetfec PMD.
>
> Loopback mode is added, compile time flag 'ENETFEC_LOOPBACK' is
> used to enable this feature. By default loopback mode is disabled.
> Basic features added like promiscuous enable, basic stats.
Please, apply style fixes from the previous patches
to the patch as well.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> doc/guides/nics/enetfec.rst | 4 +
> doc/guides/nics/features/enetfec.ini | 5 +
> drivers/net/enetfec/enet_ethdev.c | 212 +++++++++++-
> drivers/net/enetfec/enet_rxtx.c | 499 +++++++++++++++++++++++++++
> 4 files changed, 719 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/enetfec/enet_rxtx.c
>
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> index 10f495fb9..adbb52392 100644
> --- a/doc/guides/nics/enetfec.rst
> +++ b/doc/guides/nics/enetfec.rst
> @@ -75,6 +75,10 @@ ENETFEC driver.
> ENETFEC Features
> ~~~~~~~~~~~~~~~~~
>
> +- Basic stats
> +- Promiscuous
> +- VLAN offload
> +- L3/L4 checksum offload
> - ARMv8
>
> Supported ENETFEC SoCs
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> index 570069798..fcc217773 100644
> --- a/doc/guides/nics/features/enetfec.ini
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -4,5 +4,10 @@
> ; Refer to default.ini for the full list of available PMD features.
> ;
> [Features]
> +Basic stats = Y
> +Promiscuous mode = Y
> +VLAN offload = Y
> +L3 checksum offload = Y
> +L4 checksum offload = Y
I don't understand why all above features are in the patch.
It looks like all could be added one by one after
the patch with adds basic Rx/Tx suppport.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support
2021-06-08 13:42 ` Andrew Rybchenko
@ 2021-06-21 9:14 ` Apeksha Gupta
0 siblings, 0 replies; 17+ messages in thread
From: Apeksha Gupta @ 2021-06-21 9:14 UTC (permalink / raw)
To: Andrew Rybchenko, ferruh.yigit; +Cc: dev, Hemant Agrawal, Sachin Saxena
Hi,
I will be reworking on the 'enetfec' driver. It may take time to send the V2 patch.
Thanks & Regards,
Apeksha
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Tuesday, June 8, 2021 7:12 PM
> To: Apeksha Gupta <apeksha.gupta@nxp.com>; ferruh.yigit@intel.com
> Cc: dev@dpdk.org; Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin
> Saxena <sachin.saxena@nxp.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and
> dequeue support
>
> Caution: EXT Email
>
> On 4/30/21 7:34 AM, Apeksha Gupta wrote:
> > This patch supported checksum offloads and add burst enqueue and
> > dequeue operations to the enetfec PMD.
> >
> > Loopback mode is added, compile time flag 'ENETFEC_LOOPBACK' is
> > used to enable this feature. By default loopback mode is disabled.
> > Basic features added like promiscuous enable, basic stats.
>
> Please, apply style fixes from the previous patches
> to the patch as well.
>
> >
> > Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> > Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> > ---
> > doc/guides/nics/enetfec.rst | 4 +
> > doc/guides/nics/features/enetfec.ini | 5 +
> > drivers/net/enetfec/enet_ethdev.c | 212 +++++++++++-
> > drivers/net/enetfec/enet_rxtx.c | 499 +++++++++++++++++++++++++++
> > 4 files changed, 719 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/net/enetfec/enet_rxtx.c
> >
> > diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> > index 10f495fb9..adbb52392 100644
> > --- a/doc/guides/nics/enetfec.rst
> > +++ b/doc/guides/nics/enetfec.rst
> > @@ -75,6 +75,10 @@ ENETFEC driver.
> > ENETFEC Features
> > ~~~~~~~~~~~~~~~~~
> >
> > +- Basic stats
> > +- Promiscuous
> > +- VLAN offload
> > +- L3/L4 checksum offload
> > - ARMv8
> >
> > Supported ENETFEC SoCs
> > diff --git a/doc/guides/nics/features/enetfec.ini
> b/doc/guides/nics/features/enetfec.ini
> > index 570069798..fcc217773 100644
> > --- a/doc/guides/nics/features/enetfec.ini
> > +++ b/doc/guides/nics/features/enetfec.ini
> > @@ -4,5 +4,10 @@
> > ; Refer to default.ini for the full list of available PMD features.
> > ;
> > [Features]
> > +Basic stats = Y
> > +Promiscuous mode = Y
> > +VLAN offload = Y
> > +L3 checksum offload = Y
> > +L4 checksum offload = Y
>
> I don't understand why all above features are in the patch.
> It looks like all could be added one by one after
> the patch with adds basic Rx/Tx suppport.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce NXP ENETFEC driver
2021-06-08 13:10 ` Andrew Rybchenko
@ 2021-07-02 13:55 ` David Marchand
0 siblings, 0 replies; 17+ messages in thread
From: David Marchand @ 2021-07-02 13:55 UTC (permalink / raw)
To: Apeksha Gupta
Cc: Andrew Rybchenko, Yigit, Ferruh, dev, Hemant Agrawal, Sachin Saxena
On Tue, Jun 8, 2021 at 3:10 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> > +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> > +RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENET_VDEV_GEM_ID_ARG "=<int>");
> > +
> > +RTE_INIT(enetfec_pmd_init_log)
> > +{
> > + enetfec_logtype_pmd = rte_log_register("pmd.net.enetfec");
> > + if (enetfec_logtype_pmd >= 0)
> > + rte_log_set_level(enetfec_logtype_pmd, RTE_LOG_NOTICE);
>
> rte_log_register_type_and_pick_level() should be used.
>
> > +}
Please prefer RTE_LOG_REGISTER_DEFAULT().
Thanks.
--
David Marchand
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
` (4 preceding siblings ...)
2021-05-04 15:40 ` [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Ferruh Yigit
@ 2021-07-04 2:55 ` Sachin Saxena (OSS)
5 siblings, 0 replies; 17+ messages in thread
From: Sachin Saxena (OSS) @ 2021-07-04 2:55 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit, dev; +Cc: hemant.agrawal
On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> This patch series introduce the enetfec ethernet driver,
enetfec ethernet driver -> enetfec driver
> ENET fec (Fast Ethernet Controller) is a network poll mode driver for
ENET fec ->enetfec
Also, please use "enetfec" consistently at all places.
> the inbuilt NIC found in the NXP imx8mmevk Soc.
SoC
>
> An overview of the enetfec driver with probe and remove are in patch 1.
> Patch 2 design UIO so that user space directly communicate with a
UIO -> UIO interface
> hardware device. UIO interface mmap the Register & BD memory in DPDK
hardware device -> UIO based hardware device
Register ->Control and Status Registers (/CSR/)
> which is allocated in kernel and this gives access to non-cacheble
> memory for BD.
>
> Patch 3 adds the RX/TX queue configuration setup operations.
> Patch 4 adds enqueue and dequeue support. Also adds some basic features
> like promiscuous enable, basic stats.
>
>
> Apeksha Gupta (4):
> drivers/net/enetfec: Introduce NXP ENETFEC driver
> drivers/net/enetfec: UIO support added
> drivers/net/enetfec: queue configuration
> drivers/net/enetfec: add enqueue and dequeue support
>
> doc/guides/nics/enetfec.rst | 125 +++++
> doc/guides/nics/features/enetfec.ini | 13 +
> doc/guides/nics/index.rst | 1 +
> drivers/net/enetfec/enet_ethdev.c | 726 +++++++++++++++++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 203 ++++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 ++
> drivers/net/enetfec/enet_regs.h | 179 +++++++
> drivers/net/enetfec/enet_rxtx.c | 499 ++++++++++++++++++
> drivers/net/enetfec/enet_uio.c | 192 +++++++
> drivers/net/enetfec/enet_uio.h | 54 ++
> drivers/net/enetfec/meson.build | 16 +
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 1 +
> 13 files changed, 2043 insertions(+)
> create mode 100644 doc/guides/nics/enetfec.rst
> create mode 100644 doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c
> create mode 100644 drivers/net/enetfec/enet_ethdev.h
> create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/enet_regs.h
> create mode 100644 drivers/net/enetfec/enet_rxtx.c
> create mode 100644 drivers/net/enetfec/enet_uio.c
> create mode 100644 drivers/net/enetfec/enet_uio.h
> create mode 100644 drivers/net/enetfec/meson.build
> create mode 100644 drivers/net/enetfec/version.map
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce NXP ENETFEC driver
2021-04-30 4:34 ` [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce " Apeksha Gupta
2021-06-08 13:10 ` Andrew Rybchenko
@ 2021-07-04 2:57 ` Sachin Saxena (OSS)
1 sibling, 0 replies; 17+ messages in thread
From: Sachin Saxena (OSS) @ 2021-07-04 2:57 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit; +Cc: dev, hemant.agrawal
On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> ENET fec (Fast Ethernet Controller) is a network poll mode driver
> for NXP SoC imx8mmevk.
Either use imx8mmevk or "i.MX 8M Mini"at all places
ENET fec-> enetfec, please change it at all places.
> This patch add skeleton for enetfec driver with probe and
patch add -> patch adds
> uintialisation functions
uintialisation-> remove functionality
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> doc/guides/nics/enetfec.rst | 121 ++++++++++++++++
> doc/guides/nics/features/enetfec.ini | 8 ++
> doc/guides/nics/index.rst | 1 +
> drivers/net/enetfec/enet_ethdev.c | 89 ++++++++++++
> drivers/net/enetfec/enet_ethdev.h | 203 +++++++++++++++++++++++++++
> drivers/net/enetfec/enet_pmd_logs.h | 31 ++++
> drivers/net/enetfec/meson.build | 15 ++
> drivers/net/enetfec/version.map | 3 +
> drivers/net/meson.build | 1 +
> 9 files changed, 472 insertions(+)
> create mode 100644 doc/guides/nics/enetfec.rst
> create mode 100644 doc/guides/nics/features/enetfec.ini
> create mode 100644 drivers/net/enetfec/enet_ethdev.c
> create mode 100644 drivers/net/enetfec/enet_ethdev.h
> create mode 100644 drivers/net/enetfec/enet_pmd_logs.h
> create mode 100644 drivers/net/enetfec/meson.build
> create mode 100644 drivers/net/enetfec/version.map
>
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> new file mode 100644
> index 000000000..10f495fb9
> --- /dev/null
> +++ b/doc/guides/nics/enetfec.rst
> @@ -0,0 +1,121 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 NXP
> +
> +ENETFEC Poll Mode Driver
> +========================
> +
> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
> +support for the inbuilt NIC found in the ** NXP i.MX 8M Mini** SoC.
Either use imx8mmevk or "i.MX 8M Mini"at all places
> +
> +More information can be found at NXP Official Website
> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
> +
> +ENETFEC
> +-------
> +
> +This section provides an overview of the NXP ENETFEC and how it is
> +integrated into the DPDK.
> +
> +Contents summary
> +
> +- ENETFEC overview
> +- ENETFEC features
> +- Supported ENETFEC SoCs
> +- Prerequisites
> +- Driver compilation and testing
> +- Limitations
> +
> +ENETFEC Overview
> +~~~~~~~~~~~~~~~~
> +The i.MX 8M Mini Media Applications Processor is built to achieve both high
> +performance and low power consumption. ENETFEC is a hardware programmable
> +packet forwarding engine to provide high performance Ethernet interface.
> +The diagram below shows a system level overview of ENETFEC:
> +
> + ====================================================+===============
> + US +-----------------------------------------+ | Kernel Space
> + | | |
> + | ENETFEC Ethernet Driver | |
> + +-----------------------------------------+ |
> + ^ | |
> + ENETFEC RXQ | | TXQ |
> + PMD | | |
> + | v | +----------+
> + +-------------+ | | fec-uio |
> + | net_enetfec | | +----------+
> + +-------------+ |
> + ^ | |
> + TXQ | | RXQ |
> + | | |
> + | v |
> + ===================================================+===============
> + +----------------------------------------+
> + | | HW
> + | i.MX 8M MINI EVK |
> + | +-----+ |
> + | | MAC | |
> + +---------------+-----+------------------+
> + | PHY |
> + +-----+
> +
> +ENETFEC ethernet driver is traditional DPDK PMD driver running in the userspace.
> +The MAC and PHY are the hardware blocks. 'fec-uio' is the uio driver, enetfec PMD
> +uses uio interface to interact with kernel for PHY initialisation and for mapping
> +the allocated memory of register & BD in kernel with DPDK which gives access to
> +non-cacheble memory for BD. net_enetfec is logical ethernet interface, created by
> +ENETFEC driver.
> +
> +- ENETFEC driver registers the device in virtual device driver.
> +- RTE framework scans and will invoke the probe function of ENETFEC driver.
> +- The probe function will set the basic device registers and also setups BD rings.
> +- On packet Rx the respective BD Ring status bit is set which is then used for
> + packet processing.
> +- Then Tx is done first followed by Rx via logical interfaces.
> +
> +ENETFEC Features
> +~~~~~~~~~~~~~~~~~
> +
> +- ARMv8
> +
> +Supported ENETFEC SoCs
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- i.MX 8M Mini
> +
> +Prerequisites
> +~~~~~~~~~~~~~
> +
> +There are three main pre-requisites for executing ENETfec PMD on a i.MX
> +compatible board:
> +
> +1. **ARM 64 Tool Chain**
> +
> + For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
> +
> +2. **Linux Kernel**
> +
> + It can be obtained from `NXP's bitbucket: <https://bitbucket.sw.nxp.com/projects/LFAC/repos/linux-nxp/commits?until=refs%2Fheads%2Fnet%2Ffec-uio&merges=include>`_.
Bitbucket is an internal repository. Please check.
> +
> +3. **Rootfile system**
> +
> + Any *aarch64* supporting filesystem can be used. For example,
> + Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland which can be obtained
> + from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
> +
> +4. The ethernet device will be registered as virtual device, so enetfec has dependency on
> + **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_enetfec` to
> + run DPDK application.
> +
> +Driver compilation and testing
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Follow instructions available in the document
> +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +to launch **testpmd**
> +
> +Limitations
> +~~~~~~~~~~~
> +
> +- Multi queue is not supported.
> +- Link status is down always.
> +- Single ethernet interface.
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> new file mode 100644
> index 000000000..570069798
> --- /dev/null
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -0,0 +1,8 @@
> +;
> +; Supported features of the 'enetfec' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +ARMv8 = Y
> +Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 799697caf..93b68e701 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -25,6 +25,7 @@ Network Interface Controller Drivers
> e1000em
> ena
> enetc
> + enetfec
> enic
> fm10k
> hinic
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> new file mode 100644
> index 000000000..5fd2dbc2d
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -0,0 +1,89 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
1 line gap is recommended after copyright syntax.
> +#include <stdio.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <errno.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_vdev.h>
> +#include <rte_bus_vdev.h>
> +#include <rte_dev.h>
> +#include <rte_ether.h>
> +#include "enet_pmd_logs.h"
> +#include "enet_ethdev.h"
> +
> +#define ENETFEC_NAME_PMD net_enetfec
> +#define ENET_VDEV_GEM_ID_ARG "intf"
> +#define ENET_CDEV_INVALID_FD -1
> +
> +int enetfec_logtype_pmd;
> +
> +static int
> +enetfec_eth_init(struct rte_eth_dev *dev)
> +{
> + rte_eth_dev_probing_finish(dev);
> + return 0;
> +}
> +
> +static int
> +pmd_enetfec_probe(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *dev = NULL;
> + struct enetfec_private *fep;
> + const char *name;
> + int rc = -1;
Initialization with -1 is optional as rc is always getting a return value.
> +
> + name = rte_vdev_device_name(vdev);
> + if (name == NULL)
> + return -EINVAL;
> + ENET_PMD_LOG(INFO, "Initializing pmd_fec for %s", name);
> +
> + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep));
> + if (dev == NULL)
> + return -ENOMEM;
> +
> + /* setup board info structure */
> + fep = dev->data->dev_private;
> + fep->dev = dev;
> + rc = enetfec_eth_init(dev);
> + if (rc)
> + goto failed_init;
> + return 0;
> +failed_init:
> + ENET_PMD_ERR("Failed to init");
> + return rc;
> +}
> +
> +static int
> +pmd_enetfec_remove(struct rte_vdev_device *vdev)
> +{
> + struct rte_eth_dev *eth_dev = NULL;
> +
> + /* find the ethdev entry */
> + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> + if (!eth_dev)
> + return -ENODEV;
> +
> + rte_eth_dev_release_port(eth_dev);
Function may return error. Please handle the same.
> +
> + ENET_PMD_INFO("Closing sw device\n");
> + return 0;
> +}
> +
> +static
> +struct rte_vdev_driver pmd_enetfec_drv = {
> + .probe = pmd_enetfec_probe,
> + .remove = pmd_enetfec_remove,
> +};
> +
> +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv);
> +RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENET_VDEV_GEM_ID_ARG "=<int>");
> +
> +RTE_INIT(enetfec_pmd_init_log)
> +{
> + enetfec_logtype_pmd = rte_log_register("pmd.net.enetfec");
> + if (enetfec_logtype_pmd >= 0)
> + rte_log_set_level(enetfec_logtype_pmd, RTE_LOG_NOTICE);
> +}
> diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h
> new file mode 100644
> index 000000000..3833a70fc
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_ethdev.h
> @@ -0,0 +1,203 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#ifndef __ENET_ETHDEV_H__
I think, we should use ENETFEC in place of ENET.
> +#define __ENET_ETHDEV_H__
> +
> +#include <compat.h>
> +#include <rte_ethdev.h>
> +
> +/* ENET with AVB IP can support maximum 3 rx and tx queues.
> + */
> +#define ENET_MAX_Q 3
> +
> +#define BD_LEN 49152
> +#define ENET_TX_FR_SIZE 2048
Alignment issues
> +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */
> +#define MAX_RX_BD_RING_SIZE 512
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
> +
> +#define PKT_MAX_BUF_SIZE 1984
> +#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16)
> +#define ETH_ALEN RTE_ETHER_ADDR_LEN
> +#define ETH_HLEN RTE_ETHER_HDR_LEN
> +#define VLAN_HLEN 4
> +
> +
> +struct bufdesc {
> + uint16_t bd_datlen; /* buffer data length */
> + uint16_t bd_sc; /* buffer control & status */
> + uint32_t bd_bufaddr; /* buffer address */
> +};
> +
> +struct bufdesc_ex {
> + struct bufdesc desc;
> + uint32_t bd_esc;
> + uint32_t bd_prot;
> + uint32_t bd_bdu;
> + uint32_t ts;
> + uint16_t res0[4];
> +};
> +
> +struct bufdesc_prop {
> + int que_id;
> + /* Addresses of Tx and Rx buffers */
> + struct bufdesc *base;
> + struct bufdesc *last;
> + struct bufdesc *cur;
> + void __iomem *active_reg_desc;
> + uint64_t descr_baseaddr_p;
> + unsigned short ring_size;
> + unsigned char d_size;
> + unsigned char d_size_log2;
> +};
> +
> +struct enetfec_priv_tx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE];
> + struct bufdesc *dirty_tx;
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +struct enetfec_priv_rx_q {
> + struct bufdesc_prop bd;
> + struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE];
> + struct rte_mempool *pool;
> + struct enetfec_private *fep;
> +};
> +
> +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer
> + * descriptor base is x_bd_base. Currently available buffer are x_cur
> + * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx
> + * that is sent by the controller.
> + * The tx_cur and dirty_tx are same in completely full and empty
> + * conditions. Actual condition is determine by empty & ready bits.
determine -> is determined
> + */
> +struct enetfec_private {
> + struct rte_eth_dev *dev;
> + struct rte_eth_stats stats;
> + struct rte_mempool *pool;
> +
> + struct enetfec_priv_rx_q *rx_queues[ENET_MAX_Q];
> + struct enetfec_priv_tx_q *tx_queues[ENET_MAX_Q];
> + uint16_t max_rx_queues;
> + uint16_t max_tx_queues;
> +
> + unsigned int total_tx_ring_size;
> + unsigned int total_rx_ring_size;
> +
> + bool bufdesc_ex;
> + unsigned int tx_align;
> + unsigned int rx_align;
> + int full_duplex;
> + unsigned int phy_speed;
> + u_int32_t quirks;
> + int flag_csum;
> + int flag_pause;
> + int flag_wol;
> + bool rgmii_txc_delay;
> + bool rgmii_rxc_delay;
> + int link;
> + void *hw_baseaddr_v;
> + uint64_t hw_baseaddr_p;
> + void *bd_addr_v;
> + uint64_t bd_addr_p;
> + uint64_t bd_addr_p_r[ENET_MAX_Q];
> + uint64_t bd_addr_p_t[ENET_MAX_Q];
> + void *dma_baseaddr_r[ENET_MAX_Q];
> + void *dma_baseaddr_t[ENET_MAX_Q];
> + uint64_t cbus_size;
> + unsigned int reg_size;
> + unsigned int bd_size;
> + int hw_ts_rx_en;
> + int hw_ts_tx_en;
> +};
> +
> +#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); })
> +#define readl(p) rte_read32(p)
> +
> +static __always_inline
> +void __read_once_size(volatile void *p, void *res, int size)
> +{
> + switch (size) {
> + case 1:
> + *(__u8 *)res = *(volatile __u8 *)p;
> + break;
> + case 2:
> + *(__u16 *)res = *(volatile __u16 *)p;
> + break;
> + case 4:
> + *(__u32 *)res = *(volatile __u32 *)p;
> + break;
> + case 8:
> + *(__u64 *)res = *(volatile __u64 *)p;
> + break;
> + default:
> + break;
> + }
> +}
> +
> +#define __READ_ONCE(x)\
> +({\
> + union { typeof(x) __val; char __c[1]; } __u;\
> + __read_once_size(&(x), __u.__c, sizeof(x));\
> + __u.__val;\
> +})
> +#ifndef READ_ONCE
> +#define READ_ONCE(x) __READ_ONCE(x)
> +#endif
it appears that "READ_ONCE" is never used. Please delete all unused
portion of code.
> +
> +static inline struct
> +bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> +
> + struct bufdesc_prop *bd)
> +{
> + return (bdp >= bd->last) ? bd->base
> + : (struct bufdesc *)(((void *)bdp) + bd->d_size);
> +}
> +
> +static inline struct
> +bufdesc *enet_get_prevdesc(struct bufdesc *bdp,
> + struct bufdesc_prop *bd)
> +{
> + return (bdp <= bd->base) ? bd->last
> + : (struct bufdesc *)(((void *)bdp) - bd->d_size);
> +}
> +
> +static inline int
> +enet_get_bd_index(struct bufdesc *bdp,
> + struct bufdesc_prop *bd)
> +{
> + return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
> +}
> +
> +static inline phys_addr_t enetfec_mem_vtop(uint64_t vaddr)
it appears that it is never used. Please check.
> +{
> + const struct rte_memseg *memseg;
> + memseg = rte_mem_virt2memseg((void *)(uintptr_t)vaddr, NULL);
> + if (memseg)
> + return memseg->iova + RTE_PTR_DIFF(vaddr, memseg->addr);
> + return (size_t)NULL;
> +}
> +
> +static inline int fls64(unsigned long word)
function name should start from new line.
> +{
> + return (64 - __builtin_clzl(word)) - 1;
> +}
> +
> +uint16_t enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts);
> +uint16_t
> +enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
Unlike function definitions, the function prototypes do not need to
place the function return type on a separate line.
> +struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
> + struct bufdesc_prop *bd);
> +int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
> + struct rte_mbuf *mbuf);
> +
> +#endif /*__FEC_ETHDEV_H__*/
> diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
> new file mode 100644
> index 000000000..ff8daa359
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_pmd_logs.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#ifndef _ENET_LOGS_H_
> +#define _ENET_LOGS_H_
> +
> +extern int enetfec_logtype_pmd;
> +
> +/* PMD related logs */
> +#define ENET_PMD_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "fec_net: %s()" \
> + fmt "\n", __func__, ##args)
> +
> +#define PMD_INIT_FUNC_TRACE() ENET_PMD_LOG(DEBUG, " >>")
> +
> +#define ENET_PMD_DEBUG(fmt, args...) \
> + ENET_PMD_LOG(DEBUG, fmt, ## args)
> +#define ENET_PMD_ERR(fmt, args...) \
> + ENET_PMD_LOG(ERR, fmt, ## args)
> +#define ENET_PMD_INFO(fmt, args...) \
> + ENET_PMD_LOG(INFO, fmt, ## args)
> +
> +#define ENET_PMD_WARN(fmt, args...) \
> + ENET_PMD_LOG(WARNING, fmt, ## args)
> +
> +/* DP Logs, toggled out at compile time if level lower than current level */
> +#define ENET_DP_LOG(level, fmt, args...) \
> + RTE_LOG_DP(level, PMD, fmt, ## args)
> +
> +#endif /* _ENET_LOGS_H_ */
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> new file mode 100644
> index 000000000..252bf8330
> --- /dev/null
> +++ b/drivers/net/enetfec/meson.build
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright 2021 NXP
> +
> +if not is_linux
> + build = false
> + reason = 'only supported on linux'
> +endif
> +
> +deps += ['common_dpaax']
> +
> +sources = files('enet_ethdev.c')
> +
> +if cc.has_argument('-Wno-pointer-arith')
> + cflags += '-Wno-pointer-arith'
> +endif
> diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map
> new file mode 100644
> index 000000000..6e4fb220a
> --- /dev/null
> +++ b/drivers/net/enetfec/version.map
> @@ -0,0 +1,3 @@
> +DPDK_21 {
> + local: *;
> +};
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index c8b5ce298..c1307a3a6 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -18,6 +18,7 @@ drivers = [
> 'e1000',
> 'ena',
> 'enetc',
> + 'enetfec',
alignment issue.
> 'enic',
> 'failsafe',
> 'fm10k',
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added
2021-04-30 4:34 ` [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added Apeksha Gupta
2021-06-08 13:21 ` Andrew Rybchenko
@ 2021-07-04 4:27 ` Sachin Saxena (OSS)
1 sibling, 0 replies; 17+ messages in thread
From: Sachin Saxena (OSS) @ 2021-07-04 4:27 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit, dev; +Cc: hemant.agrawal
On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> Implemented the fec-uio driver in kernel. enetfec PMD uses
> UIO interface to interact with kernel for PHY initialisation
enetfec PMD uses UIO interface to interact with "fec-uio" driver implemented in kernel for PHY initialization...
> and for mapping the allocated memory of register & BD from
> kernel to DPDK which gives access to non-cacheble memory for BD.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> drivers/net/enetfec/enet_ethdev.c | 204 ++++++++++++++++++++++++++++++
> drivers/net/enetfec/enet_regs.h | 179 ++++++++++++++++++++++++++
> drivers/net/enetfec/enet_uio.c | 192 ++++++++++++++++++++++++++++
> drivers/net/enetfec/enet_uio.h | 54 ++++++++
> drivers/net/enetfec/meson.build | 3 +-
> 5 files changed, 631 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/enetfec/enet_regs.h
> create mode 100644 drivers/net/enetfec/enet_uio.c
> create mode 100644 drivers/net/enetfec/enet_uio.h
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index 5fd2dbc2d..5f4f2cf9e 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -11,18 +11,195 @@
> #include <rte_bus_vdev.h>
> #include <rte_dev.h>
> #include <rte_ether.h>
> +#include <rte_io.h>
> #include "enet_pmd_logs.h"
> #include "enet_ethdev.h"
> +#include "enet_regs.h"
> +#include "enet_uio.h"
>
> #define ENETFEC_NAME_PMD net_enetfec
> #define ENET_VDEV_GEM_ID_ARG "intf"
> #define ENET_CDEV_INVALID_FD -1
>
> +#define BIT(nr) (1 << (nr))
> +/* FEC receive acceleration */
> +#define ENET_RACC_IPDIS (1 << 1)
> +#define ENET_RACC_PRODIS (1 << 2)
Please consider Andrew's suggestions here.
> +#define ENET_RACC_SHIFT16 BIT(7)
> +#define ENET_RACC_OPTIONS (ENET_RACC_IPDIS | ENET_RACC_PRODIS)
> +
> +/* Transmitter timeout */
> +#define TX_TIMEOUT (2 * HZ)
> +
> +#define ENET_PAUSE_FLAG_AUTONEG 0x1
> +#define ENET_PAUSE_FLAG_ENABLE 0x2
> +#define ENET_WOL_HAS_MAGIC_PACKET (0x1 << 0)
> +#define ENET_WOL_FLAG_ENABLE (0x1 << 1)
> +#define ENET_WOL_FLAG_SLEEP_ON (0x1 << 2)
ENET_WOL_* are unused. please remove the unused code.
> +
> +/* Pause frame feild and FIFO threshold */
> +#define ENET_ENET_FCE (1 << 5)
ENET_ENET_*, repeating ENET don't look reasonable.
> +#define ENET_ENET_RSEM_V 0x84
> +#define ENET_ENET_RSFL_V 16
> +#define ENET_ENET_RAEM_V 0x8
> +#define ENET_ENET_RAFL_V 0x8
> +#define ENET_ENET_OPD_V 0xFFF0
Unused.
> +#define ENET_MDIO_PM_TIMEOUT 100 /* ms */
> +
> int enetfec_logtype_pmd;
>
> +/*
> + * This function is called to start or restart the FEC during a link
FEC -> ENETFEC
> + * change, transmit timeout or to reconfigure the FEC. The network
> + * packet processing for this device must be stopped before this call.
> + */
> +static void
> +enetfec_restart(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + uint32_t temp_mac[2];
> + uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
> + uint32_t ecntl = ENET_ETHEREN; /* ETHEREN */
> + /* TODO get eth addr from eth dev */
Remove TODO. We may use this addr as default Mac address for device for now.
> + struct rte_ether_addr addr = {
> + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} };
> + uint32_t val;
> +
> + /*
> + * enet-mac reset will reset mac address registers too,
> + * so need to reconfigure it.
> + */
> + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN);
> + rte_write32(rte_cpu_to_be_32(temp_mac[0]),
> + fep->hw_baseaddr_v + ENET_PALR);
> + rte_write32(rte_cpu_to_be_32(temp_mac[1]),
> + fep->hw_baseaddr_v + ENET_PAUR);
> +
> + /* Clear any outstanding interrupt. */
> + writel(0xffffffff, fep->hw_baseaddr_v + ENET_EIR);
> +
> + /* Enable MII mode */
> + if (fep->full_duplex == FULL_DUPLEX) {
> + /* FD enable */
> + rte_write32(0x04, fep->hw_baseaddr_v + ENET_TCR);
> + } else {
> + /* No Rcv on Xmit */
> + rcntl |= 0x02;
> + rte_write32(0x0, fep->hw_baseaddr_v + ENET_TCR);
> + }
> +
> + if (fep->quirks & QUIRK_RACC) {
> + val = rte_read32(fep->hw_baseaddr_v + ENET_RACC);
> + /* align IP header */
> + val |= ENET_RACC_SHIFT16;
> + if (fep->flag_csum & RX_FLAG_CSUM_EN)
> + /* set RX checksum */
> + val |= ENET_RACC_OPTIONS;
> + else
> + val &= ~ENET_RACC_OPTIONS;
> + rte_write32(val, fep->hw_baseaddr_v + ENET_RACC);
> + rte_write32(PKT_MAX_BUF_SIZE,
> + fep->hw_baseaddr_v + ENET_FRAME_TRL);
> + }
> +
> + /*
> + * The phy interface and speed need to get configured
> + * differently on enet-mac.
> + */
> + if (fep->quirks & QUIRK_HAS_ENET_MAC) {
> + /* Enable flow control and length check */
> + rcntl |= 0x40000000 | 0x00000020;
> +
> + /* RGMII, RMII or MII */
> + rcntl |= (1 << 6);
> + ecntl |= (1 << 5);
> + }
> +
> + /* enable pause frame*/
> + if ((fep->flag_pause & ENET_PAUSE_FLAG_ENABLE) ||
> + ((fep->flag_pause & ENET_PAUSE_FLAG_AUTONEG)
> + /*&& ndev->phydev && ndev->phydev->pause*/)) {
> + rcntl |= ENET_ENET_FCE;
> +
> + /* set FIFO threshold parameter to reduce overrun */
> + rte_write32(ENET_ENET_RSEM_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_SEM);
> + rte_write32(ENET_ENET_RSFL_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_SFL);
> + rte_write32(ENET_ENET_RAEM_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_AEM);
> + rte_write32(ENET_ENET_RAFL_V,
> + fep->hw_baseaddr_v + ENET_R_FIFO_AFL);
> +
> + /* OPD */
> + rte_write32(ENET_ENET_OPD_V, fep->hw_baseaddr_v + ENET_OPD);
> + } else {
> + rcntl &= ~ENET_ENET_FCE;
> + }
> +
> + rte_write32(rcntl, fep->hw_baseaddr_v + ENET_RCR);
> +
> + rte_write32(0, fep->hw_baseaddr_v + ENET_IAUR);
> + rte_write32(0, fep->hw_baseaddr_v + ENET_IALR);
> +
> + if (fep->quirks & QUIRK_HAS_ENET_MAC) {
> + /* enable ENET endian swap */
> + ecntl |= (1 << 8);
> + /* enable ENET store and forward mode */
> + rte_write32(1 << 8, fep->hw_baseaddr_v + ENET_TFWR);
> + }
> +
> + if (fep->bufdesc_ex)
> + ecntl |= (1 << 4);
> +
> + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> + fep->rgmii_txc_delay)
> + ecntl |= ENET_TXC_DLY;
> + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS &&
> + fep->rgmii_rxc_delay)
> + ecntl |= ENET_RXC_DLY;
> +
> + /* Enable the MIB statistic event counters */
> + rte_write32(0 << 31, fep->hw_baseaddr_v + ENET_MIBC);
> +
> + ecntl |= 0x70000000;
> + /* And last, enable the transmit and receive processing */
> + rte_write32(ecntl, fep->hw_baseaddr_v + ENET_ECR);
> + rte_delay_us(10);
> +}
> +
> +static int
> +enetfec_eth_open(struct rte_eth_dev *dev)
> +{
> + enetfec_restart(dev);
> +
> + return 0;
> +}
> +
> +static const struct eth_dev_ops ops = {
> + .dev_start = enetfec_eth_open,
enetfec_eth_open -> enetfec_eth_start ?
Also, enetfec_eth_stop() is missing in this patch.
> +};
> +
> static int
> enetfec_eth_init(struct rte_eth_dev *dev)
> {
> + struct enetfec_private *fep = dev->data->dev_private;
> + struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
> + uint64_t rx_offloads = eth_conf->rxmode.offloads;
> +
> + fep->full_duplex = FULL_DUPLEX;
> +
> + dev->dev_ops = &ops;
> + if (fep->quirks & QUIRK_VLAN)
> + /* enable hw VLAN support */
> + rx_offloads |= DEV_RX_OFFLOAD_VLAN;
> +
> + if (fep->quirks & QUIRK_CSUM) {
> + /* enable hw accelerator */
> + rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> + fep->flag_csum |= RX_FLAG_CSUM_EN;
> + }
These changes and updating features list in patch 4/4, should be handled
in separate individual patches to add NIC features.
> +
> rte_eth_dev_probing_finish(dev);
> return 0;
> }
> @@ -34,6 +211,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> struct enetfec_private *fep;
> const char *name;
> int rc = -1;
> + int i;
> + unsigned int bdsize;
>
> name = rte_vdev_device_name(vdev);
> if (name == NULL)
> @@ -47,6 +226,31 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> /* setup board info structure */
> fep = dev->data->dev_private;
> fep->dev = dev;
> +
> + fep->max_rx_queues = ENET_MAX_Q;
> + fep->max_tx_queues = ENET_MAX_Q;
> + fep->quirks = QUIRK_HAS_ENET_MAC | QUIRK_GBIT | QUIRK_BUFDESC_EX
> + | QUIRK_CSUM | QUIRK_VLAN | QUIRK_ERR007885
> + | QUIRK_RACC | QUIRK_COALESCE | QUIRK_EEE;
Same as above.
> +
> + config_enetfec_uio(fep);
return type should be checked.
> +
> + /* Get the BD size for distributing among six queues */
> + bdsize = (fep->bd_size) / 6;
Use MACRO for number of queues instead of hard coding. Also, should
configure only 1 queue
as a part of this patch as multi-queue support is upcoming feature.
> +
> + for (i = 0; i < fep->max_tx_queues; i++) {
> + fep->dma_baseaddr_t[i] = fep->bd_addr_v;
> + fep->bd_addr_p_t[i] = fep->bd_addr_p;
> + fep->bd_addr_v = fep->bd_addr_v + bdsize;
> + fep->bd_addr_p = fep->bd_addr_p + bdsize;
> + }
> + for (i = 0; i < fep->max_rx_queues; i++) {
> + fep->dma_baseaddr_r[i] = fep->bd_addr_v;
> + fep->bd_addr_p_r[i] = fep->bd_addr_p;
> + fep->bd_addr_v = fep->bd_addr_v + bdsize;
> + fep->bd_addr_p = fep->bd_addr_p + bdsize;
> + }
> +
> rc = enetfec_eth_init(dev);
> if (rc)
> goto failed_init;
> diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
> new file mode 100644
> index 000000000..d037aafae
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_regs.h
> @@ -0,0 +1,179 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
Need a 1 line gap.
> +#ifndef __ENET_REGS_H
use ENETFEC at all places.
> +#define __ENET_REGS_H
> +
> +/* Ethernet receive use control and status of buffer descriptor
> + */
> +#define RX_BD_TR ((ushort)0x0001) /* Truncated */
> +#define RX_BD_OV ((ushort)0x0002) /* Over-run */
> +#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */
> +#define RX_BD_SH ((ushort)0x0008) /* Reserved */
> +#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */
> +#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length voilation */
> +#define RX_BD_MC ((ushort)0x0040) /* Rcvd Multicast */
> +#define RX_BD_BC ((ushort)0x0080) /* Rcvd Broadcast */
> +#define RX_BD_MISS ((ushort)0x0100) /* Miss: promisc mode frame */
> +#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */
> +#define RX_BD_LAST ((ushort)0x0800) /* Buffer is the last in the frame */
> +#define RX_BD_INTR ((ushort)0x1000) /* Software specified field */
> +/* 0 The next BD in consecutive location
> + * 1 The next BD in ENETn_RDSR.
> + */
> +#define RX_BD_WRAP ((ushort)0x2000)
> +#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
> +#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
> +
> +/* Ethernet receive use control and status of enhanced buffer descriptor */
> +#define BD_ENET_RX_VLAN 0x00000004
> +
> +/* Ethernet transmit use control and status of buffer descriptor.
> + */
> +#define TX_BD_CSL ((ushort)0x0001)
> +#define TX_BD_UN ((ushort)0x0002)
> +#define TX_BD_RCMASK ((ushort)0x003c)
> +#define TX_BD_RL ((ushort)0x0040)
> +#define TX_BD_LC ((ushort)0x0080)
> +#define TX_BD_HB ((ushort)0x0100)
> +#define TX_BD_DEF ((ushort)0x0200)
> +#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */
> +#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */
> +#define TX_BD_INTR ((ushort)0x1000)
> +#define TX_BD_WRAP ((ushort)0x2000)
> +#define TX_BD_PAD ((ushort)0x4000)
> +#define TX_BD_READY ((ushort)0x8000) /* Data is ready */
> +
> +#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */
> +
> +/* Ethernet transmit use control and status of enhanced buffer descriptor */
> +#define TX_BD_IINS 0x08000000
> +#define TX_BD_PINS 0x10000000
> +#define TX_BD_TS 0x20000000
> +#define TX_BD_INT 0x40000000
> +
> +#define ENET_RD_START(X) (((X) == 1) ? ENET_RD_START_1 : \
> + (((X) == 2) ? \
> + ENET_RD_START_2 : ENET_RD_START_0))
> +#define ENET_TD_START(X) (((X) == 1) ? ENET_TD_START_1 : \
> + (((X) == 2) ? \
> + ENET_TD_START_2 : ENET_TD_START_0))
> +#define ENET_MRB_SIZE(X) (((X) == 1) ? ENET_MRB_SIZE_1 : \
> + (((X) == 2) ? \
> + ENET_MRB_SIZE_2 : ENET_MRB_SIZE_0))
> +
> +#define ENET_DMACFG(X) (((X) == 2) ? ENET_DMA2CFG : ENET_DMA1CFG)
unused.
> +
> +#define ENABLE_DMA_CLASS (1 << 16)
> +#define ENET_RCM(X) (((X) == 2) ? ENET_RCM2 : ENET_RCM1)
> +#define SLOPE_IDLE_MASK 0xffff
> +#define SLOPE_IDLE_1 0x200 /* BW_fraction: 0.5 */
> +#define SLOPE_IDLE_2 0x200 /* BW_fraction: 0.5 */
> +#define SLOPE_IDLE(X) (((X) == 1) ? \
> + (SLOPE_IDLE_1 & SLOPE_IDLE_MASK) : \
> + (SLOPE_IDLE_2 & SLOPE_IDLE_MASK))
> +#define RCM_MATCHEN (0x1 << 16)
> +#define CFG_RCMR_CMP(v, n) (((v) & 0x7) << ((n) << 2))
> +#define RCMR_CMP1 (CFG_RCMR_CMP(0, 0) | CFG_RCMR_CMP(1, 1) | \
> + CFG_RCMR_CMP(2, 2) | CFG_RCMR_CMP(3, 3))
> +#define RCMR_CMP2 (CFG_RCMR_CMP(4, 0) | CFG_RCMR_CMP(5, 1) | \
> + CFG_RCMR_CMP(6, 2) | CFG_RCMR_CMP(7, 3))
> +#define RCM_CMP(X) (((X) == 1) ? RCMR_CMP1 : RCMR_CMP2)
> +#define BD_TX_FTYPE(X) (((X) & 0xf) << 20)
> +
All above Macros appears unused as of now. Please check and remove.
> +#define RX_BD_INT 0x00800000
> +#define RX_BD_PTP ((ushort)0x0400)
PTP is supported?
> +#define RX_BD_ICE 0x00000020
> +#define RX_BD_PCR 0x00000010
> +#define RX_FLAG_CSUM_EN (RX_BD_ICE | RX_BD_PCR)
> +#define RX_FLAG_CSUM_ERR (RX_BD_ICE | RX_BD_PCR)
> +#define ENET_MII ((uint)0x00800000) /*MII_interrupt*/
> +
> +#define ENET_ETHEREN ((uint)0x00000002)
> +#define ENET_TXC_DLY ((uint)0x00010000)
> +#define ENET_RXC_DLY ((uint)0x00020000)
> +
> +/* ENET MAC is in controller */
> +#define QUIRK_HAS_ENET_MAC (1 << 0)
> +/* gasket is used in controller */
> +#define QUIRK_GASKET (1 << 2)
> +/* GBIT supported in controller */
> +#define QUIRK_GBIT (1 << 3)
> +/* Controller has extended descriptor buffer */
> +#define QUIRK_BUFDESC_EX (1 << 4)
> +/* Controller support hardware checksum */
> +#define QUIRK_CSUM (1 << 5)
> +/* Controller support hardware vlan*/
> +#define QUIRK_VLAN (1 << 6)
> +/* ENET IP hardware AVB
> + * i.MX8MM ENET IP supports the AVB (Audio Video Bridging) feature.
> + */
> +#define QUIRK_AVB (1 << 8)
> +#define QUIRK_ERR007885 (1 << 9)
> +/* RACC register supported by controller */
> +#define QUIRK_RACC (1 << 12)
> +/* interrupt coalesc supported by controller*/
> +#define QUIRK_COALESCE (1 << 13)
> +/* To support IEEE 802.3az EEE std, new feature is added by i.MX8MQ ENET IP
> + * version.
> + */
> +#define QUIRK_EEE (1 << 17)
> +/* i.MX8QM ENET IP version added the feature to generate the delayed TXC or
> + * RXC. For its implementation, ENET uses synchronized clocks (250MHz) for
> + * generating delay of 2ns.
> + */
> +#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18)
> +
> +#define ENET_EIR 0x004 /* Interrupt event register */
> +#define ENET_EIMR 0x008 /* Interrupt mask register */
> +#define ENET_RDAR_0 0x010 /* Receive descriptor active register ring0 */
> +#define ENET_TDAR_0 0x014 /* Transmit descriptor active register ring0 */
> +#define ENET_ECR 0x024 /* Ethernet control register */
> +#define ENET_MSCR 0x044 /* MII speed control register */
> +#define ENET_MIBC 0x064 /* MIB control and status register */
> +#define ENET_RCR 0x084 /* Receive control register */
> +#define ENET_TCR 0x0c4 /* Transmit Control register */
> +#define ENET_PALR 0x0e4 /* MAC address low 32 bits */
> +#define ENET_PAUR 0x0e8 /* MAC address high 16 bits */
> +#define ENET_OPD 0x0ec /* Opcode/Pause duration register */
> +#define ENET_IAUR 0x118 /* hash table 32 bits high */
> +#define ENET_IALR 0x11c /* hash table 32 bits low */
> +#define ENET_GAUR 0x120 /* grp hash table 32 bits high */
> +#define ENET_GALR 0x124 /* grp hash table 32 bits low */
> +#define ENET_TFWR 0x144 /* transmit FIFO water_mark */
> +#define ENET_RD_START_1 0x160 /* Receive descriptor ring1 start register */
> +#define ENET_TD_START_1 0x164 /* Transmit descriptor ring1 start register */
> +#define ENET_MRB_SIZE_1 0x168 /* Maximum receive buffer size register ring1 */
> +#define ENET_RD_START_2 0x16c /* Receive descriptor ring2 start register */
> +#define ENET_TD_START_2 0x170 /* Transmit descriptor ring2 start register */
> +#define ENET_MRB_SIZE_2 0x174 /* Maximum receive buffer size register ring2 */
> +#define ENET_RD_START_0 0x180 /* Receive descriptor ring0 start reg */
> +#define ENET_TD_START_0 0x184 /* Transmit buffer descriptor ring0 start reg */
> +#define ENET_MRB_SIZE_0 0x188 /* Maximum receive buffer size register ring0*/
> +#define ENET_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */
> +#define ENET_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */
> +#define ENET_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */
> +#define ENET_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */
> +#define ENET_FRAME_TRL 0x1b0 /* Frame truncation length */
> +#define ENET_RACC 0x1c4 /* Receive Accelerator function configuration*/
> +#define ENET_RCM1 0x1c8 /* Receive classification match register ring1 */
> +#define ENET_RCM2 0x1cc /* Receive classification match register ring2 */
> +#define ENET_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */
> +#define ENET_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */
> +#define ENET_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */
> +#define ENET_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */
> +#define ENET_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */
> +#define ENET_TDAR_2 0x1ec /* Tx descriptor active register ring2 */
> +#define ENET_MII_GSK_CFGR 0x300 /* MII_GSK Configuration register */
> +#define ENET_MII_GSK_ENR 0x308 /* MII_GSK Enable register*/
> +
> +#define BM_MII_GSK_CFGR_MII 0x00
> +#define BM_MII_GSK_CFGR_RMII 0x01
> +#define BM_MII_GSK_CFGR_FRCONT_10M 0x40
We may remove all MII_* defines as will not use same. Please check.
> +
> +/* full duplex or half duplex */
> +#define HALF_DUPLEX 0x00
> +#define FULL_DUPLEX 0x01
> +#define UNKNOWN_DUPLEX 0xff
Duplicate Macros definition. Also defined in "enet_ethdev.h"
> +
> +#endif /*__ENET_REGS_H */
> diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
> new file mode 100644
> index 000000000..b64dc522e
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_uio.c
> @@ -0,0 +1,192 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <unistd.h>
> +#include <stdlib.h>
> +#include <dirent.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +
> +#include <rte_common.h>
> +#include <rte_malloc.h>
> +#include "enet_pmd_logs.h"
> +#include "enet_uio.h"
> +
> +static struct uio_job enetfec_uio_job;
> +int count;
> +
> +/** @brief Reads first line from a file.
> + * Composes file name as: root/subdir/filename
> + *
> + * @param [in] root Root path
> + * @param [in] subdir Subdirectory name
> + * @param [in] filename File name
> + * @param [out] line The first line read from file.
> + *
> + * @retval 0 for succes
> + * @retval other value for error
> + */
> +static int
> +file_read_first_line(const char root[], const char subdir[],
> + const char filename[], char *line)
> +{
> + char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME];
> + int fd = 0, ret = 0;
> +
> + /*compose the file name: root/subdir/filename */
> + memset(absolute_file_name, 0, sizeof(absolute_file_name));
> + snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME,
> + "%s/%s/%s", root, subdir, filename);
> +
> + fd = open(absolute_file_name, O_RDONLY);
> + if (fd <= 0)
> + ENET_PMD_ERR("Error opening file %s", absolute_file_name);
> +
> + /* read UIO device name from first line in file */
> + ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH);
> + close(fd);
> +
> + /* NULL-ify string */
> + line[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH - 1] = '\0';
We must NULL-ify after actual number of bytes read.
line[ret] ='\0';
Also, set this after checking ret value.
> +
> + if (ret <= 0) {
> + ENET_PMD_ERR("Error reading from file %s", absolute_file_name);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +/** @brief Maps rx-tx bd range assigned for a bd ring.
> + *
> + * @param [in] uio_device_fd UIO device file descriptor
> + * @param [in] uio_device_id UIO device id
> + * @param [in] uio_map_id UIO allows maximum 5 different mapping for
> + each device. Maps start with id 0.
> + * @param [out] map_size Map size.
> + * @param [out] map_addr Map physical address
> + * @retval NULL if failed to map registers
> + * @retval Virtual address for mapped register address range
> + */
> +static void *
> +uio_map_mem(int uio_device_fd, int uio_device_id,
> + int uio_map_id, int *map_size, uint64_t *map_addr)
> +{
> + void *mapped_address = NULL;
> + unsigned int uio_map_size = 0;
> + unsigned int uio_map_p_addr = 0;
> + char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME];
> + char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME];
> + char uio_map_size_str[32];
must be
uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1];
> + char uio_map_p_addr_str[64];
> + int ret = 0;
> +
> + /* compose the file name: root/subdir/filename */
> + memset(uio_sys_root, 0, sizeof(uio_sys_root));
> + memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir));
> + memset(uio_map_size_str, 0, sizeof(uio_map_size_str));
> + memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str));
> +
> + /* Compose string: /sys/class/uio/uioX */
> + snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d",
> + FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id);
> + /* Compose string: maps/mapY */
> + snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d",
> + FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id);
> +
> + /* Read first (and only) line from file
> + * /sys/class/uio/uioX/maps/mapY/size
> + */
> + ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
> + "size", uio_map_size_str);
> + if (ret)
check with 0. Please handle it every where.
> + ENET_PMD_ERR("file_read_first_line() failed");
return NULL on error.
> +
> + ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir,
> + "addr", uio_map_p_addr_str);
the file_read_first_line() will read only
FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH bytes. If that is sufficient then why
uio_map_p_addr_str[] has length of 64?
> + if (ret)
> + ENET_PMD_ERR("file_read_first_line() failed");
return NULL on error.
> +
> + /* Read mapping size and physical address expressed in hexa(base 16) */
> + uio_map_size = strtol(uio_map_size_str, NULL, 16);
> + uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16);
> +
> + if (uio_map_id == 0) {
> + /* Map the register address in user space when map_id is 0 */
> + mapped_address = mmap(0 /*dynamically choose virtual address */,
> + uio_map_size, PROT_READ | PROT_WRITE,
> + MAP_SHARED, uio_device_fd, 0);
> + } else {
> + /* Map the BD memory in user space */
> + mapped_address = mmap(NULL, uio_map_size,
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE));
> + }
> +
> + if (mapped_address == MAP_FAILED) {
> + ENET_PMD_ERR("Failed to map! errno = %d uio job fd = %d,"
> + "uio device id = %d, uio map id = %d", errno,
> + uio_device_fd, uio_device_id, uio_map_id);
> + return 0;
return NULL on error.
> + }
> +
> + /* Save the map size to use it later on for munmap-ing */
> + *map_size = uio_map_size;
> + *map_addr = uio_map_p_addr;
> + ENET_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p",
> + uio_device_id, uio_map_id, uio_map_size, mapped_address);
> +
> + return mapped_address;
> +}
> +
> +int
> +config_enetfec_uio(struct enetfec_private *fep)
> +{
> + char uio_device_file_name[32];
> + struct uio_job *uio_job = NULL;
> +
> + /* Mapping is done only one time */
> + if (count) {
Suggestion: may be we can use any self explanatory flag name like, "mapped".
> + printf("Mapping already done, can't map again!\n");
> + return 0;
> + }
> +
> + uio_job = &enetfec_uio_job;
> +
> + /* Find UIO device created by ENETFEC-UIO kernel driver */
> + memset(uio_device_file_name, 0, sizeof(uio_device_file_name));
> + snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d",
> + FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number);
> +
> + /* Open device file */
> + uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
> + if (uio_job->uio_fd < 0) {
> + printf("US_UIO: Open Failed\n");
> + exit(1);
> + }
> +
> + ENET_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d",
> + uio_device_file_name, uio_job->uio_fd);
> +
> + fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd,
> + uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID,
> + &uio_job->map_size, &uio_job->map_addr);
Check for NULL return.
> + fep->hw_baseaddr_p = uio_job->map_addr;
> + fep->reg_size = uio_job->map_size;
> +
> + fep->bd_addr_v = uio_map_mem(uio_job->uio_fd,
Check for NULL return.
> + uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID,
> + &uio_job->map_size, &uio_job->map_addr);
> + fep->bd_addr_p = uio_job->map_addr;
> + fep->bd_size = uio_job->map_size;
> +
> + count++;
> +
> + return 0;
> +}
> diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h
> new file mode 100644
> index 000000000..b220cae9d
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_uio.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 NXP
> + */
> +
> +#include "enet_ethdev.h"
> +
> +/* Prefix path to sysfs directory where UIO device attributes are exported.
> + * Path for UIO device X is /sys/class/uio/uioX
> + */
> +#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio"
> +
> +/* Subfolder in sysfs where mapping attributes are exported
> + * for each UIO device. Path for mapping Y for device X is:
> + * /sys/class/uio/uioX/maps/mapY
> + */
> +#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map"
> +
> +/* Name of UIO device file prefix. Each UIO device will have a device file
> + * /dev/uioX, where X is the minor device number.
> + */
> +#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio"
> +
> +/* Maximum length for the name of an UIO device file.
> + * Device file name format is: /dev/uioX.
> + */
> +#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30
> +
> +/* Maximum length for the name of an attribute file for an UIO device.
> + * Attribute files are exported in sysfs and have the name formatted as:
> + * /sys/class/uio/uioX/<attribute_file_name>
> + */
> +#define FEC_UIO_MAX_ATTR_FILE_NAME 100
> +
> +/* The id for the mapping used to export ENETFEC registers and BD memory to
> + * user space through UIO device.
> + */
> +#define FEC_UIO_REG_MAP_ID 0
> +#define FEC_UIO_BD_MAP_ID 1
> +
> +#define MAP_PAGE_SIZE 4096
> +
> +struct uio_job {
> + uint32_t fec_id;
> + int uio_fd;
> + void *bd_start_addr;
> + void *register_base_addr;
> + int map_size;
> + uint64_t map_addr;
> + int uio_minor_number;
> +};
> +
> +int config_enetfec_uio(struct enetfec_private *fep);
> +void enetfec_uio_init(void);
> +void enetfec_cleanup(void);
> diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
> index 252bf8330..05183bd44 100644
> --- a/drivers/net/enetfec/meson.build
> +++ b/drivers/net/enetfec/meson.build
> @@ -8,7 +8,8 @@ endif
>
> deps += ['common_dpaax']
>
> -sources = files('enet_ethdev.c')
> +sources = files('enet_ethdev.c',
> + 'enet_uio.c')
>
> if cc.has_argument('-Wno-pointer-arith')
> cflags += '-Wno-pointer-arith'
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration
2021-04-30 4:34 ` [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration Apeksha Gupta
2021-06-08 13:38 ` Andrew Rybchenko
@ 2021-07-04 6:46 ` Sachin Saxena (OSS)
1 sibling, 0 replies; 17+ messages in thread
From: Sachin Saxena (OSS) @ 2021-07-04 6:46 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit; +Cc: dev, hemant.agrawal, sachin.saxena
On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> This patch added RX/TX queue configuration setup operations.
added -> adds
> On packet Rx the respective BD Ring status bit is set which is then
Suggestion: Rx -> reception
> used for packet processing.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> drivers/net/enetfec/enet_ethdev.c | 223 ++++++++++++++++++++++++++++++
> 1 file changed, 223 insertions(+)
>
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index 5f4f2cf9e..b4816179a 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -48,6 +48,19 @@
>
> int enetfec_logtype_pmd;
>
> +/* Supported Rx offloads */
> +static uint64_t dev_rx_offloads_sup =
> + DEV_RX_OFFLOAD_IPV4_CKSUM |
> + DEV_RX_OFFLOAD_UDP_CKSUM |
> + DEV_RX_OFFLOAD_TCP_CKSUM |
> + DEV_RX_OFFLOAD_VLAN_STRIP |
> + DEV_RX_OFFLOAD_CHECKSUM;
> +
> +static uint64_t dev_tx_offloads_sup =
> + DEV_TX_OFFLOAD_IPV4_CKSUM |
> + DEV_TX_OFFLOAD_UDP_CKSUM |
> + DEV_TX_OFFLOAD_TCP_CKSUM;
> +
> /*
> * This function is called to start or restart the FEC during a link
> * change, transmit timeout or to reconfigure the FEC. The network
> @@ -176,8 +189,218 @@ enetfec_eth_open(struct rte_eth_dev *dev)
> return 0;
> }
>
> +
> +static int
> +enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
> +{
> + ENET_PMD_INFO("%s: returning zero ", __func__);
> + return 0;
> +}
> +
Remove this if not required.
> +static int
> +enetfec_eth_info(__rte_unused struct rte_eth_dev *dev,
> + struct rte_eth_dev_info *dev_info)
> +{
> + dev_info->max_rx_queues = ENET_MAX_Q;
> + dev_info->max_tx_queues = ENET_MAX_Q;
> + dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> + dev_info->rx_offload_capa = dev_rx_offloads_sup;
> + dev_info->tx_offload_capa = dev_tx_offloads_sup;
> +
> + return 0;
> +}
> +
> +static const unsigned short offset_des_active_rxq[] = {
> + ENET_RDAR_0, ENET_RDAR_1, ENET_RDAR_2
> +};
> +
> +static const unsigned short offset_des_active_txq[] = {
> + ENET_TDAR_0, ENET_TDAR_1, ENET_TDAR_2
> +};
> +
> +static int
> +enetfec_tx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_desc,
> + __rte_unused unsigned int socket_id,
> + __rte_unused const struct rte_eth_txconf *tx_conf)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> + struct bufdesc *bdp, *bd_base;
> + struct enetfec_priv_tx_q *txq;
> + unsigned int size;
> + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> + sizeof(struct bufdesc);
> + unsigned int dsize_log2 = fls64(dsize);
> +
> + /* allocate transmit queue */
> + txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE);
> + if (!txq) {
> + ENET_PMD_ERR("transmit queue allocation failed");
> + return -ENOMEM;
> + }
> +
> + if (nb_desc > MAX_TX_BD_RING_SIZE) {
> + nb_desc = MAX_TX_BD_RING_SIZE;
> + ENET_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n");
> + }
> + txq->bd.ring_size = nb_desc;
> + fep->total_tx_ring_size += txq->bd.ring_size;
> + fep->tx_queues[queue_idx] = txq;
> +
> + rte_write32(fep->bd_addr_p_t[queue_idx],
> + fep->hw_baseaddr_v + ENET_TD_START(queue_idx));
Do we need rte_cpu_to_le_* ?
> +
> + /* Set transmit descriptor base. */
> + txq = fep->tx_queues[queue_idx];
> + txq->fep = fep;
> + size = dsize * txq->bd.ring_size;
> + bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx];
> + txq->bd.que_id = queue_idx;
> + txq->bd.base = bd_base;
> + txq->bd.cur = bd_base;
> + txq->bd.d_size = dsize;
> + txq->bd.d_size_log2 = dsize_log2;
> + txq->bd.active_reg_desc =
> + fep->hw_baseaddr_v + offset_des_active_txq[queue_idx];
> + bd_base = (struct bufdesc *)(((void *)bd_base) + size);
> + txq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
> + bdp = txq->bd.base;
> + bdp = txq->bd.cur;
> +
> + for (i = 0; i < txq->bd.ring_size; i++) {
> + /* Initialize the BD for every fragment in the page. */
> + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
rte_cpu_to_le_16(0) has no effect on '0'.
> + if (txq->tx_mbuf[i]) {
Compare vs NULL
> + rte_pktmbuf_free(txq->tx_mbuf[i]);
> + txq->tx_mbuf[i] = NULL;
> + }
> + rte_write32(rte_cpu_to_le_32(0), &bdp->bd_bufaddr);
rte_cpu_to_le_32 has no effect.
> + bdp = enet_get_nextdesc(bdp, &txq->bd);
> + }
> +
> + /* Set the last buffer to wrap */
> + bdp = enet_get_prevdesc(bdp, &txq->bd);
> + rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) |
> + rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> + txq->dirty_tx = bdp;
> + dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx];
> + return 0;
> +}
> +
> +static int
> +enetfec_rx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_rx_desc,
> + __rte_unused unsigned int socket_id,
> + __rte_unused const struct rte_eth_rxconf *rx_conf,
> + struct rte_mempool *mb_pool)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> + struct bufdesc *bd_base;
> + struct bufdesc *bdp;
> + struct enetfec_priv_rx_q *rxq;
> + unsigned int size;
> + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) :
> + sizeof(struct bufdesc);
> + unsigned int dsize_log2 = fls64(dsize);
> +
> + /* allocate receive queue */
> + rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE);
> + if (!rxq) {
> + ENET_PMD_ERR("receive queue allocation failed");
> + return -ENOMEM;
> + }
> +
> + if (nb_rx_desc > MAX_RX_BD_RING_SIZE) {
> + nb_rx_desc = MAX_RX_BD_RING_SIZE;
> + ENET_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n");
> + }
> +
> + rxq->bd.ring_size = nb_rx_desc;
> + fep->total_rx_ring_size += rxq->bd.ring_size;
> + fep->rx_queues[queue_idx] = rxq;
> +
> + rte_write32(fep->bd_addr_p_r[queue_idx],
> + fep->hw_baseaddr_v + ENET_RD_START(queue_idx));
> + rte_write32(PKT_MAX_BUF_SIZE,
> + fep->hw_baseaddr_v + ENET_MRB_SIZE(queue_idx));
Please check if we need rte_cpu_to_le_* .
> +
> + /* Set receive descriptor base. */
> + rxq = fep->rx_queues[queue_idx];
> + rxq->pool = mb_pool;
> + size = dsize * rxq->bd.ring_size;
> + bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx];
> + rxq->bd.que_id = queue_idx;
> + rxq->bd.base = bd_base;
> + rxq->bd.cur = bd_base;
> + rxq->bd.d_size = dsize;
> + rxq->bd.d_size_log2 = dsize_log2;
> + rxq->bd.active_reg_desc =
> + fep->hw_baseaddr_v + offset_des_active_rxq[queue_idx];
> + bd_base = (struct bufdesc *)(((void *)bd_base) + size);
> + rxq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize);
> +
> + rxq->fep = fep;
> + bdp = rxq->bd.base;
> + rxq->bd.cur = bdp;
> +
> + for (i = 0; i < nb_rx_desc; i++) {
> + /* Initialize Rx buffers from pktmbuf pool */
> + struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool);
> + if (mbuf == NULL) {
> + ENET_PMD_ERR("mbuf failed\n");
> + goto err_alloc;
> + }
> +
> + /* Get the virtual address & physical address */
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> + &bdp->bd_bufaddr);
> +
> + rxq->rx_mbuf[i] = mbuf;
> + rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc);
> +
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + }
> +
> + /* Initialize the receive buffer descriptors. */
> + bdp = rxq->bd.cur;
> + for (i = 0; i < rxq->bd.ring_size; i++) {
> + /* Initialize the BD for every fragment in the page. */
> + if (rte_read32(&bdp->bd_bufaddr))
Compare vs 0
> + rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY),
> + &bdp->bd_sc);
> + else
> + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc);
No effect of rte_cpu_to_le_16.
> +
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + }
> +
> + /* Set the last buffer to wrap */
> + bdp = enet_get_prevdesc(bdp, &rxq->bd);
> + rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) |
> + rte_read16(&bdp->bd_sc)), &bdp->bd_sc);
> + dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx];
> + rte_write32(0x0, fep->rx_queues[queue_idx]->bd.active_reg_desc);
> + return 0;
> +
> +err_alloc:
> + for (i = 0; i < nb_rx_desc; i++) {
> + rte_pktmbuf_free(rxq->rx_mbuf[i]);
Only free, if mbuf was allocated earlier. (rxq->rx_mbuf[i] != NULL)
> + rxq->rx_mbuf[i] = NULL;
> + }
> + rte_free(rxq);
> + return -1;
> +}
> +
> static const struct eth_dev_ops ops = {
> .dev_start = enetfec_eth_open,
> + .dev_configure = enetfec_eth_configure,
Better to introduce function pointers later when they will be implemented.
> + .dev_infos_get = enetfec_eth_info,
> + .rx_queue_setup = enetfec_rx_queue_setup,
> + .tx_queue_setup = enetfec_tx_queue_setup,
> };
>
> static int
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support
2021-04-30 4:34 ` [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-06-08 13:42 ` Andrew Rybchenko
@ 2021-07-05 8:48 ` Sachin Saxena (OSS)
1 sibling, 0 replies; 17+ messages in thread
From: Sachin Saxena (OSS) @ 2021-07-05 8:48 UTC (permalink / raw)
To: Apeksha Gupta, ferruh.yigit, dev; +Cc: hemant.agrawal
On 30-Apr-21 10:04 AM, Apeksha Gupta wrote:
> This patch supported checksum offloads and add burst enqueue and
> dequeue operations to the enetfec PMD.
>
> Loopback mode is added, compile time flag 'ENETFEC_LOOPBACK' is
> used to enable this feature. By default loopback mode is disabled.
> Basic features added like promiscuous enable, basic stats.
>
> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
> ---
> doc/guides/nics/enetfec.rst | 4 +
> doc/guides/nics/features/enetfec.ini | 5 +
> drivers/net/enetfec/enet_ethdev.c | 212 +++++++++++-
> drivers/net/enetfec/enet_rxtx.c | 499 +++++++++++++++++++++++++++
> 4 files changed, 719 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/enetfec/enet_rxtx.c
>
> diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
> index 10f495fb9..adbb52392 100644
> --- a/doc/guides/nics/enetfec.rst
> +++ b/doc/guides/nics/enetfec.rst
> @@ -75,6 +75,10 @@ ENETFEC driver.
> ENETFEC Features
> ~~~~~~~~~~~~~~~~~
>
> +- Basic stats
> +- Promiscuous
> +- VLAN offload
> +- L3/L4 checksum offload
As suggested by Andrew, we should add features in separate patch/es.
> - ARMv8
>
> Supported ENETFEC SoCs
> diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
> index 570069798..fcc217773 100644
> --- a/doc/guides/nics/features/enetfec.ini
> +++ b/doc/guides/nics/features/enetfec.ini
> @@ -4,5 +4,10 @@
> ; Refer to default.ini for the full list of available PMD features.
> ;
> [Features]
> +Basic stats = Y
> +Promiscuous mode = Y
> +VLAN offload = Y
> +L3 checksum offload = Y
> +L4 checksum offload = Y
> ARMv8 = Y
> Usage doc = Y
> diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
> index b4816179a..ca2cf929f 100644
> --- a/drivers/net/enetfec/enet_ethdev.c
> +++ b/drivers/net/enetfec/enet_ethdev.c
> @@ -46,6 +46,9 @@
> #define ENET_ENET_OPD_V 0xFFF0
> #define ENET_MDIO_PM_TIMEOUT 100 /* ms */
>
> +/* Extended buffer descriptor */
> +#define ENETFEC_EXTENDED_BD 0
> +
> int enetfec_logtype_pmd;
>
> /* Supported Rx offloads */
> @@ -61,6 +64,50 @@ static uint64_t dev_tx_offloads_sup =
> DEV_TX_OFFLOAD_UDP_CKSUM |
> DEV_TX_OFFLOAD_TCP_CKSUM;
>
> +static void enet_free_buffers(struct rte_eth_dev *dev)
This should be part of previous patch 3/4.
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i, q;
> + struct rte_mbuf *mbuf;
> + struct bufdesc *bdp;
> + struct enetfec_priv_rx_q *rxq;
> + struct enetfec_priv_tx_q *txq;
> +
> + for (q = 0; q < dev->data->nb_rx_queues; q++) {
> + rxq = fep->rx_queues[q];
> + bdp = rxq->bd.base;
> + for (i = 0; i < rxq->bd.ring_size; i++) {
> + mbuf = rxq->rx_mbuf[i];
> + rxq->rx_mbuf[i] = NULL;
> + if (mbuf)
Compare vs NULL
> + rte_pktmbuf_free(mbuf);
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + }
> + }
> +
> + for (q = 0; q < dev->data->nb_tx_queues; q++) {
> + txq = fep->tx_queues[q];
> + bdp = txq->bd.base;
> + for (i = 0; i < txq->bd.ring_size; i++) {
> + mbuf = txq->tx_mbuf[i];
> + txq->tx_mbuf[i] = NULL;
> + if (mbuf)
Compare vs NULL
> + rte_pktmbuf_free(mbuf);
> + }
> + }
> +}
> +
> +static void enet_free_queue(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + unsigned int i;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++)
> + rte_free(fep->rx_queues[i]);
> + for (i = 0; i < dev->data->nb_tx_queues; i++)
> + rte_free(fep->rx_queues[i]);
> +}
> +
> /*
> * This function is called to start or restart the FEC during a link
> * change, transmit timeout or to reconfigure the FEC. The network
> @@ -189,7 +236,6 @@ enetfec_eth_open(struct rte_eth_dev *dev)
> return 0;
> }
>
> -
> static int
> enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev)
> {
> @@ -395,12 +441,137 @@ enetfec_rx_queue_setup(struct rte_eth_dev *dev,
> return -1;
> }
>
> +static int
> +enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + uint32_t tmp;
> +
> + tmp = rte_read32(fep->hw_baseaddr_v + ENET_RCR);
> + tmp |= 0x8;
> + tmp &= ~0x2;
> + rte_write32(tmp, fep->hw_baseaddr_v + ENET_RCR);
> +
> + return 0;
> +}
> +
> +static int
> +enetfec_eth_link_update(__rte_unused struct rte_eth_dev *dev,
> + __rte_unused int wait_to_complete)
> +{
> + return 0;
> +}
> +
Remove unimplemented functions.
> +static int
> +enetfec_stats_get(struct rte_eth_dev *dev,
> + struct rte_eth_stats *stats)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> + struct rte_eth_stats *eth_stats = &fep->stats;
> +
> + if (stats == NULL)
> + return -1;
> +
> + memset(stats, 0, sizeof(struct rte_eth_stats));
> +
> + stats->ipackets = eth_stats->ipackets;
> + stats->ibytes = eth_stats->ibytes;
> + stats->ierrors = eth_stats->ierrors;
> + stats->opackets = eth_stats->opackets;
> + stats->obytes = eth_stats->obytes;
> + stats->oerrors = eth_stats->oerrors;
> +
> + return 0;
> +}
> +
> +static void
> +enetfec_stop(__rte_unused struct rte_eth_dev *dev)
> +{
> +/*TODO*/
> +}
Remove unimplemented functions.
> +
> +static int
> +enetfec_eth_close(__rte_unused struct rte_eth_dev *dev)
> +{
> + /* phy_stop(ndev->phydev); */
> + enetfec_stop(dev);
Not supported as of now.
> + /* phy_disconnect(ndev->phydev); */
> +
> + enet_free_buffers(dev);
This should be part of previous patch 3/4.
> + return 0;
> +}
> +
> +static uint16_t
> +enetfec_dummy_xmit_pkts(__rte_unused void *tx_queue,
> + __rte_unused struct rte_mbuf **tx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + return 0;
> +}
> +
> +static uint16_t
> +enetfec_dummy_recv_pkts(__rte_unused void *rxq,
> + __rte_unused struct rte_mbuf **rx_pkts,
> + __rte_unused uint16_t nb_pkts)
> +{
> + return 0;
> +}
> +
> +static int
> +enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev)
> +{
> + dev->rx_pkt_burst = &enetfec_dummy_recv_pkts;
> + dev->tx_pkt_burst = &enetfec_dummy_xmit_pkts;
> +
> + return 0;
> +}
> +
> +static int
> +enetfec_multicast_enable(struct rte_eth_dev *dev)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> +
> + rte_write32(0xffffffff, fep->hw_baseaddr_v + ENET_GAUR);
> + rte_write32(0xffffffff, fep->hw_baseaddr_v + ENET_GALR);
> + dev->data->all_multicast = 1;
> +
> + rte_write32(0x04400002, fep->hw_baseaddr_v + ENET_GAUR);
> + rte_write32(0x10800049, fep->hw_baseaddr_v + ENET_GALR);
> +
> + return 0;
> +}
Should be part of features addition patch.
> +
> +/* Set a MAC change in hardware. */
> +static int
> +enetfec_set_mac_address(struct rte_eth_dev *dev,
> + struct rte_ether_addr *addr)
> +{
> + struct enetfec_private *fep = dev->data->dev_private;
> +
> + writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) |
> + (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24),
> + fep->hw_baseaddr_v + ENET_PALR);
> + writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24),
> + fep->hw_baseaddr_v + ENET_PAUR);
> +
> + rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
> +
> + return 0;
> +}
> +
> static const struct eth_dev_ops ops = {
> .dev_start = enetfec_eth_open,
> + .dev_stop = enetfec_eth_stop,
> + .dev_close = enetfec_eth_close,
> .dev_configure = enetfec_eth_configure,
> .dev_infos_get = enetfec_eth_info,
> .rx_queue_setup = enetfec_rx_queue_setup,
> .tx_queue_setup = enetfec_tx_queue_setup,
> + .link_update = enetfec_eth_link_update,
Not supported as of now.
> + .mac_addr_set = enetfec_set_mac_address,
> + .stats_get = enetfec_stats_get,
> + .promiscuous_enable = enetfec_promiscuous_enable,
Please implement enetfec_promiscuous_disable() as well.
> + .allmulticast_enable = enetfec_multicast_enable
> };
>
> static int
> @@ -434,6 +605,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> struct enetfec_private *fep;
> const char *name;
> int rc = -1;
> + struct rte_ether_addr macaddr;
> int i;
> unsigned int bdsize;
>
> @@ -474,12 +646,37 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
> fep->bd_addr_p = fep->bd_addr_p + bdsize;
> }
>
> + /* Copy the station address into the dev structure, */
> + dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0);
> + if (dev->data->mac_addrs == NULL) {
> + ENET_PMD_ERR("Failed to allocate mem %d to store MAC addresses",
> + ETHER_ADDR_LEN);
> + rc = -ENOMEM;
> + goto err;
> + }
> +
> + /* TODO get mac address from device tree or get random addr.
> + * Currently setting default as 1,1,1,1,1,1
> + */
> + macaddr.addr_bytes[0] = 1;
> + macaddr.addr_bytes[1] = 1;
> + macaddr.addr_bytes[2] = 1;
> + macaddr.addr_bytes[3] = 1;
> + macaddr.addr_bytes[4] = 1;
> + macaddr.addr_bytes[5] = 1;
> +
> + enetfec_set_mac_address(dev, &macaddr);
> + /* enable the extended buffer mode */
> + fep->bufdesc_ex = ENETFEC_EXTENDED_BD;
> +
> rc = enetfec_eth_init(dev);
> if (rc)
> goto failed_init;
> return 0;
> failed_init:
> ENET_PMD_ERR("Failed to init");
> +err:
> + rte_eth_dev_release_port(dev);
> return rc;
> }
>
> @@ -487,15 +684,28 @@ static int
> pmd_enetfec_remove(struct rte_vdev_device *vdev)
> {
> struct rte_eth_dev *eth_dev = NULL;
> + struct enetfec_private *fep;
> + struct enetfec_priv_rx_q *rxq;
>
> /* find the ethdev entry */
> eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
> if (!eth_dev)
> return -ENODEV;
>
> + fep = eth_dev->data->dev_private;
> + /* Free descriptor base of first RX queue as it was configured
> + * first in enetfec_eth_init().
> + */
> + rxq = fep->rx_queues[0];
Although we are supporting only 1 queue as of now, but code should be
generic enough to handle multi-queue in future.
> + rte_free(rxq->bd.base);
Do we need similar handling for TX queues?
> + enet_free_queue(eth_dev);
> +
> + enetfec_eth_stop(eth_dev);
> rte_eth_dev_release_port(eth_dev);
>
> ENET_PMD_INFO("Closing sw device\n");
> + munmap(fep->hw_baseaddr_v, fep->cbus_size);
> +
> return 0;
> }
>
> diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
> new file mode 100644
> index 000000000..1b9b86c35
> --- /dev/null
> +++ b/drivers/net/enetfec/enet_rxtx.c
> @@ -0,0 +1,499 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2020 NXP
> + */
Please update Copyright syntax to current year.
> +
> +#include <signal.h>
> +#include <rte_mbuf.h>
> +#include <rte_io.h>
> +#include "enet_regs.h"
> +#include "enet_ethdev.h"
> +#include "enet_pmd_logs.h"
> +
> +#define ENETFEC_LOOPBACK 0
> +#define ENETFEC_DUMP 0
> +
> +static volatile bool lb_quit;
> +
> +#if ENETFEC_DUMP
> +static void
> +enet_dump(struct enetfec_priv_tx_q *txq)
Dump functions defined but never used. Either please try to use them at
appropriate places or remove.
> +{
> + struct bufdesc *bdp;
> + int index = 0;
> +
> + ENET_PMD_DEBUG("TX ring dump\n");
> + ENET_PMD_DEBUG("Nr SC addr len MBUF\n");
> +
> + bdp = txq->bd.base;
> + do {
> + ENET_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
> + index,
> + bdp == txq->bd.cur ? 'S' : ' ',
> + bdp == txq->dirty_tx ? 'H' : ' ',
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> + txq->tx_mbuf[index]);
> + bdp = enet_get_nextdesc(bdp, &txq->bd);
> + index++;
> + } while (bdp != txq->bd.base);
> +}
> +
> +static void
> +enet_dump_rx(struct enetfec_priv_rx_q *rxq)
Unused.
> +{
> + struct bufdesc *bdp;
> + int index = 0;
> +
> + ENET_PMD_DEBUG("RX ring dump\n");
> + ENET_PMD_DEBUG("Nr SC addr len MBUF\n");
> +
> + bdp = rxq->bd.base;
> + do {
> + ENET_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
> + index,
> + bdp == rxq->bd.cur ? 'S' : ' ',
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_sc)),
> + rte_read32(rte_le_to_cpu_32(&bdp->bd_bufaddr)),
> + rte_read16(rte_le_to_cpu_16(&bdp->bd_datlen)),
> + rxq->rx_mbuf[index]);
> + rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
> + rxq->rx_mbuf[index]->pkt_len);
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> + index++;
> + } while (bdp != rxq->bd.base);
> +}
> +#endif
> +
> +#if ENETFEC_LOOPBACK
> +static void fec_signal_handler(int signum)
> +{
> + if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
> + printf("\n\n %s: Signal %d received, preparing to exit...\n",
> + __func__, signum);
> + lb_quit = true;
> + }
> +}
> +
> +static void
> +enetfec_lb_rxtx(void *rxq1)
> +{
> + struct rte_mempool *pool;
> + struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
> + struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
> + unsigned short status;
> + unsigned short pkt_len = 0;
> + int index_r = 0, index_t = 0;
> + u8 *data;
> + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> + struct rte_eth_stats *stats = &rxq->fep->stats;
> + unsigned int i;
> + struct enetfec_private *fep;
> + struct enetfec_priv_tx_q *txq;
> + fep = rxq->fep->dev->data->dev_private;
> + txq = fep->tx_queues[0];
> +
> + pool = rxq->pool;
> + rx_bdp = rxq->bd.cur;
> + tx_bdp = txq->bd.cur;
> +
> + signal(SIGTSTP, fec_signal_handler);
> + while (!lb_quit) {
Compare vs NULL
> +chk_again:
> + status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
> + if (status & RX_BD_EMPTY) {
> + if (!lb_quit)
> + goto chk_again;
> + rxq->bd.cur = rx_bdp;
> + txq->bd.cur = tx_bdp;
> + return;
> + }
> +
> + /* Check for errors. */
> + status ^= RX_BD_LAST;
> + if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
> + RX_BD_CR | RX_BD_OV | RX_BD_LAST |
> + RX_BD_TR)) {
> + stats->ierrors++;
> + if (status & RX_BD_OV) {
> + /* FIFO overrun */
> + ENET_PMD_ERR("rx_fifo_error\n");
> + goto rx_processing_done;
> + }
> + if (status & (RX_BD_LG | RX_BD_SH
> + | RX_BD_LAST)) {
> + /* Frame too long or too short. */
> + ENET_PMD_ERR("rx_length_error\n");
> + if (status & RX_BD_LAST)
> + ENET_PMD_ERR("rcv is not +last\n");
> + }
> + /* CRC Error */
> + if (status & RX_BD_CR)
> + ENET_PMD_ERR("rx_crc_errors\n");
> +
> + /* Report late collisions as a frame error. */
> + if (status & (RX_BD_NO | RX_BD_TR))
> + ENET_PMD_ERR("rx_frame_error\n");
> + mbuf = NULL;
> + goto rx_processing_done;
> + }
> +
> + new_mbuf = rte_pktmbuf_alloc(pool);
> + if (unlikely(!new_mbuf)) {
Compare vs NULL
> + stats->ierrors++;
> + break;
> + }
> + /* Process the incoming frame. */
> + pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
> +
> + /* shows data with respect to the data_off field. */
> + index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
> + mbuf = rxq->rx_mbuf[index_r];
> +
> + /* adjust pkt_len */
> + rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
> + if (rxq->fep->quirks & QUIRK_RACC)
> + rte_pktmbuf_adj(mbuf, 2);
> +
> + /* Replace Buffer in BD */
> + rxq->rx_mbuf[index_r] = new_mbuf;
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
> + &rx_bdp->bd_bufaddr);
> +
> +rx_processing_done:
> + /* when rx_processing_done clear the status flags
> + * for this buffer
> + */
> + status &= ~RX_BD_STATS;
> +
> + /* Mark the buffer empty */
> + status |= RX_BD_EMPTY;
> +
> + /* Make sure the updates to rest of the descriptor are
> + * performed before transferring ownership.
> + */
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
> +
> + /* Update BD pointer to next entry */
> + rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
> +
> + /* Doing this here will keep the FEC running while we process
> + * incoming frames.
> + */
> + rte_write32(0, rxq->bd.active_reg_desc);
> +
> + /* TX begins: First clean the ring then process packet */
> + index_t = enet_get_bd_index(tx_bdp, &txq->bd);
> + status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
> + if (status & TX_BD_READY)
> + stats->oerrors++;
> + break;
> + if (txq->tx_mbuf[index_t]) {
> + rte_pktmbuf_free(txq->tx_mbuf[index_t]);
> + txq->tx_mbuf[index_t] = NULL;
> + }
> +
> + if (mbuf == NULL)
> + continue;
> +
> + /* Fill in a Tx ring entry */
> + status &= ~TX_BD_STATS;
> +
> + /* Set buffer length and buffer pointer */
> + pkt_len = rte_pktmbuf_pkt_len(mbuf);
> + status |= (TX_BD_LAST);
> + data = rte_pktmbuf_mtod(mbuf, void *);
> +
> + for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
> + dcbf(data + i);
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> + &tx_bdp->bd_bufaddr);
> + rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
> +
> + /* Make sure the updates to rest of the descriptor are performed
> + * before transferring ownership.
> + */
> + status |= (TX_BD_READY | TX_BD_TC);
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
> +
> + /* Trigger transmission start */
> + rte_write32(0, txq->bd.active_reg_desc);
> +
> + /* Save mbuf pointer to clean later */
> + txq->tx_mbuf[index_t] = mbuf;
> +
> + /* If this was the last BD in the ring, start at the
> + * beginning again.
> + */
> + tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
> + }
> +}
> +#endif
> +
> +/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
> + * When update through the ring, just set the empty indicator.
> + */
> +uint16_t
> +enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + struct rte_mempool *pool;
> + struct bufdesc *bdp;
> + struct rte_mbuf *mbuf, *new_mbuf = NULL;
> + unsigned short status;
> + unsigned short pkt_len;
> + int pkt_received = 0, index = 0;
> + void *data, *mbuf_data;
> + uint16_t vlan_tag;
> + struct bufdesc_ex *ebdp = NULL;
> + bool vlan_packet_rcvd = false;
> + struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
> + struct rte_eth_stats *stats = &rxq->fep->stats;
> + struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
> + uint64_t rx_offloads = eth_conf->rxmode.offloads;
> + pool = rxq->pool;
> + bdp = rxq->bd.cur;
> +#if ENETFEC_LOOPBACK
> + enetfec_lb_rxtx(rxq1);
> +#endif
> + /* Process the incoming packet */
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> + while (!(status & RX_BD_EMPTY)) {
> + if (pkt_received >= nb_pkts)
> + break;
> +
> + new_mbuf = rte_pktmbuf_alloc(pool);
> + if (unlikely(!new_mbuf)) {
Compare vs NULL. Please handle similar issues at all places.
> + stats->ierrors++;
> + break;
> + }
> + /* Check for errors. */
> + status ^= RX_BD_LAST;
> + if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
> + RX_BD_CR | RX_BD_OV | RX_BD_LAST |
> + RX_BD_TR)) {
> + stats->ierrors++;
> + if (status & RX_BD_OV) {
> + /* FIFO overrun */
> + /* enet_dump_rx(rxq); */
> + ENET_PMD_ERR("rx_fifo_error\n");
> + goto rx_processing_done;
> + }
> + if (status & (RX_BD_LG | RX_BD_SH
> + | RX_BD_LAST)) {
> + /* Frame too long or too short. */
> + ENET_PMD_ERR("rx_length_error\n");
> + if (status & RX_BD_LAST)
> + ENET_PMD_ERR("rcv is not +last\n");
> + }
> + if (status & RX_BD_CR) { /* CRC Error */
> + ENET_PMD_ERR("rx_crc_errors\n");
> + }
> + /* Report late collisions as a frame error. */
> + if (status & (RX_BD_NO | RX_BD_TR))
> + ENET_PMD_ERR("rx_frame_error\n");
> + goto rx_processing_done;
> + }
> +
> + /* Process the incoming frame. */
> + stats->ipackets++;
> + pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
> + stats->ibytes += pkt_len;
> +
> + /* shows data with respect to the data_off field. */
> + index = enet_get_bd_index(bdp, &rxq->bd);
> + mbuf = rxq->rx_mbuf[index];
> +
> + data = rte_pktmbuf_mtod(mbuf, uint8_t *);
> + mbuf_data = data;
> + rte_prefetch0(data);
> + rte_pktmbuf_append((struct rte_mbuf *)mbuf,
> + pkt_len - 4);
> +
> + if (rxq->fep->quirks & QUIRK_RACC)
> + data = rte_pktmbuf_adj(mbuf, 2);
> +
> + rx_pkts[pkt_received] = mbuf;
> + pkt_received++;
> +
> + /* Extract the enhanced buffer descriptor */
> + ebdp = NULL;
> + if (rxq->fep->bufdesc_ex)
> + ebdp = (struct bufdesc_ex *)bdp;
> +
> + /* If this is a VLAN packet remove the VLAN Tag */
> + vlan_packet_rcvd = false;
> + if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
> + rxq->fep->bufdesc_ex &&
> + (rte_read32(&ebdp->bd_esc) &
> + rte_cpu_to_le_32(BD_ENET_RX_VLAN))) {
> + /* Push and remove the vlan tag */
> + struct rte_vlan_hdr *vlan_header =
> + (struct rte_vlan_hdr *)(data + ETH_HLEN);
> + vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
> +
> + vlan_packet_rcvd = true;
> + memmove(mbuf_data + VLAN_HLEN, data, ETH_ALEN * 2);
> + rte_pktmbuf_adj(mbuf, VLAN_HLEN);
> + }
> +
> + /* Get receive timestamp from the mbuf */
> + if (rxq->fep->hw_ts_rx_en && rxq->fep->bufdesc_ex)
> + mbuf->timestamp =
> + rte_le_to_cpu_32(rte_read32(&ebdp->ts));
> +
> + if (rxq->fep->bufdesc_ex &&
> + (rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
> + if (!(rte_read32(&ebdp->bd_esc) &
> + rte_cpu_to_le_32(RX_FLAG_CSUM_ERR))) {
> + /* don't check it */
> + mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
> + } else {
> + mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
> + }
> + }
> +
> + /* Handle received VLAN packets */
> + if (vlan_packet_rcvd) {
> + mbuf->vlan_tci = vlan_tag;
> + mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
> + }
> +
> + rxq->rx_mbuf[index] = new_mbuf;
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
> + &bdp->bd_bufaddr);
> +rx_processing_done:
> + /* when rx_processing_done clear the status flags
> + * for this buffer
> + */
> + status &= ~RX_BD_STATS;
> +
> + /* Mark the buffer empty */
> + status |= RX_BD_EMPTY;
> +
> + if (rxq->fep->bufdesc_ex) {
> + struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
> + rte_write32(rte_cpu_to_le_32(RX_BD_INT),
> + &ebdp->bd_esc);
> + rte_write32(0, &ebdp->bd_prot);
> + rte_write32(0, &ebdp->bd_bdu);
> + }
> +
> + /* Make sure the updates to rest of the descriptor are
> + * performed before transferring ownership.
> + */
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
> +
> + /* Update BD pointer to next entry */
> + bdp = enet_get_nextdesc(bdp, &rxq->bd);
> +
> + /* Doing this here will keep the FEC running while we process
> + * incoming frames.
> + */
> + rte_write32(0, rxq->bd.active_reg_desc);
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> + }
> + rxq->bd.cur = bdp;
> + return pkt_received;
> +}
> +
> +uint16_t
> +enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> +{
> + struct enetfec_priv_tx_q *txq =
> + (struct enetfec_priv_tx_q *)tx_queue;
> + struct rte_eth_stats *stats = &txq->fep->stats;
> + struct bufdesc *bdp, *last_bdp;
> + struct rte_mbuf *mbuf;
> + unsigned short status;
> + unsigned short buflen;
> + unsigned int index, estatus = 0;
> + unsigned int i, pkt_transmitted = 0;
> + u8 *data;
> + int tx_st = 1;
> +
> + while (tx_st) {
> + if (pkt_transmitted >= nb_pkts) {
> + tx_st = 0;
> + break;
> + }
> + bdp = txq->bd.cur;
> + /* First clean the ring */
> + index = enet_get_bd_index(bdp, &txq->bd);
> + status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
> +
> + if (status & TX_BD_READY) {
> + stats->oerrors++;
> + break;
> + }
> + if (txq->tx_mbuf[index]) {
Compare vs NULL
> + rte_pktmbuf_free(txq->tx_mbuf[index]);
> + txq->tx_mbuf[index] = NULL;
> + }
> +
> + mbuf = *(tx_pkts);
> + tx_pkts++;
> +
> + /* Fill in a Tx ring entry */
> + last_bdp = bdp;
> + status &= ~TX_BD_STATS;
> +
> + /* Set buffer length and buffer pointer */
> + buflen = rte_pktmbuf_pkt_len(mbuf);
> + stats->opackets++;
> + stats->obytes += buflen;
> +
> + if (mbuf->nb_segs > 1) {
> + ENET_PMD_DEBUG("SG not supported");
> + return -1;
> + }
> + status |= (TX_BD_LAST);
> + data = rte_pktmbuf_mtod(mbuf, void *);
> + for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
> + dcbf(data + i);
> +
> + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
> + &bdp->bd_bufaddr);
> + rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
> +
> + if (txq->fep->bufdesc_ex) {
> + struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
> +
> + if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
> + estatus |= TX_BD_PINS | TX_BD_IINS;
> +
> + rte_write32(0, &ebdp->bd_bdu);
> + rte_write32(rte_cpu_to_le_32(estatus),
> + &ebdp->bd_esc);
> + }
> +
> + index = enet_get_bd_index(last_bdp, &txq->bd);
> + /* Save mbuf pointer */
> + txq->tx_mbuf[index] = mbuf;
> +
> + /* Make sure the updates to rest of the descriptor are performed
> + * before transferring ownership.
> + */
> + status |= (TX_BD_READY | TX_BD_TC);
> + rte_wmb();
> + rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
> +
> + /* Trigger transmission start */
> + rte_write32(0, txq->bd.active_reg_desc);
> + pkt_transmitted++;
> +
> + /* If this was the last BD in the ring, start at the
> + * beginning again.
> + */
> + bdp = enet_get_nextdesc(last_bdp, &txq->bd);
> +
> + /* Make sure the update to bdp and tx_skbuff are performed
> + * before txq->bd.cur.
> + */
> + txq->bd.cur = bdp;
> + }
> + return nb_pkts;
> +}
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2021-07-05 8:48 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-30 4:34 [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Apeksha Gupta
2021-04-30 4:34 ` [dpdk-dev] [PATCH 1/4] drivers/net/enetfec: Introduce " Apeksha Gupta
2021-06-08 13:10 ` Andrew Rybchenko
2021-07-02 13:55 ` David Marchand
2021-07-04 2:57 ` Sachin Saxena (OSS)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 2/4] drivers/net/enetfec: UIO support added Apeksha Gupta
2021-06-08 13:21 ` Andrew Rybchenko
2021-07-04 4:27 ` Sachin Saxena (OSS)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 3/4] drivers/net/enetfec: queue configuration Apeksha Gupta
2021-06-08 13:38 ` Andrew Rybchenko
2021-07-04 6:46 ` Sachin Saxena (OSS)
2021-04-30 4:34 ` [dpdk-dev] [PATCH 4/4] drivers/net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-06-08 13:42 ` Andrew Rybchenko
2021-06-21 9:14 ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-07-05 8:48 ` [dpdk-dev] " Sachin Saxena (OSS)
2021-05-04 15:40 ` [dpdk-dev] [PATCH 0/4] drivers/net: add NXP ENETFEC driver Ferruh Yigit
2021-07-04 2:55 ` Sachin Saxena (OSS)
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).