DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Apeksha Gupta <apeksha.gupta@nxp.com>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>,
	"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Sachin Saxena <sachin.saxena@nxp.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>
Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v3 1/5] net/enetfec: introduce NXP ENETFEC driver
Date: Fri, 1 Oct 2021 15:45:23 +0100
Message-ID: <f3c7e440-bc61-6ab3-3de4-b79753f77b4b@intel.com>
In-Reply-To: <VI1PR04MB68160BB7E47A570F1BBA060DEFAB9@VI1PR04MB6816.eurprd04.prod.outlook.com>

On 10/1/2021 11:22 AM, Apeksha Gupta wrote:
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Thursday, September 23, 2021 3:40 PM
>> To: Apeksha Gupta <apeksha.gupta@nxp.com>;
>> david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru;
>> ferruh.yigit@intel.com
>> Cc: dev@dpdk.org; Sachin Saxena <sachin.saxena@nxp.com>; Hemant
>> Agrawal <hemant.agrawal@nxp.com>
>> Subject: [EXT] Re: [dpdk-dev] [PATCH v3 1/5] net/enetfec: introduce NXP
>> ENETFEC driver
>>
>> Caution: EXT Email
>>
>> On 9/9/2021 9:43 PM, Apeksha Gupta wrote:
>>> ENETFEC (Fast Ethernet Controller) is a network poll mode driver
>>> for the NXP i.MX 8M Mini SoC.
>>>
>>
>> Hi Apeksha,
>>
>> Before going into details, I have some high-level comments to start with;
>> please find the comments below.
>>
>>> This patch adds the skeleton for the enetfec driver with a probe function.
>>>
>>> Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
>>> Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>>
>> <...>
>>
>>> +++ b/doc/guides/nics/enetfec.rst
>>> @@ -0,0 +1,122 @@
>>> +.. SPDX-License-Identifier: BSD-3-Clause
>>> +   Copyright 2021 NXP
>>> +
>>> +ENETFEC Poll Mode Driver
>>> +========================
>>> +
>>> +The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
>>> +support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
>>> +
>>> +More information can be found at NXP Official Website
>>> +<https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-mini-arm-cortex-a53-cortex-m4-audio-voice-video:i.MX8MMINI>
>>> +
>>> +ENETFEC
>>> +-------
>>> +
>>> +This section provides an overview of the NXP ENETFEC and how it is
>>> +integrated into the DPDK.
>>> +
>>> +Contents summary
>>> +
>>> +- ENETFEC overview
>>> +- ENETFEC features
>>> +- Supported ENETFEC SoCs
>>> +- Prerequisites
>>> +- Driver compilation and testing
>>> +- Limitations
>>> +
>>> +ENETFEC Overview
>>> +~~~~~~~~~~~~~~~~
>>> +The i.MX 8M Mini Media Applications Processor is built to achieve both
>>> +high performance and low power consumption. ENETFEC is a hardware
>>> +programmable packet forwarding engine that provides a high performance
>>> +Ethernet interface.
>>
>> It has a 1 Gbps interface, right? It would be good to give more details on
>> the Ethernet interface.
> [Apeksha] Okay.
>>
>>> +The diagram below shows a system level overview of ENETFEC:
>>> +
>>> + ====================================================+===============
>>> +   US   +-----------------------------------------+    | Kernel Space
>>> +     |                                         |    |
>>> +     |               ENETFEC Driver            |    |
>>> +     +-----------------------------------------+    |
>>> +                       ^   |                        |
>>> +   ENETFEC         RXQ |   | TXQ                    |
>>> +   PMD                 |   |                        |
>>> +                       |   v                        |   +----------+
>>> +                  +-------------+                   |   | fec-uio  |
>>> +                  | net_enetfec |                   |   +----------+
>>> +                  +-------------+                   |
>>> +                       ^   |                        |
>>> +                   TXQ |   | RXQ                    |
>>> +                       |   |                        |
>>> +                       |   v                        |
>>> + ====================================================+===============
>>> +      +----------------------------------------+
>>> +      |                                        |       HW
>>> +      |           i.MX 8M MINI EVK             |
>>> +      |               +-----+                  |
>>> +      |               | MAC |                  |
>>> +      +---------------+-----+------------------+
>>> +                      | PHY |
>>> +                      +-----+
>>> +
>>> +The ENETFEC Ethernet driver is a traditional DPDK PMD running in userspace.
>>> +The MAC and PHY are the hardware blocks. 'fec-uio' is the UIO driver; the
>>> +ENETFEC PMD uses the UIO interface to interact with the kernel for PHY
>>> +initialisation and to map the register and buffer descriptor (BD) memory
>>> +allocated in the kernel into DPDK, which gives access to non-cacheable
>>> +memory for the BDs.
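
For readers unfamiliar with UIO, the mapping described in the paragraph above
boils down to mmap()ing a UIO device region into the process. A minimal sketch
follows; the device path '/dev/uio0' and the 64 KB region size are placeholders
for illustration, not the actual fec-uio layout.

/* Minimal sketch: map one region of a UIO device into userspace.
 * The device path and region size are placeholders, not the actual
 * fec-uio layout.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void *
map_uio_region(const char *devpath, size_t len, unsigned int region)
{
        void *base;
        int fd;

        fd = open(devpath, O_RDWR);
        if (fd < 0) {
                perror("open");
                return NULL;
        }

        /* UIO exposes map N of a device at file offset N * page size. */
        base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
                    (off_t)region * sysconf(_SC_PAGESIZE));
        close(fd); /* the mapping remains valid after close() */
        if (base == MAP_FAILED) {
                perror("mmap");
                return NULL;
        }
        return base;
}

int
main(void)
{
        /* hypothetical 64 KB register window exported as region 0 */
        void *regs = map_uio_region("/dev/uio0", 0x10000, 0);

        if (regs != NULL)
                printf("registers mapped at %p\n", regs);
        return regs != NULL ? 0 : 1;
}

Since each UIO map N lives at file offset N * page size, a register window and
a BD area could presumably be exported as separate regions of one UIO device.
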
>>
>> Why is a specific uio driver, 'fec-uio', required? I think this is the major
>> issue to clarify in order to proceed.
>>
>> In DPDK we have a full framework to support uio, which does all the memory
>> mapping, interrupt configuration, etc., common to all drivers.
>> But in this case the driver is implemented as a virtual driver and handles
>> its uio setup itself. Why can't the driver use the existing support and be
>> implemented as a physical driver?
> [Apeksha] Yes, you are correct. To our knowledge, the UIO framework is there
> for VM & PCI bus devices and not for vdev bus devices.

That is part of the comment: why is the driver implemented as a vdev instead
of a physical device?
What is the actual device bus?
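
For reference, a vdev PMD hooks into DPDK roughly as in the sketch below. It
uses DPDK's vdev bus API with placeholder names ('demo'); it is not the actual
enetfec code, only a generic illustration of the vdev model under discussion.

/* Generic sketch of a vdev PMD registration (placeholder names,
 * not the actual enetfec code).
 */
#include <rte_bus_vdev.h>

static int
demo_probe(struct rte_vdev_device *vdev)
{
        /* allocate the ethdev, set dev_ops, map device resources, ... */
        (void)vdev;
        return 0;
}

static int
demo_remove(struct rte_vdev_device *vdev)
{
        (void)vdev;
        return 0;
}

static struct rte_vdev_driver demo_drv = {
        .probe = demo_probe,
        .remove = demo_remove,
};

RTE_PMD_REGISTER_VDEV(net_demo, demo_drv);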

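By contrast, a PMD bound to a physical bus such as PCI can lean on the bus
framework for the resource mapping mentioned above. The sketch below uses
placeholder vendor/device IDs and is not a claim about how enetfec should be
structured; it only illustrates the existing uio/vfio support on the PCI bus.

/* Sketch of a PCI PMD relying on DPDK's existing uio/vfio support.
 * Vendor/device IDs are placeholders.
 */
#include <rte_bus_pci.h>

#define DEMO_VENDOR_ID 0x1234
#define DEMO_DEVICE_ID 0x5678

static const struct rte_pci_id demo_pci_ids[] = {
        { RTE_PCI_DEVICE(DEMO_VENDOR_ID, DEMO_DEVICE_ID) },
        { .vendor_id = 0 }, /* sentinel */
};

static int
demo_pci_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev)
{
        /* dev->mem_resource[0].addr is already mapped by the bus here */
        (void)drv;
        (void)dev;
        return 0;
}

static int
demo_pci_remove(struct rte_pci_device *dev)
{
        (void)dev;
        return 0;
}

static struct rte_pci_driver demo_pci_drv = {
        .id_table = demo_pci_ids,
        .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
        .probe = demo_pci_probe,
        .remove = demo_pci_remove,
};

RTE_PMD_REGISTER_PCI(net_demo_pci, demo_pci_drv);

With RTE_PCI_DRV_NEED_MAPPING set, the PCI bus code (via igb_uio,
uio_pci_generic, or vfio-pci) maps the BARs before probe() runs, so the driver
performs no mmap() of its own.
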

Thread overview: 14+ messages
2021-09-09 20:43 [dpdk-dev] [PATCH v3 0/5] drivers/net: add " Apeksha Gupta
2021-09-09 20:43 ` [dpdk-dev] [PATCH v3 1/5] net/enetfec: introduce " Apeksha Gupta
2021-09-23 10:09   ` Ferruh Yigit
2021-10-01 10:22     ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-10-01 14:45       ` Ferruh Yigit [this message]
2021-10-05  5:24         ` Apeksha Gupta
2021-10-05  9:29           ` Ferruh Yigit
2021-10-06  9:36             ` Apeksha Gupta
2021-09-09 20:43 ` [dpdk-dev] [PATCH v3 2/5] net/enetfec: add UIO support Apeksha Gupta
2021-09-23 10:13   ` Ferruh Yigit
2021-10-01 10:30     ` [dpdk-dev] [EXT] " Apeksha Gupta
2021-09-09 20:43 ` [dpdk-dev] [PATCH v3 3/5] net/enetfec: support queue configuration Apeksha Gupta
2021-09-09 20:43 ` [dpdk-dev] [PATCH v3 4/5] net/enetfec: add enqueue and dequeue support Apeksha Gupta
2021-09-09 20:43 ` [dpdk-dev] [PATCH v3 5/5] net/enetfec: add features Apeksha Gupta
