DPDK patches and discussions
From: "Mattias Rönnblom" <mattias.ronnblom@ericsson.com>
To: Thomas Monjalon <thomas@monjalon.net>,
	Jerin Jacob <jerinjacobk@gmail.com>
Cc: Jerin Jacob <jerinj@marvell.com>, Ray Kinsella <mdr@ashroe.eu>,
	dpdk-dev <dev@dpdk.org>, Prasun Kapoor <pkapoor@marvell.com>,
	Nithin Dabilpuram <ndabilpuram@marvell.com>,
	Kiran Kumar K <kirankumark@marvell.com>,
	Pavan Nikhilesh <pbhagavatula@marvell.com>,
	Narayana Prasad <pathreya@marvell.com>,
	 "nsaxena@marvell.com" <nsaxena@marvell.com>,
	"sshankarnara@marvell.com" <sshankarnara@marvell.com>,
	Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
	David Marchand <david.marchand@redhat.com>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	Andrew Rybchenko <arybchenko@solarflare.com>,
	Ajit Khaparde <ajit.khaparde@broadcom.com>,
	"Ye,  Xiaolong" <xiaolong.ye@intel.com>,
	Raslan Darawsheh <rasland@mellanox.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	Akhil Goyal <akhil.goyal@nxp.com>,
	Cristian Dumitrescu <cristian.dumitrescu@intel.com>,
	John McNamara <john.mcnamara@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	Anatoly Burakov <anatoly.burakov@intel.com>,
	Gavin Hu <gavin.hu@arm.com>,
	David Christensen <drc@linux.vnet.ibm.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	Pallavi Kadam <pallavi.kadam@intel.com>,
	Olivier Matz <olivier.matz@6wind.com>,
	Gage Eads <gage.eads@intel.com>,
	"Rao, Nikhil" <nikhil.rao@intel.com>,
	Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>,
	"Artem V. Andreev" <artem.andreev@oktetlabs.ru>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Shahaf Shuler <shahafs@mellanox.com>,
	"Wiles, Keith" <keith.wiles@intel.com>,
	Jasvinder Singh <jasvinder.singh@intel.com>,
	Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
	"techboard@dpdk.org" <techboard@dpdk.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	"dave@barachs.net" <dave@barachs.net>
Subject: Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
Date: Fri, 21 Feb 2020 15:38:54 +0000
Message-ID: <2433be82-b18a-3de2-35aa-35a5d06d481c@ericsson.com>
In-Reply-To: <8553959.CDJkKcVGEf@xps>

On 2020-02-21 12:10, Thomas Monjalon wrote:
> 21/02/2020 11:30, Jerin Jacob:
>> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>> Thanks for starting this discussion now; it is an interesting one.
>>> Some thoughts below.
>>> We can decide based on community consensus and follow a single rule
>>> across the components.
>> Thomas,
>>
>> No feedback yet on the below questions.
> Indeed. I was waiting for opinions from others.
>
>> If there is no consensus in the email thread, I would like to propose
>> this topic for the 26th Feb TB meeting.
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.
>
>
>>>> 17/02/2020 08:19, Jerin Jacob:
>>>>> I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
>>>>> the comments.
>>>>>
>>>>> Is anyone else planning to have an architecture level or API usage
>>>>> level review or any review of other top-level aspects?
>>>> If we add rte_graph to DPDK, we will have 2 similar libraries.
>>>>
>>>> I already proposed several times to move rte_pipeline in a separate
>>>> repository for two reasons:
>>>>          1/ it is acting at a higher API layer level
>>> We need to define what the higher-layer API is. Is it processing beyond L2?
> My opinion is that any API which is implemented differently
> for different hardware should be in DPDK.
> Hardware devices can offload protocol processing higher than L2,
> so L2 does not seem to be a good limit from my point of view.
>
If you assume the capabilities of networking hardware will grow, and you 
want to unify different networking hardware with varying capabilities 
(and also include software-only implementations) under one API, then you 
might well end up growing DPDK into the software stack you mention 
below. Soft implementations of complex protocols will require operating 
system-like support services: timers, RCU, various lock-less data 
structures, deferred-work mechanisms, counter-handling frameworks, 
control plane interfaces, etc. Coupling should always be avoided, of 
course, but DPDK would inevitably no longer be a pick-and-choose 
smörgåsbord library - at least as long as the consumer wants to utilize 
this higher-layer functionality.

This would make DPDK more of a packet-processing runtime, or a 
special-purpose networking operating system, than the "bunch of 
Ethernet drivers in user space" it started out as.

I'm not saying that's a bad thing. In fact, I think it sounds like an 
interesting option, although also a very challenging one. From what I 
can see, DPDK has already set out along this route. Whether this is a 
conscious decision or not, I don't know. Add to this that if Linux 
expands further with AF_XDP-like features, beyond simply packet I/O, it 
might not only try to take over DPDK's original concerns, but also more 
of the current ones.

>>> In the context of the graph library: it is a framework that does not
>>> use any subsystem API other than EAL, and it is under lib/librte_graph.
>>> The nodes library uses the graph library and other subsystem
>>> components such as ethdev, and it is under lib/librte_node/.
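
For concreteness, here is a minimal sketch of how I read the intended 
usage from the patches: select registered nodes by pattern, build a 
per-lcore graph, and walk it. Names and signatures are taken from my 
reading of the RFC and may not match the final API:

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_graph.h>
#include <rte_graph_worker.h>

static const char *node_patterns[] = {
	"ethdev_rx-*", "ip4_lookup", "ip4_rewrite", "pkt_drop",
};

static int
worker_main(void *arg __rte_unused)
{
	struct rte_graph_param prm = {
		.socket_id = SOCKET_ID_ANY,
		.nb_node_patterns = RTE_DIM(node_patterns),
		.node_patterns = node_patterns,
	};
	struct rte_graph *graph;

	/* Instantiate a graph from the registered nodes matching the
	 * patterns, then look up the runtime object and walk it. */
	if (rte_graph_create("worker0", &prm) == RTE_GRAPH_ID_INVALID)
		return -1;

	graph = rte_graph_lookup("worker0");
	if (graph == NULL)
		return -1;

	for (;;)
		rte_graph_walk(graph);

	return 0;
}
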
>>>
>>>
>>> Another interesting question would be: what would be the issue with
>>> DPDK supporting protocols beyond L2, or higher-level protocols?
> Definitely, higher than L2 is OK in DPDK, as long as it is related to
> hardware capabilities, not to a software stack (which can be a DPDK
> application).
>
>
>>>>          2/ there can be different solutions in this layer
>>> Is there any issue with that?
>>> There is overlap with the distributor library and eventdev as well,
>>> and with the ethdev and SW traffic manager libraries. The list goes on.
> I don't know how much it is an issue.
> But I think it shows that at least one implementation is not generic enough.
>
>
>>>> I think 1/ was commonly agreed in the community.
>>>> Now we see one more proof of reason 2/.
>>>>
>>>> I believe it is time to move rte_pipeline (Packet Framework)
>>>> into a separate repository, and to welcome rte_graph in another
>>>> separate repository.
>>> What would be the gain from this?
> The gain is to be clear about what should be the focus for contributors
> working on the main DPDK repository.
> What is expected to be maintained, tested, etc.
>
>
>>> My concerns are:
>>> # Like packet-gen, the new code will be filled with unnecessary DPDK
>>> version checks and compatibility issues.
>>> # Anything that is not in the main DPDK repo is a second-class citizen.
>>> # Customers have the pain of using two repos and two releases.
>>> Internally, it can be two different repos, but the release needs to go
>>> through one repo.
>>>
>>> If we are focusing ONLY on the driver API, then how can DPDK grow
>>> further? If the Linux kernel had been limited to just the core kernel,
>>> with networking/storage in different repos, would it have grown the
>>> way it did?
> The Linux kernel selects what can enter its focus and what cannot.
> And I wonder: what is the desire behind extending/growing the scope of
> a library?
>
>
>>> What is the real concern? Maintenance?
>>>
>>>> I think the original DPDK repository should focus on low-level features
>>>> which offer hardware offloads and optimizations.
>>> The nodes can be vendor-specific, to optimize specific use cases.
>>> As I mentioned in the cover letter,
>>>
>>> "
>>> 2) Based on our experience, NPU HW accelerators are so different from
>>> one vendor to another. Going forward, we believe API abstraction may
>>> not be enough to abstract the differences in HW. Vendor-specific nodes
>>> can abstract the HW differences and reuse the generic nodes as needed.
>>> This would help both the silicon vendors and DPDK end users.
>>> "
>>>
>>> Thoughts from other folks?
>>>
>>>
>>>> Consuming the low-level API in different abstractions,
>>>> and building applications, should be done on top of dpdk.git.
>
>


Thread overview: 31+ messages
2020-01-31 17:01 jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
2020-02-02 10:34   ` Stephen Hemminger
2020-02-02 10:35   ` Stephen Hemminger
2020-02-02 11:08     ` Jerin Jacob
2020-02-02 10:38   ` Stephen Hemminger
2020-02-02 11:21     ` Jerin Jacob
2020-02-03  9:14       ` Gaetan Rivet
2020-02-03  9:49         ` Jerin Jacob
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 2/5] node: add packet processing nodes jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 3/5] test: add graph functional tests jerinj
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 4/5] test: add graph performance test cases jerinj
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 5/5] example/l3fwd_graph: l3fwd using graph architecture jerinj
2020-01-31 18:34 ` [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem Ray Kinsella
2020-02-01  5:44   ` Jerin Jacob
2020-02-17  7:19     ` Jerin Jacob
2020-02-17  8:38       ` Thomas Monjalon
2020-02-17 10:58         ` Jerin Jacob
2020-02-21 10:30           ` Jerin Jacob
2020-02-21 11:10             ` Thomas Monjalon
2020-02-21 15:38               ` Mattias Rönnblom [this message]
2020-02-21 15:53                 ` dave
2020-02-21 16:04                   ` Thomas Monjalon
2020-02-21 15:56               ` Jerin Jacob
2020-02-21 16:14                 ` Thomas Monjalon
2020-02-22  9:05                   ` Jerin Jacob
2020-02-22  9:52                     ` Thomas Monjalon
2020-02-22 10:24                       ` Jerin Jacob
2020-02-24 10:59                         ` Ray Kinsella
2020-02-25  5:22 ` Honnappa Nagarahalli
2020-02-25  6:14   ` Jerin Jacob
