DPDK patches and discussions
From: "Yan, Zhirun" <zhirun.yan@intel.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"jerinj@marvell.com" <jerinj@marvell.com>,
	 "kirankumark@marvell.com" <kirankumark@marvell.com>,
	"Liang, Cunming" <cunming.liang@intel.com>,
	"Wang, Haiyue" <haiyue.wang@intel.com>
Subject: RE: [RFC, v1 0/6] graph enhancement for multi-core dispatch
Date: Fri, 30 Sep 2022 06:41:43 +0000	[thread overview]
Message-ID: <SJ0PR11MB5022B9C6CF77D81CFB78083385569@SJ0PR11MB5022.namprd11.prod.outlook.com> (raw)
In-Reply-To: <CALBAE1OAd2U7eA-Rb-uTCDDU_OoD8i2RTJnX8HZ+fVFqEjcbGA@mail.gmail.com>



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, September 20, 2022 5:33 PM
> To: Yan, Zhirun <zhirun.yan@intel.com>
> Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com; Liang,
> Cunming <cunming.liang@intel.com>; Wang, Haiyue
> <haiyue.wang@intel.com>
> Subject: Re: [RFC, v1 0/6] graph enhancement for multi-core dispatch
> 
> On Thu, Sep 8, 2022 at 7:40 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
> >
> > Currently, the rte_graph_walk() and rte_node_enqueue* fast path API
> > functions in the graph lib implementation are designed to work on a
> > single core.
> >
> > This RFC proposes a cross-core dispatching mechanism to enhance the
> > graph scaling strategy. We introduce a scheduler workqueue so that
> > streams can be dispatched directly to another worker core that has
> > affinity with a specific node.
> >
> > This RFC:
> >   1. Introduce core affinity API and graph clone API.
> >   2. Introduce key functions to enqueue/dequeue for dispatching streams.
> >   3. Enhance rte_graph_walk by cross-core dispatch.
> >   4. Add l2fwd-graph example and stats for cross-core dispatching.
> >
> > With this patch set, streams can easily be planned and orchestrated
> > on multi-core systems.
> >
> > Future work:
> >   1. Support affinity of one node to a set of lcores.
> >   2. Use l3fwd-graph instead of l2fwd-graph as example in patch 06.
> >   3. Add a new parameter, like --node(nodeid, lcoreid), to configure
> >   core affinity for a node.
> >
> > Comments and suggestions are welcome. Thanks!
> 
> Some top level comments.
> 
> 1) Yes, it makes sense not to create l2fwd-graph. Please enhance
> l3fwd-graph and compare the performance in multi-core scenarios.
> 
Thanks for your comments.
Yes, I will use l3fwd-graph and compare the performance in the next version.


> 2) It is good to have multiple graph walk schemes like the one you have
> introduced now, though I am not sure about the performance aspects,
> specifically when a node is used with multiple producers and multiple
> consumers.
> 
> If you have a use case for the new worker scheme, then we can add it.
> I think it would call for the following:
> 
> a) We need a separate rte_graph_worker.h for each implementation to
> avoid a performance impact on each other.
> That may boil down to:
> i) Create lib/graph/rte_graph_worker_common.h
> ii) Treat the existing rte_graph_worker.h as the default scheme and
> include rte_graph_worker_common.h
> iii) Add a new rte_graph_worker_xxx.h for the new scheme (the diff from
> the default worker), leveraging rte_graph_worker_common.h
> 
> Application can select the worker by
> 
> #define RTE_GRAPH_WORKER_MODEL_XXX
> //#define RTE_GRAPH_WORKER_MODEL_YYY
> #include <rte_graph_worker.h>
> 
Yes, I will break it down in the next version.

> b) Introduce a new enum rte_graph_model or so to express this new model
> and other models in the future.
> 
> c) Each core has its own node instance, so we don't need explicit critical
> section management when dealing with node instances.
> In this new scheme, can we leverage the existing node implementation?
> If not, we need separate node implementations for different graph models,
> which will be a maintenance issue.
> If we really need to take this path, each node probably needs to declare
> the models it supports (using enum rte_graph_model) as part of its
> capability.
> This can be used for sanity checking, e.g. verifying compatibility when
> cloning or creating a graph.
> I think this is the biggest issue with adding a new model: nodes need to
> be written based on the model. This could be the reason VPP has not
> added other models.
> 
I agree with you. We should leverage the existing node implementation.
Also, I think a node should be agnostic of the graph model.

There are different kinds of data (tables, state, etc.) referred to in a node.
Some of it is private to the node and has no constraint under any graph model.
Other data is shared within the graph, and access to it does have a
thread-safety prerequisite.
Instead of declaring the graph model, it actually makes more sense for the
node to declare the visibility of its data.
The model can then stay transparent to the node: when running on the
‘single-core’ model, shared access can simply fall back to zero-cost data
access.


> d) For all new slow-path APIs like rte_node_set_lcore_affinity and
> rte_graph_clone, we need to fix the namespace as
> rte_graph_model_<model_name>_<operation> or so, to make sure
> application writers understand these APIs are only for this model.
> (Also we can use enum rte_graph_model for sanity checks, etc.)
> 
Yes, if an operation is bound to one model, we need to add the namespace.

But rte_node_set_lcore_affinity() could be treated as a common API for
all models.

In the current single-core model, one graph is bound to one lcore, which
effectively makes an implicit call setting all nodes to the current lcore.
So I think rte_node_set_lcore_affinity() could be a common API.


Thread overview: 11+ messages
2022-09-08  2:09 Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 1/6] graph: introduce core affinity API into graph Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 2/6] graph: introduce graph clone API for other worker core Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 3/6] graph: enable stream moving cross cores Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 4/6] graph: enhance graph walk by cross-core dispatch Zhirun Yan
2022-09-08  5:27   ` [EXT] " Pavan Nikhilesh Bhagavatula
2022-09-15  1:52     ` Yan, Zhirun
2022-09-08  2:09 ` [RFC, v1 5/6] graph: add stats for cross-core dispatching Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 6/6] examples: add l2fwd-graph Zhirun Yan
2022-09-20  9:33 ` [RFC, v1 0/6] graph enhancement for multi-core dispatch Jerin Jacob
2022-09-30  6:41   ` Yan, Zhirun [this message]
