* [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
@ 2020-01-31 17:01 jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
` (6 more replies)
0 siblings, 7 replies; 31+ messages in thread
From: jerinj @ 2020-01-31 17:01 UTC (permalink / raw)
To: dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
This RFC is targeted for v20.05 release.
This RFC patch includes an implementation of graph architecture for packet
processing using DPDK primitives.
Using graph traversal for packet processing is a proven architecture
that has been implemented in various open source libraries.
Graph architecture for packet processing abstracts the data processing
functions as “nodes” and “links” them together to create a complex “graph”
of reusable/modular data processing functions.
The RFC patch further brings performance and modularity improvements
to DPDK, as discussed in more detail below.
What this RFC patch contains:
-----------------------------
1) The API definition to "create" nodes and "link" them together to create a
"graph" for packet processing. See lib/librte_graph/rte_graph.h
2) The fast path API definition for the graph walker and the enqueue function
used by the workers. See lib/librte_graph/rte_graph_worker.h
3) Optimized SW implementation for (1) and (2). See lib/librte_graph/
4) Test case to verify the graph infrastructure functionality.
See app/test/test_graph.c
5) Performance test cases to evaluate the cost of the graph walker and node
enqueue fast-path functions for various combinations.
See app/test/test_graph_perf.c
6) Packet processing nodes (null, Rx, Tx, pkt drop, IPv4 rewrite, IPv4 lookup)
using the graph infrastructure. See lib/librte_node/*
7) An example application showcasing l3fwd (same functionality as the existing
examples/l3fwd) using the graph infrastructure and the packet processing
nodes from item (6). See examples/l3fwd-graph/.
Performance
-----------
1) Graph walk and node enqueue overhead can be measured with the performance
test case application [1].
# If all packets go from one node to the same next node (we call this the
"home run" case), it is just a pointer swap for a burst of packets.
# In the worst case, it is a handful of cycles to move an object from one
node to another.
2) Performance comparison of the existing l3fwd (fully static code without
any nodes) vs the modular l3fwd-graph with 5 nodes
(ip4_lookup, ip4_rewrite, ethdev_tx, ethdev_rx, pkt_drop).
Here is a graphical representation of the l3fwd-graph as a Graphviz dot file:
http://bit.ly/39UPPGm
# l3fwd-graph performance is -2.5% wrt static l3fwd.
# We simulated a similar test with the existing librte_pipeline application
[4]; the ip_pipeline application is -48.62% wrt static l3fwd.
The above results are from octeontx2; they may vary on other platforms.
Platforms with larger L1 and L2 caches should perform even better.
Tested architectures:
--------------------
1) AArch64
2) X86
Graph library Features
----------------------
1) Nodes as plugins
2) Support for out-of-tree nodes
3) Multi-process support
4) Low overhead graph walk and node enqueue
5) Low overhead statistics collection infrastructure
6) Support to export the graph as a Graphviz dot file.
See rte_graph_export()
Example of an exported graph: http://bit.ly/2PqbqOy
7) Room for alternative graph walk implementations in the future, by
segregating the fast-path and slow-path code.
Advantages of Graph architecture:
---------------------------------
1) Memory latency is the enemy of high-speed packet processing; grouping
similar packet processing code into a node reduces i-cache and d-cache misses.
2) Exploits the probability that most packets will follow the same nodes in
the graph.
3) Allows SIMD instructions in a node's packet processing.
4) The modular scheme allows having reusable nodes for consumers.
5) The modular scheme allows us to abstract vendor HW-specific optimizations
as nodes.
What is different than existing libpipeline library
---------------------------------------------------
At a very high level, libpipeline was created to provide a modular plugin
interface. Based on our analysis, performance is better with the graph model;
check the details under the Performance section, item (2).
This rte_graph implementation takes care of fixing some of the
architecture/implementation limitations of libpipeline:
1) Use cases like IP fragmentation or TCP ACK processing
(with new TCP data sent out in the same context)
are problematic, as rte_pipeline_run() passes just a 64-bit pkt_mask to the
different tables, and the packet pointers are stored in a single array in
struct rte_pipeline_run.
In the graph architecture, the node has complete control over how many
packets it outputs to the next node, seamlessly.
2) Since the pkt_mask is passed to different tables, it takes multiple for
loops to extract the packets out of a fragmented pkts_mask. This makes it
difficult to prefetch a set of packets ahead. This issue does not exist in
the graph architecture.
3) Every table has two/three function pointers, unlike the graph
architecture, which has a single function pointer per node.
4) The current libpipeline main fast-path function doesn't support a
tree-like topology where 64 packets can be redirected to 64 different tables.
It is currently limited to a table-based next table ID instead of a
per-packet, action-based next table ID. So in a typical case, we need to
cascade tables and sequentially go through all the tables to reach the last
table.
5) The 64-bit pkt_mask limits the maximum burst size possible;
the graph library supports up to 256.
In short, the two are significantly different architectures.
Keeping both in DPDK and allowing the end user to choose the model would be
the more appropriate decision.
Why this RFC
------------
1) We believe the graph architecture provides the best performance for a
reusable/modular packet processing framework.
Since DPDK does not have one, it is good to have it in DPDK.
2) Based on our experience, NPU HW accelerators differ greatly from one
vendor to another. Going forward, we believe API abstraction alone may not be
enough to abstract the differences in HW. Vendor-specific nodes can abstract
the HW differences and reuse the generic nodes as needed.
This would help both the silicon vendors and the DPDK end users.
3) The framework enables protocol stacks to use the native mbuf for graph
processing, avoiding any conversion between formats, for better performance.
4) DPDK is becoming the "goto library" for userspace HW acceleration, and it
is good to have a native graph packet processing library in DPDK.
5) Obviously, our customers are interested in a graph library in DPDK :-)
Identified tweaking for better performance on different targets
---------------------------------------------------------------
1) Test with various burst size values (256, 128, 64, 32) using the
CONFIG_RTE_GRAPH_BURST_SIZE config option.
Based on our testing on x86 and arm64 servers, the sweet spot is a burst size
of 256, while on arm64 embedded SoCs it is either 64 or 128.
2) Disable node statistics (using the CONFIG_RTE_LIBRTE_GRAPH_STATS config
option) if not needed.
3) Use the arm64 optimized memory copy on the arm64 architecture by
selecting CONFIG_RTE_ARCH_ARM64_MEMCPY.
Commands to run tests
---------------------
[1]
perf test:
echo "graph_perf_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
[2]
functionality test:
echo "graph_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
[3]
l3fwd-graph:
./l3fwd-graph -c 0x100 -- -p 0x3 --config="(0, 0, 8)" -P
[4]
# ./ip_pipeline -c 0xff0000 -- -s route.cli
route.cli (copy-paste into the shell to avoid DOS format issues):
https://pastebin.com/raw/B4Ktx7TT
Next steps
-----------------------------
1) Feedback from the community on the library.
2) Collect the API requirements from the community.
3) Send the next version, addressing the community's initial feedback and
fixing the identified "pending items" below.
Pending items (Will be addressed in next revision)
-------------------------------------------------
1) Add documentation as a patch
2) Add Doxygen API documentation
3) Split the patches at a more logical level for a better review
4) Code cleanup
5) More optimizations in the nodes and the graph infrastructure
Programming guide and API walk-through
--------------------------------------
# Anatomy of a node
~~~~~~~~~~~~~~~~~~~
See https://github.com/jerinjacobk/share/blob/master/Anatomy_of_a_node.svg
The above diagram depicts the anatomy of a node.
The node is the basic building block of the graph framework.
A node consists of:
a) process():
The callback function invoked by a worker thread using the rte_graph_walk()
function when there is data to be processed by the node.
A node processes the data in process() and enqueues it to the next
downstream node using the rte_node_enqueue*() functions.
b) Context memory:
Memory allocated by the library to store the node-specific context
information, which is used by the process(), init() and fini() callbacks.
c) init():
The callback function invoked by rte_graph_create() when a node
gets attached to a graph.
d) fini():
The callback function invoked by rte_graph_destroy() when a node
gets detached from a graph.
e) Node name:
The name of the node. When a node registers with the graph library, the
library assigns it an ID of rte_node_t type. Either the ID or the name can
be used to look up the node;
rte_node_from_name() and rte_node_id_to_name() are the node lookup functions.
f) nb_edges:
The number of downstream nodes connected to this node. next_nodes[] stores
the downstream node objects. The rte_node_edge_update() and
rte_node_edge_shrink() functions shall be used to update the next_nodes[]
objects. Consumers of the node APIs are free to update the next_nodes[]
objects until rte_graph_create() is invoked.
g) next_nodes[]:
The dynamic array that stores the downstream nodes connected to this node.
# Node creation and registration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
a) A node implementer creates the node by implementing the ops and attributes
of 'struct rte_node_register'.
b) The library registers the node by invoking RTE_NODE_REGISTER on library
load, using the constructor scheme.
The constructor scheme is used here to support multi-process.
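For illustration, a minimal sketch of such a registration, based on the
'struct rte_node_register' attributes in this patch; the node name, the edge
list and the process() body are purely illustrative:

#include <rte_graph.h>
#include <rte_graph_worker.h>

static uint16_t
my_node_process(struct rte_graph *graph, struct rte_node *node,
                void **objs, uint16_t nb_objs)
{
        /* Send the whole burst out on edge 0 ("pkt_drop") */
        rte_node_enqueue(graph, node, 0, objs, nb_objs);
        return nb_objs;
}

static struct rte_node_register my_node = {
        .name = "my_node",
        .process = my_node_process,
        .nb_edges = 1,
        .next_nodes = {"pkt_drop"},
};
/* Registered from a constructor on library load (multi-process friendly) */
RTE_NODE_REGISTER(my_node);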
# Link the Nodes to create the graph topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See https://github.com/jerinjacobk/share/blob/master/Link_the_nodes.svg
The above diagram shows a graph topology after linking N nodes.
Once nodes are available to the program, the application or the node public
API functions can link them together to create a complex packet processing
graph. There are multiple strategies for linking the nodes:
Method a) Provide the next_nodes[] at node registration time.
See 'struct rte_node_register::nb_edges'. This addresses the static node use
case, where the next_nodes[] of the node are known upfront.
Method b) Use rte_node_edge_get(), rte_node_edge_update() and
rte_node_edge_shrink() to update the next_nodes[] links of a node
dynamically.
Method c) Use rte_node_clone() to clone an already existing node.
When rte_node_clone() is invoked, the library clones all the attributes of
the node and creates a new one. The cloned node is named
"parent_node_name-user_provided_name". This method enables the use case of
the Rx and Tx nodes, where multiple such nodes need to be cloned based on the
number of CPUs available in the system. The cloned nodes are identical except
for the "context memory"; the context memory holds the port and queue pair
information in the case of the Rx and Tx ethdev nodes. Methods b) and c) are
sketched below.
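A small sketch of methods b) and c), assuming the control path APIs named
above (rte_node_from_name(), rte_node_edge_update(), rte_node_clone()); the
node and edge names are illustrative:

#include <rte_common.h>
#include <rte_graph.h>

static void
setup_links(void)
{
        const char *new_edges[] = {"pkt_drop", "ip4_lookup"};
        rte_node_t id;

        /* Method b): rewrite next_nodes[] of "my_node", starting at edge 0 */
        id = rte_node_from_name("my_node");
        rte_node_edge_update(id, 0, new_edges, RTE_DIM(new_edges));

        /* Method c): clone "ethdev_rx" for (port 0, queue 0); the clone is
         * named "ethdev_rx-0-0" per the naming scheme above */
        rte_node_clone(rte_node_from_name("ethdev_rx"), "0-0");
}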
# Create the graph object
~~~~~~~~~~~~~~~~~~~~~~~~~
Now that the nodes are linked, it is time to create a graph by including the
required nodes. The application can provide a set of node patterns to form
a graph object.
The fnmatch() API is used underneath for the pattern matching that selects
the nodes to include.
The rte_graph_create() API shall be used to create the graph.
Example of a graph object creation:
{"ethdev_rx-0-0", "ipv4-*", "ethdev_tx-0-*"}
In the above example, a graph object will be created with the ethdev Rx node
of port 0 and queue 0, all ipv4* nodes in the system, and the ethdev Tx
nodes of port 0 with all queues.
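A sketch of that creation, using the 'struct rte_graph_param' fields from
this patch (node_patterns, nb_node_patterns, socket_id):

#include <rte_common.h>
#include <rte_graph.h>
#include <rte_lcore.h>

static rte_graph_t
create_worker_graph(const char *name)
{
        static const char *patterns[] = {
                "ethdev_rx-0-0", "ipv4-*", "ethdev_tx-0-*"
        };
        struct rte_graph_param prm = {
                .socket_id = rte_socket_id(),
                .nb_node_patterns = RTE_DIM(patterns),
                .node_patterns = patterns,
        };

        /* Returns RTE_GRAPH_ID_INVALID and sets rte_errno on failure */
        return rte_graph_create(name, &prm);
}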
# Multi core graph processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the current graph library implementation, specifically, the
rte_graph_walk() and rte_node_enqueue* fast-path API functions
are designed to work on a single core, for better performance.
The fast-path APIs work on a graph object, so the multi-core graph
processing strategy is to create one graph object PER WORKER, for example:
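(A sketch reusing create_worker_graph() from the previous snippet; the
"workerN" naming is only a convention.)

#include <stdio.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_graph.h>
#include <rte_lcore.h>

static void
create_per_worker_graphs(void)
{
        char name[RTE_GRAPH_NAMESIZE];
        unsigned int lcore_id, n = 0;

        RTE_LCORE_FOREACH_SLAVE(lcore_id) {
                snprintf(name, sizeof(name), "worker%u", n++);
                if (create_worker_graph(name) == RTE_GRAPH_ID_INVALID)
                        rte_exit(EXIT_FAILURE, "graph %s failed\n", name);
        }
}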
# In fast path:
~~~~~~~~~~~~~~~
Typical fast-path code looks like below: the application gets the fast-path
graph object through rte_graph_lookup() on the worker thread and runs
rte_graph_walk() in a tight loop.

struct rte_graph *graph = rte_graph_lookup("worker0");

while (!done) {
        rte_graph_walk(graph);
}
# Context update while graph walk is in action
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The fast-path object for a node is 'struct rte_node'.
In the slow path, or while the graph walk is in action, the user may need to
update the context of a node, and hence needs access to the
struct rte_node * memory.
The rte_graph_foreach_node(), rte_graph_node_get() and
rte_graph_node_get_by_name() APIs can be used to get the struct rte_node *.
The rte_graph_foreach_node() iterator works on the struct rte_graph *
fast-path graph object, while the others work on the graph ID or name.
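For example (a sketch; the node name and the context layout are hypothetical,
as the context memory format is owned by the node implementation):

#include <rte_graph.h>
#include <rte_graph_worker.h>

struct my_rx_ctx {
        uint16_t port;
        uint16_t queue;
};

static void
update_rx_ctx(uint16_t port, uint16_t queue)
{
        struct rte_node *node;
        struct my_rx_ctx *ctx;

        node = rte_graph_node_get_by_name("worker0", "ethdev_rx-0-0");
        if (node == NULL)
                return;
        ctx = (struct my_rx_ctx *)node->ctx; /* per-node context memory */
        ctx->port = port;
        ctx->queue = queue;
}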
# Get the node statistics using graph cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The user may need to know the aggregate stats of a node across multiple
graph objects, especially in the situation where each graph object is bound
to a worker thread.
A graph cluster object is introduced for statistics.
rte_graph_cluster_stats_create() shall be used to create a graph cluster
with multiple graph objects, and rte_graph_cluster_stats_get() to get the
aggregate node statistics.
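A sketch of the stats flow; the parameter fields (graph name patterns, an
optional callback, a FILE * for the default printer) follow this series'
stats API, but treat the exact signatures as indicative at the RFC stage:

#include <stdio.h>

#include <rte_common.h>
#include <rte_graph.h>
#include <rte_memory.h>

static void
dump_cluster_stats(void)
{
        static const char *patterns[] = {"worker*"};
        struct rte_graph_cluster_stats_param prm = {
                .socket_id = SOCKET_ID_ANY,
                .f = stdout, /* the default callback prints here */
                .nb_graph_patterns = RTE_DIM(patterns),
                .graph_patterns = patterns,
        };
        struct rte_graph_cluster_stats *stats;

        stats = rte_graph_cluster_stats_create(&prm);
        if (stats == NULL)
                return;
        rte_graph_cluster_stats_get(stats, false /* don't skip the callback */);
        rte_graph_cluster_stats_destroy(stats);
}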
An example statistics output from rte_graph_cluster_stats_get()
+-----------+------------+-------------+---------------+------------+---------------+-----------+
|Node       |calls       |objs         |realloc_count  |objs/call   |objs/sec(10E6) |cycles/call|
+-----------+------------+-------------+---------------+------------+---------------+-----------+
|node0      |12977424    |3322220544   |5              |256.000     |3047.151872    |20.0000    |
|node1      |12977653    |3322279168   |0              |256.000     |3047.210496    |17.0000    |
|node2      |12977696    |3322290176   |0              |256.000     |3047.221504    |17.0000    |
|node3      |12977734    |3322299904   |0              |256.000     |3047.231232    |17.0000    |
|node4      |12977784    |3322312704   |1              |256.000     |3047.243776    |17.0000    |
|node5      |12977825    |3322323200   |0              |256.000     |3047.254528    |17.0000    |
+-----------+------------+-------------+---------------+------------+---------------+-----------+
# Node writing guidelines
~~~~~~~~~~~~~~~~~~~~~~~~~
The process() function of a node is a fast-path function, and it needs to be
written carefully to achieve maximum performance.
Broadly speaking, there are two kinds of nodes.
1) The first kind of node has a fixed next_nodes[] for the complete burst
(like ethdev_rx, ethdev_tx) and is simple to write.
The process() function can move the object burst to the next node either
using rte_node_next_stream_move(), or using rte_node_next_stream_get() and
rte_node_next_stream_put().
2) The second kind is the `intermediate node` that decides the next_nodes[]
to send to on a per-packet basis. In these nodes,
a) Firstly, there has to be the best possible packet processing logic.
b) Secondly, each packet needs to be queued to its next node.
At least on some architectures, we get around ~10% more performance if we can
avoid copying packet pointers from one node to the next, as that copy is
~= memcpy(BURST_SIZE x sizeof(void *)) x NODE_COUNT.
The copy can be avoided only in the case where all the packets are destined
to the same next node. We call this the "home run" case, and we use
rte_node_next_stream_move() to move the burst of the object array by swapping
a pointer, i.e. to move the stream from one node to the next node with the
least number of cycles.
Example of an intermediate node implementation with home run:
a) Start with the speculation that next_node = ctx->next_node.
This could be the next_node the application used in the previous invocation
of this node.
b) Get the next_node stream array and its space using
rte_node_next_stream_get(next_node, &space).
c) While space != 0 and n_pkts_left != 0,
prefetch the next pkt_set and process the current pkt_set to find its next
node.
d) If all the next nodes of the current pkt_set match the speculated next
node, just count them as successfully speculated (last_spec) till now and
continue the loop without actually moving them to the next node.
Else, if there is a mismatch,
copy all the pkt_set pointers that were last_spec and
move the current pkt_set to their respective next nodes using
rte_node_enqueue_x1(). One of the next nodes can also be promoted to the
speculated next_node if it is more probable. Also set last_spec = 0.
e) If n_pkts_left != 0 and space != 0,
goto c), as there is space in the speculated next_node.
f) If last_spec == n_pkts_left,
then we successfully speculated all the packets to the right next node.
Just call rte_node_next_stream_move(node, next_node) to move the
stream/object array to the next node. This is the home run case, where we
avoided the memcpy of buffer pointers to the next node.
g) If space == 0 and n_pkts_left != 0,
goto b).
h) Update ctx->next_node with the more probable next node.
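A condensed sketch of the above in code, modeled on the scalar path of the
in-tree ip4_lookup node (patch 2 of this series). classify() is a
hypothetical per-packet stand-in, the worker API signatures are assumed from
rte_graph_worker.h, and prefetching plus the step g) stream re-acquisition
are omitted for brevity:

#include <rte_branch_prediction.h>
#include <rte_graph.h>
#include <rte_graph_worker.h>
#include <rte_memcpy.h>

/* Hypothetical per-packet classification returning the next edge */
static rte_edge_t classify(void *obj);

static uint16_t
my_classifier_process(struct rte_graph *graph, struct rte_node *node,
                      void **objs, uint16_t nb_objs)
{
        rte_edge_t next_index = *(rte_edge_t *)node->ctx; /* a) speculation */
        uint16_t held = 0, last_spec = 0, i;
        void **from = objs, **to_next;

        /* b) Get the stream of the speculated next node */
        to_next = rte_node_next_stream_get(graph, node, next_index, nb_objs);

        for (i = 0; i < nb_objs; i++) { /* c) process the pkt_set */
                rte_edge_t next = classify(objs[i]);

                if (likely(next == next_index)) {
                        last_spec += 1; /* d) speculation hit, just count */
                } else {
                        /* d) miss: flush the speculated run, then enqueue
                         * the mismatching object to its real next node */
                        rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
                        from += last_spec;
                        to_next += last_spec;
                        held += last_spec;
                        last_spec = 0;
                        rte_node_enqueue_x1(graph, node, next, from[0]);
                        from += 1;
                }
        }

        if (likely(last_spec == nb_objs)) { /* f) home run: pointer swap */
                rte_node_next_stream_move(graph, node, next_index);
                return nb_objs;
        }

        /* Flush the trailing speculated run and commit the stream */
        held += last_spec;
        rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
        rte_node_next_stream_put(graph, node, next_index, held);
        return nb_objs;
}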
# In-tree node documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
a) librte_node/ethdev_rx.c:
This node does rte_eth_rx_burst() into a stream buffer acquired using
rte_node_next_stream_get(), and does rte_node_next_stream_put(count)
only when packets have been received. Each rte_node works on only
one Rx port and queue, which it gets from node->context.
For each (port X, rx_queue Y), an rte_node is cloned from ethdev_rx_base_node
as "ethdev_rx-X-Y" in rte_node_eth_config(), along with updating
node->context. Each graph needs to be associated with a unique
rte_node for a (port, rx_queue).
b) librte_node/ethdev_tx.c:
This node does rte_eth_tx_burst() for a burst of objects received by it.
It sends the burst to the fixed Tx port and queue taken from
node->context. For each port X, this rte_node is cloned from
ethdev_tx_node_base as "ethdev_tx-X" in rte_node_eth_config(),
along with updating node->context.
Since each graph doesn't need more than one Tx queue per port,
a Tx queue is assigned to each rte_node instance based on the graph ID.
Each graph needs to be associated with an rte_node per port.
c) librte_node/pkt_drop.c:
This node frees all the objects that are passed to it.
d) librte_node/ip4_lookup.c:
This node is an intermediate node that does an LPM lookup for the received
IPv4 packets; the result determines each packet's next node.
a) On a successful LPM lookup, the result contains the next_node ID and the
next-hop ID with which the packet needs to be further processed.
b) On LPM lookup failure, objects are redirected to the pkt_drop node.
rte_node_ip4_route_add() is the control path API to add IPv4 routes.
To achieve the home run case, we use rte_node_next_stream_move() as
mentioned in the sections above.
e) librte_node/ip4_rewrite.c:
This node gets packets from the ip4_lookup node, with the next-hop ID for
each packet embedded in rte_node_mbuf_priv1(mbuf)->nh. This ID is used
to determine the L2 header to be written to the packet before sending
it out to a particular ethdev_tx node.
rte_node_ip4_rewrite_add() is the control path API to add next-hop info.
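A sketch of those two control path calls together; the prototypes are taken
from this series' rte_node_ip4_api.h and may evolve with the RFC, and the
route, next-hop ID and rewrite data are hypothetical:

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_node_ip4_api.h>

static int
setup_route(void)
{
        /* Hypothetical pre-built L2 header for next-hop 5 */
        uint8_t rewrite_data[RTE_ETHER_HDR_LEN] = {0};
        int rc;

        /* 10.0.2.0/24 -> next-hop 5, further processed by ip4_rewrite */
        rc = rte_node_ip4_route_add(RTE_IPV4(10, 0, 2, 0), 24, 5,
                                    RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
        if (rc < 0)
                return rc;

        /* next-hop 5: prepend rewrite_data, then send to port 0's Tx node */
        return rte_node_ip4_rewrite_add(5, rewrite_data,
                                        sizeof(rewrite_data), 0);
}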
Jerin Jacob (1):
graph: introduce graph subsystem
Kiran Kumar K (1):
test: add graph functional tests
Nithin Dabilpuram (2):
node: add packet processing nodes
example/l3fwd_graph: l3fwd using graph architecture
Pavan Nikhilesh (1):
test: add graph performance test cases.
app/test/Makefile | 5 +
app/test/meson.build | 10 +-
app/test/test_graph.c | 820 +++++++++++++++++
app/test/test_graph_perf.c | 888 +++++++++++++++++++
config/common_base | 13 +
config/rte_config.h | 4 +
examples/Makefile | 3 +
examples/l3fwd-graph/Makefile | 58 ++
examples/l3fwd-graph/main.c | 1131 ++++++++++++++++++++++++
examples/l3fwd-graph/meson.build | 13 +
examples/meson.build | 6 +-
lib/Makefile | 6 +
lib/librte_graph/Makefile | 28 +
lib/librte_graph/graph.c | 578 ++++++++++++
lib/librte_graph/graph_debug.c | 81 ++
lib/librte_graph/graph_ops.c | 163 ++++
lib/librte_graph/graph_populate.c | 224 +++++
lib/librte_graph/graph_private.h | 113 +++
lib/librte_graph/graph_stats.c | 396 +++++++++
lib/librte_graph/meson.build | 11 +
lib/librte_graph/node.c | 419 +++++++++
lib/librte_graph/rte_graph.h | 277 ++++++
lib/librte_graph/rte_graph_version.map | 46 +
lib/librte_graph/rte_graph_worker.h | 280 ++++++
lib/librte_node/Makefile | 30 +
lib/librte_node/ethdev_ctrl.c | 106 +++
lib/librte_node/ethdev_rx.c | 218 +++++
lib/librte_node/ethdev_rx.h | 17 +
lib/librte_node/ethdev_rx_priv.h | 45 +
lib/librte_node/ethdev_tx.c | 74 ++
lib/librte_node/ethdev_tx_priv.h | 33 +
lib/librte_node/ip4_lookup.c | 657 ++++++++++++++
lib/librte_node/ip4_lookup_priv.h | 17 +
lib/librte_node/ip4_rewrite.c | 340 +++++++
lib/librte_node/ip4_rewrite_priv.h | 44 +
lib/librte_node/log.c | 14 +
lib/librte_node/meson.build | 8 +
lib/librte_node/node_private.h | 61 ++
lib/librte_node/null.c | 23 +
lib/librte_node/pkt_drop.c | 26 +
lib/librte_node/rte_node_eth_api.h | 31 +
lib/librte_node/rte_node_ip4_api.h | 33 +
lib/librte_node/rte_node_version.map | 9 +
lib/meson.build | 5 +-
meson.build | 1 +
mk/rte.app.mk | 2 +
46 files changed, 7362 insertions(+), 5 deletions(-)
create mode 100644 app/test/test_graph.c
create mode 100644 app/test/test_graph_perf.c
create mode 100644 examples/l3fwd-graph/Makefile
create mode 100644 examples/l3fwd-graph/main.c
create mode 100644 examples/l3fwd-graph/meson.build
create mode 100644 lib/librte_graph/Makefile
create mode 100644 lib/librte_graph/graph.c
create mode 100644 lib/librte_graph/graph_debug.c
create mode 100644 lib/librte_graph/graph_ops.c
create mode 100644 lib/librte_graph/graph_populate.c
create mode 100644 lib/librte_graph/graph_private.h
create mode 100644 lib/librte_graph/graph_stats.c
create mode 100644 lib/librte_graph/meson.build
create mode 100644 lib/librte_graph/node.c
create mode 100644 lib/librte_graph/rte_graph.h
create mode 100644 lib/librte_graph/rte_graph_version.map
create mode 100644 lib/librte_graph/rte_graph_worker.h
create mode 100644 lib/librte_node/Makefile
create mode 100644 lib/librte_node/ethdev_ctrl.c
create mode 100644 lib/librte_node/ethdev_rx.c
create mode 100644 lib/librte_node/ethdev_rx.h
create mode 100644 lib/librte_node/ethdev_rx_priv.h
create mode 100644 lib/librte_node/ethdev_tx.c
create mode 100644 lib/librte_node/ethdev_tx_priv.h
create mode 100644 lib/librte_node/ip4_lookup.c
create mode 100644 lib/librte_node/ip4_lookup_priv.h
create mode 100644 lib/librte_node/ip4_rewrite.c
create mode 100644 lib/librte_node/ip4_rewrite_priv.h
create mode 100644 lib/librte_node/log.c
create mode 100644 lib/librte_node/meson.build
create mode 100644 lib/librte_node/node_private.h
create mode 100644 lib/librte_node/null.c
create mode 100644 lib/librte_node/pkt_drop.c
create mode 100644 lib/librte_node/rte_node_eth_api.h
create mode 100644 lib/librte_node/rte_node_ip4_api.h
create mode 100644 lib/librte_node/rte_node_version.map
--
2.24.1
* [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
@ 2020-01-31 17:01 ` jerinj
2020-02-02 10:34 ` Stephen Hemminger
` (2 more replies)
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 2/5] node: add packet processing nodes jerinj
` (5 subsequent siblings)
6 siblings, 3 replies; 31+ messages in thread
From: jerinj @ 2020-01-31 17:01 UTC (permalink / raw)
To: dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
Abstracting data processing functions as “nodes”, “linking” them together to
create a complex “graph” and thereby enabling reusable data processing
functions is a proven architecture.
Introducing the graph infrastructure for packet processing.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
app/test/meson.build | 1 +
config/common_base | 7 +
config/rte_config.h | 4 +
lib/Makefile | 2 +
lib/librte_graph/Makefile | 28 ++
lib/librte_graph/graph.c | 578 +++++++++++++++++++++++++
lib/librte_graph/graph_debug.c | 81 ++++
lib/librte_graph/graph_ops.c | 163 +++++++
lib/librte_graph/graph_populate.c | 224 ++++++++++
lib/librte_graph/graph_private.h | 113 +++++
lib/librte_graph/graph_stats.c | 396 +++++++++++++++++
lib/librte_graph/meson.build | 11 +
lib/librte_graph/node.c | 419 ++++++++++++++++++
lib/librte_graph/rte_graph.h | 277 ++++++++++++
lib/librte_graph/rte_graph_version.map | 46 ++
lib/librte_graph/rte_graph_worker.h | 280 ++++++++++++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 2632 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_graph/Makefile
create mode 100644 lib/librte_graph/graph.c
create mode 100644 lib/librte_graph/graph_debug.c
create mode 100644 lib/librte_graph/graph_ops.c
create mode 100644 lib/librte_graph/graph_populate.c
create mode 100644 lib/librte_graph/graph_private.h
create mode 100644 lib/librte_graph/graph_stats.c
create mode 100644 lib/librte_graph/meson.build
create mode 100644 lib/librte_graph/node.c
create mode 100644 lib/librte_graph/rte_graph.h
create mode 100644 lib/librte_graph/rte_graph_version.map
create mode 100644 lib/librte_graph/rte_graph_worker.h
diff --git a/app/test/meson.build b/app/test/meson.build
index 22b0cefaa..e1cdae3cb 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -160,6 +160,7 @@ test_deps = ['acl',
'ring',
'stack',
- 'timer'
+ 'timer',
+ 'graph',
]
fast_test_names = [
diff --git a/config/common_base b/config/common_base
index c897dd0ae..badcc0be5 100644
--- a/config/common_base
+++ b/config/common_base
@@ -1070,6 +1070,13 @@ CONFIG_RTE_LIBRTE_BPF_ELF=n
#
CONFIG_RTE_LIBRTE_IPSEC=y
+#
+# Compile librte_graph
+#
+CONFIG_RTE_LIBRTE_GRAPH=y
+CONFIG_RTE_GRAPH_BURST_SIZE=256
+CONFIG_RTE_LIBRTE_GRAPH_STATS=y
+CONFIG_RTE_LIBRTE_GRAPH_DEBUG=n
#
# Compile the test application
#
diff --git a/config/rte_config.h b/config/rte_config.h
index d30786bc0..e9201fd46 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -98,6 +98,10 @@
/* KNI defines */
#define RTE_KNI_PREEMPT_DEFAULT 1
+/* rte_graph defines */
+#define RTE_GRAPH_BURST_SIZE 256
+#define RTE_LIBRTE_GRAPH_STATS 1
+
/****** driver defines ********/
/* QuickAssist device */
diff --git a/lib/Makefile b/lib/Makefile
index 46b91ae1a..495f572bf 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -119,6 +119,8 @@ DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu
DEPDIRS-librte_rcu := librte_eal
+DIRS-$(CONFIG_RTE_LIBRTE_GRAPH) += librte_graph
+DEPDIRS-librte_graph := librte_eal librte_mbuf
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
endif
diff --git a/lib/librte_graph/Makefile b/lib/librte_graph/Makefile
new file mode 100644
index 000000000..967c8d9bc
--- /dev/null
+++ b/lib/librte_graph/Makefile
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_graph.a
+
+CFLAGS += -O3 -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_graph_version.map
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_GRAPH) += node.c
+SRCS-$(CONFIG_RTE_LIBRTE_GRAPH) += graph.c
+SRCS-$(CONFIG_RTE_LIBRTE_GRAPH) += graph_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_GRAPH) += graph_debug.c
+SRCS-$(CONFIG_RTE_LIBRTE_GRAPH) += graph_stats.c
+SRCS-$(CONFIG_RTE_LIBRTE_GRAPH) += graph_populate.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_GRAPH)-include += rte_graph.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_GRAPH)-include += rte_graph_worker.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_graph/graph.c b/lib/librte_graph/graph.c
new file mode 100644
index 000000000..0bdd6e1c0
--- /dev/null
+++ b/lib/librte_graph/graph.c
@@ -0,0 +1,578 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <fnmatch.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_errno.h>
+#include <rte_graph.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+
+#include "graph_private.h"
+
+static struct graph_head graph_list = STAILQ_HEAD_INITIALIZER(graph_list);
+static rte_spinlock_t graph_lock = RTE_SPINLOCK_INITIALIZER;
+static rte_graph_t graph_id;
+int rte_graph_logtype;
+
+#define graph_id_check(id) id_check(id, graph_id)
+
+/* Private functions */
+struct graph_head *
+graph_list_head_get(void)
+{
+ return &graph_list;
+}
+
+void
+graph_spinlock_lock(void)
+{
+ rte_spinlock_lock(&graph_lock);
+}
+
+void
+graph_spinlock_unlock(void)
+{
+ rte_spinlock_unlock(&graph_lock);
+}
+
+static int
+graph_node_add(struct graph *graph, struct node *node)
+{
+ struct graph_node *graph_node;
+ size_t sz;
+
+ /* Skip the duplicate nodes */
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ if (strncmp(node->name, graph_node->node->name,
+ RTE_NODE_NAMESIZE) == 0)
+ return 0;
+
+ /* Allocate new graph node object */
+ sz = sizeof(*graph_node) + node->nb_edges * sizeof(struct node *);
+ graph_node = calloc(1, sz);
+
+ if (graph_node == NULL)
+ set_err(ENOMEM, free, "failed to calloc %s object", node->name);
+
+ /* Initialize the graph node */
+ graph_node->node = node;
+
+ /* Add to graph node list */
+ STAILQ_INSERT_TAIL(&graph->node_list, graph_node, next);
+ return 0;
+
+free:
+ free(graph_node);
+ return -rte_errno;
+}
+
+static struct graph_node *
+node_to_graph_node(struct graph *graph, struct node *node)
+{
+ struct graph_node *graph_node;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ if (graph_node->node == node)
+ return graph_node;
+
+ set_err(ENODEV, fail, "found isolated node %s", node->name);
+fail:
+ return NULL;
+}
+
+static int
+graph_node_edges_add(struct graph *graph)
+{
+ struct graph_node *graph_node;
+ struct node *adjacency;
+ const char *next;
+ rte_edge_t i;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ for (i = 0; i < graph_node->node->nb_edges; i++) {
+ next = graph_node->node->next_nodes[i];
+ adjacency = node_from_name(next);
+ if (adjacency == NULL)
+ set_err(EINVAL, fail, "node %s not registered",
+ next);
+ if (graph_node_add(graph, adjacency))
+ goto fail;
+ }
+ }
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+static int
+graph_adjacency_list_update(struct graph *graph)
+{
+ struct graph_node *graph_node, *tmp;
+ struct node *adjacency;
+ const char *next;
+ rte_edge_t i;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ for (i = 0; i < graph_node->node->nb_edges; i++) {
+ next = graph_node->node->next_nodes[i];
+ adjacency = node_from_name(next);
+ if (adjacency == NULL)
+ set_err(EINVAL, fail, "node %s not registered",
+ next);
+ tmp = node_to_graph_node(graph, adjacency);
+ if (tmp == NULL)
+ goto fail;
+ graph_node->adjacency_list[i] = tmp;
+ }
+ }
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+static int
+expand_pattern_to_node(struct graph *graph, const char *pattern)
+{
+ struct node_head *node_head = node_list_head_get();
+ bool found = false;
+ struct node *node;
+
+ /* Check for pattern match */
+ STAILQ_FOREACH(node, node_head, next) {
+ if (fnmatch(pattern, node->name, 0) == 0) {
+ if (graph_node_add(graph, node))
+ goto fail;
+ found = true;
+ }
+ }
+ if (found == false)
+ set_err(EFAULT, fail, "pattern %s node not found", pattern);
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+static void
+graph_cleanup(struct graph *graph)
+{
+ struct graph_node *graph_node;
+
+ while (!STAILQ_EMPTY(&graph->node_list)) {
+ graph_node = STAILQ_FIRST(&graph->node_list);
+ STAILQ_REMOVE_HEAD(&graph->node_list, next);
+ free(graph_node);
+ }
+}
+
+static int
+graph_node_init(struct graph *graph)
+{
+ struct graph_node *graph_node;
+ const char *name;
+ int rc;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ if (graph_node->node->init) {
+ name = graph_node->node->name;
+ rc = graph_node->node->init(graph->graph,
+ graph_node_name_to_ptr(graph->graph, name));
+ if (rc)
+ set_err(rc, err, "node %s init() failed", name);
+ }
+ }
+
+ return 0;
+err:
+ return -rte_errno;
+}
+
+static void
+graph_node_fini(struct graph *graph)
+{
+ struct graph_node *graph_node;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ if (graph_node->node->fini)
+ graph_node->node->fini(graph->graph,
+ graph_node_name_to_ptr(graph->graph,
+ graph_node->node->name));
+}
+
+static struct rte_graph *
+graph_mem_fixup_node_ctx(struct rte_graph *graph)
+{
+ rte_node_t count;
+ rte_graph_off_t off;
+ struct rte_node *node;
+ struct node *node_db;
+ const char *name;
+
+ rte_graph_foreach_node(count, off, graph, node) {
+ if (node->parent_id == RTE_NODE_ID_INVALID) /* Static node */
+ name = node->name;
+ else /* Cloned node */
+ name = node->parent;
+
+ node_db = node_from_name(name);
+ if (node_db == NULL)
+ set_err(ENOLINK, fail, "node %s not found", name);
+ node->process = node_db->process;
+ }
+
+ return graph;
+fail:
+ return NULL;
+}
+
+static struct rte_graph *
+graph_mem_fixup_secondray(struct rte_graph *graph)
+{
+ if (graph == NULL || rte_eal_process_type() == RTE_PROC_PRIMARY)
+ return graph;
+
+ return graph_mem_fixup_node_ctx(graph);
+}
+
+struct rte_graph *
+rte_graph_lookup(const char *name)
+{
+ const struct rte_memzone *mz;
+ struct rte_graph *rc = NULL;
+
+ mz = rte_memzone_lookup(name);
+ if (mz)
+ rc = mz->addr;
+
+ return graph_mem_fixup_secondray(rc);
+}
+
+rte_graph_t
+rte_graph_create(const char *name, struct rte_graph_param *prm)
+{
+ struct graph *graph;
+ const char *pattern;
+ uint16_t i;
+
+ graph_spinlock_lock();
+
+ /* Check arguments sanity */
+ if (prm == NULL)
+ set_err(EINVAL, fail, "param should not be NULL");
+
+ if (name == NULL)
+ set_err(EINVAL, fail, "graph name should not be NULL");
+
+ /* Check for existence of duplicate graph */
+ STAILQ_FOREACH(graph, &graph_list, next)
+ if (strncmp(name, graph->name, RTE_GRAPH_NAMESIZE) == 0)
+ set_err(EEXIST, fail, "found duplicate graph %s", name);
+
+ /* Create graph object */
+ graph = calloc(1, sizeof(*graph));
+ if (graph == NULL)
+ set_err(ENOMEM, fail, "failed to calloc graph object");
+
+ /* Initialize the graph object */
+ STAILQ_INIT(&graph->node_list);
+ if (rte_strscpy(graph->name, name, RTE_GRAPH_NAMESIZE) < 0)
+ set_err(E2BIG, free, "too big name=%s", name);
+
+ /* Expand node pattern and add the nodes to the graph */
+ for (i = 0; i < prm->nb_node_patterns; i++) {
+ pattern = prm->node_patterns[i];
+ if (expand_pattern_to_node(graph, pattern))
+ goto graph_cleanup;
+ }
+
+ /* Go over all the nodes edges and add them to the graph */
+ if (graph_node_edges_add(graph))
+ goto graph_cleanup;
+
+ /* Update adjacency list of all nodes in the graph */
+ if (graph_adjacency_list_update(graph))
+ goto graph_cleanup;
+
+ /* Make sure at least one source node is present in the graph */
+ if (!graph_src_nodes_count(graph))
+ goto graph_cleanup;
+
+ /* Make sure no node is pointing to source node */
+ if (graph_node_has_edge_to_src_node(graph))
+ goto graph_cleanup;
+
+ /* Don't allow a node to loop back to itself */
+ if (graph_node_has_loop_edge(graph))
+ goto graph_cleanup;
+
+ /* Do BFS from src nodes on the graph to find isolated nodes */
+ if (graph_has_isolated_node(graph))
+ goto graph_cleanup;
+
+ /* Initialize graph object */
+ graph->socket = prm->socket_id;
+ graph->src_node_count = graph_src_nodes_count(graph);
+ graph->node_count = graph_nodes_count(graph);
+ graph->id = graph_id;
+
+ /* Allocate the Graph fast path memory and populate the data */
+ if (graph_fp_mem_create(graph))
+ goto graph_cleanup;
+
+ /* Call init() of all the nodes in the graph */
+ if (graph_node_init(graph))
+ goto graph_mem_destroy;
+
+ /* All good, let's add the graph to the list */
+ graph_id++;
+ STAILQ_INSERT_TAIL(&graph_list, graph, next);
+
+ graph_spinlock_unlock();
+ return graph->id;
+
+graph_mem_destroy:
+ graph_fp_mem_destroy(graph);
+graph_cleanup:
+ graph_cleanup(graph);
+free:
+ free(graph);
+fail:
+ graph_spinlock_unlock();
+ return RTE_GRAPH_ID_INVALID;
+}
+
+rte_graph_t
+rte_graph_destroy(const char *graph_name)
+{
+ rte_graph_t rc = RTE_GRAPH_ID_INVALID;
+ struct graph *graph, *tmp;
+ const char *name;
+
+ graph_spinlock_lock();
+
+ graph = STAILQ_FIRST(&graph_list);
+ while (graph != NULL) {
+ tmp = STAILQ_NEXT(graph, next);
+ name = graph->name;
+ if (strncmp(name, graph_name, RTE_GRAPH_NAMESIZE) == 0) {
+ /* Call fini() of all the nodes in the graph */
+ graph_node_fini(graph);
+ /* Destroy graph fast path memory */
+ rc = graph_fp_mem_destroy(graph);
+ if (rc)
+ set_err(rc, done, "mz %s free failed", name);
+
+ graph_cleanup(graph);
+ STAILQ_REMOVE(&graph_list, graph, graph, next);
+ rc = graph->id;
+ free(graph);
+ graph_id--;
+ goto done;
+ }
+ graph = tmp;
+ }
+done:
+ graph_spinlock_unlock();
+ return rc;
+}
+
+rte_graph_t
+rte_graph_from_name(const char *name)
+{
+ struct graph *graph;
+
+ STAILQ_FOREACH(graph, &graph_list, next)
+ if (strncmp(graph->name, name, RTE_GRAPH_NAMESIZE) == 0)
+ return graph->id;
+
+ return RTE_GRAPH_ID_INVALID;
+}
+
+char *
+rte_graph_id_to_name(rte_graph_t id)
+{
+ struct graph *graph;
+
+ graph_id_check(id);
+ STAILQ_FOREACH(graph, &graph_list, next)
+ if (graph->id == id)
+ return graph->name;
+
+fail:
+ return NULL;
+}
+
+struct rte_node *rte_graph_node_get(rte_graph_t gid, uint32_t nid)
+{
+ struct rte_node *node;
+ struct graph *graph;
+ rte_graph_off_t off;
+ rte_node_t count;
+
+ graph_id_check(gid);
+ STAILQ_FOREACH(graph, &graph_list, next)
+ if (graph->id == gid) {
+ rte_graph_foreach_node(count, off, graph->graph, node) {
+ if (node->id == nid)
+ return node;
+ }
+ break;
+ }
+fail:
+ return NULL;
+}
+
+struct rte_node *rte_graph_node_get_by_name(const char *graph_name,
+ const char *node_name)
+{
+ struct rte_node *node;
+ struct graph *graph;
+ rte_graph_off_t off;
+ rte_node_t count;
+
+ STAILQ_FOREACH(graph, &graph_list, next)
+ if (!strncmp(graph->name, graph_name, RTE_GRAPH_NAMESIZE)) {
+ rte_graph_foreach_node(count, off, graph->graph, node) {
+ if (!strncmp(node->name, node_name,
+ RTE_NODE_NAMESIZE))
+ return node;
+ }
+ break;
+ }
+
+ return NULL;
+}
+
+__rte_experimental void __rte_noinline
+__rte_node_stream_alloc(struct rte_graph *graph, struct rte_node *node)
+{
+ uint16_t size = node->size;
+
+ RTE_VERIFY(size != UINT16_MAX);
+ /* Allocate double amount of size to avoid immediate realloc */
+ size = RTE_MIN(UINT16_MAX, RTE_MAX(RTE_GRAPH_BURST_SIZE, size * 2));
+ node->objs = rte_realloc_socket(node->objs, size * sizeof(void *),
+ RTE_CACHE_LINE_SIZE, graph->socket);
+ RTE_VERIFY(node->objs);
+ node->size = size;
+ node->realloc_count++;
+}
+
+__rte_experimental void __rte_noinline
+__rte_node_stream_alloc_size(struct rte_graph *graph, struct rte_node *node,
+ uint16_t req_size)
+{
+ uint16_t size = node->size;
+
+ RTE_VERIFY(size != UINT16_MAX);
+ /* Allocate double amount of size to avoid immediate realloc */
+ size = RTE_MIN(UINT16_MAX, RTE_MAX(RTE_GRAPH_BURST_SIZE, req_size * 2));
+ node->objs = rte_realloc_socket(node->objs, size * sizeof(void *),
+ RTE_CACHE_LINE_SIZE, graph->socket);
+ RTE_VERIFY(node->objs);
+ node->size = size;
+ node->realloc_count++;
+}
+
+static int
+graph_to_dot(FILE *f, struct graph *graph)
+{
+ const char *src_edge_color = " [color=blue]\n";
+ const char *edge_color = "\n";
+ struct graph_node *graph_node;
+ char *node_name;
+ rte_edge_t i;
+ int rc;
+
+ rc = fprintf(f, "digraph %s {\n\trankdir=LR;\n", graph->name);
+ if (rc < 0)
+ goto end;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ node_name = graph_node->node->name;
+ for (i = 0; i < graph_node->node->nb_edges; i++) {
+ rc = fprintf(f, "\t\"%s\"->\"%s\"%s", node_name,
+ graph_node->adjacency_list[i]->node->name,
+ graph_node->node->flags & RTE_NODE_SOURCE_F ?
+ src_edge_color : edge_color);
+ if (rc < 0)
+ goto end;
+ }
+ }
+ rc = fprintf(f, "}\n");
+ if (rc < 0)
+ goto end;
+
+ return 0;
+end:
+ rte_errno = EBADF;
+ return -rte_errno;
+}
+
+rte_graph_t
+rte_graph_export(const char *name, FILE *f)
+{
+ rte_graph_t rc = RTE_GRAPH_ID_INVALID;
+ struct graph *graph;
+
+ STAILQ_FOREACH(graph, &graph_list, next) {
+ if (strncmp(graph->name, name, RTE_GRAPH_NAMESIZE) == 0) {
+ rc = graph_to_dot(f, graph);
+ goto end;
+ }
+ }
+ rte_errno = ENOENT;
+end:
+ return rc;
+}
+
+static void
+graph_scan_dump(FILE *f, rte_graph_t id, bool all)
+{
+ struct graph *graph;
+
+ RTE_VERIFY(f);
+ graph_id_check(id);
+
+ STAILQ_FOREACH(graph, &graph_list, next) {
+ if (all == true) {
+ graph_dump(f, graph);
+ } else if (graph->id == id) {
+ graph_dump(f, graph);
+ return;
+ }
+ }
+fail:
+ return;
+}
+
+void
+rte_graph_dump(FILE *f, rte_graph_t id)
+{
+ graph_scan_dump(f, id, false);
+}
+
+void
+rte_graph_list_dump(FILE *f)
+{
+ graph_scan_dump(f, 0, true);
+}
+
+RTE_INIT(rte_graph_init_log)
+{
+ rte_graph_logtype = rte_log_register("lib.graph");
+ if (rte_graph_logtype >= 0)
+ rte_log_set_level(rte_graph_logtype, RTE_LOG_INFO);
+}
+
+rte_graph_t
+rte_graph_max_count(void)
+{
+ return graph_id;
+}
diff --git a/lib/librte_graph/graph_debug.c b/lib/librte_graph/graph_debug.c
new file mode 100644
index 000000000..7d42889fb
--- /dev/null
+++ b/lib/librte_graph/graph_debug.c
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_debug.h>
+
+#include "graph_private.h"
+
+void
+graph_dump(FILE *f, struct graph *g)
+{
+ struct graph_node *graph_node;
+ rte_edge_t i = 0;
+
+ fprintf(f, "graph <%s>\n", g->name);
+ fprintf(f, " id=%"PRIu32"\n", g->id);
+ fprintf(f, " cir_start=%"PRIu32"\n", g->cir_start);
+ fprintf(f, " cir_mask=%"PRIu32"\n", g->cir_mask);
+ fprintf(f, " addr=%p\n", g);
+ fprintf(f, " graph=%p\n", g->graph);
+ fprintf(f, " mem_sz=%zu\n", g->mem_sz);
+ fprintf(f, " node_count=%"PRIu32"\n", g->node_count);
+ fprintf(f, " src_node_count=%"PRIu32"\n", g->src_node_count);
+
+ STAILQ_FOREACH(graph_node, &g->node_list, next)
+ fprintf(f, " node[%d] <%s>\n", i++, graph_node->node->name);
+}
+
+void
+node_dump(FILE *f, struct node *n)
+{
+ rte_edge_t i;
+
+ fprintf(f, "node <%s>\n", n->name);
+ fprintf(f, " id=%"PRIu32"\n", n->id);
+ fprintf(f, " flags=0x%" PRIx64 "\n", n->flags);
+ fprintf(f, " addr=%p\n", n);
+ fprintf(f, " process=%p\n", n->process);
+ fprintf(f, " nb_edges=%d\n", n->nb_edges);
+
+ for (i = 0; i < n->nb_edges; i++)
+ fprintf(f, " edge[%d] <%s>\n", i, n->next_nodes[i]);
+}
+
+void
+rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all)
+{
+ rte_node_t count;
+ rte_graph_off_t off;
+ struct rte_node *n;
+ rte_edge_t i;
+
+ fprintf(f, "graph <%s> @ %p\n", g->name, g);
+ fprintf(f, " id=%"PRIu32"\n", g->id);
+ fprintf(f, " head=%"PRId32"\n", (int32_t)g->head);
+ fprintf(f, " tail=%"PRId32"\n", (int32_t)g->tail);
+ fprintf(f, " cir_mask=0x%"PRIx32"\n", g->cir_mask);
+ fprintf(f, " nb_nodes=%"PRId32"\n", g->nb_nodes);
+ fprintf(f, " socket=%d\n", g->socket);
+ fprintf(f, " fence=0x%" PRIx64 "\n", g->fence);
+ fprintf(f, " nodes_start=0x%"PRIx32"\n", g->nodes_start);
+ fprintf(f, " cir_start=%p\n", g->cir_start);
+
+ rte_graph_foreach_node(count, off, g, n) {
+ if (!all && n->idx == 0)
+ continue;
+ fprintf(f, " node[%d] <%s>\n", count, n->name);
+ fprintf(f, " fence=0x%" PRIx64 "\n", n->fence);
+ fprintf(f, " objs=%p\n", n->objs);
+ fprintf(f, " process=%p\n", n->process);
+ fprintf(f, " id=0x%" PRIx32 "\n", n->id);
+ fprintf(f, " offset=0x%" PRIx32 "\n", n->off);
+ fprintf(f, " nb_edges=%" PRId32 "\n", n->nb_edges);
+ fprintf(f, " realloc_count=%d\n", n->realloc_count);
+ fprintf(f, " size=%d\n", n->size);
+ fprintf(f, " idx=%d\n", n->idx);
+ fprintf(f, " total_objs=%" PRId64 "\n", n->total_objs);
+ fprintf(f, " total_calls=%" PRId64 "\n", n->total_calls);
+ for (i = 0; i < n->nb_edges; i++)
+ fprintf(f, " edge[%d] <%s>\n",
+ i, n->nodes[i]->name);
+ }
+}
diff --git a/lib/librte_graph/graph_ops.c b/lib/librte_graph/graph_ops.c
new file mode 100644
index 000000000..6f5c69599
--- /dev/null
+++ b/lib/librte_graph/graph_ops.c
@@ -0,0 +1,163 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_errno.h>
+
+#include "graph_private.h"
+
+static inline int
+node_has_loop_edge(struct node *node)
+{
+ rte_edge_t i;
+ char *name;
+ int rc = 0;
+
+ for (i = 0; i < node->nb_edges; i++) {
+ if (strncmp(node->name, node->next_nodes[i],
+ RTE_NODE_NAMESIZE) == 0) {
+ name = node->name;
+ rc = 1;
+ set_err(EINVAL, fail, "node %s has loop to self", name);
+ }
+ }
+fail:
+ return rc;
+}
+
+int
+graph_node_has_loop_edge(struct graph *graph)
+{
+ struct graph_node *graph_node;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ if (node_has_loop_edge(graph_node->node))
+ return 1;
+
+ return 0;
+}
+
+rte_node_t
+graph_src_nodes_count(struct graph *graph)
+{
+ struct graph_node *graph_node;
+ rte_node_t rc = 0;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ if (graph_node->node->flags & RTE_NODE_SOURCE_F)
+ rc++;
+
+ if (rc == 0)
+ set_err(EINVAL, fail, "graph needs at least one source node");
+fail:
+ return rc;
+}
+
+int
+graph_node_has_edge_to_src_node(struct graph *graph)
+{
+ struct graph_node *graph_node;
+ struct node *node;
+ rte_edge_t i;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ for (i = 0; i < graph_node->node->nb_edges; i++) {
+ node = graph_node->adjacency_list[i]->node;
+ if (node->flags & RTE_NODE_SOURCE_F)
+ set_err(EEXIST, fail,
+ "node %s points to the source node %s",
+ graph_node->node->name, node->name);
+ }
+ }
+
+ return 0;
+fail:
+ return 1;
+}
+
+rte_node_t
+graph_nodes_count(struct graph *graph)
+{
+ struct graph_node *graph_node;
+ rte_node_t count = 0;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ count++;
+
+ return count;
+}
+
+void
+graph_mark_nodes_as_not_visited(struct graph *graph)
+{
+ struct graph_node *graph_node;
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ graph_node->visited = false;
+}
+
+int
+graph_bfs(struct graph *graph, struct graph_node *start)
+{
+ struct graph_node **queue, *v, *tmp;
+ uint16_t head = 0, tail = 0;
+ rte_edge_t i;
+ size_t sz;
+
+ sz = sizeof(struct graph_node *) * graph_nodes_count(graph);
+ queue = malloc(sz);
+ if (queue == NULL)
+ set_err(ENOMEM, fail, "failed to alloc bfs queue of %zu", sz);
+
+ /* BFS algorithm */
+ queue[tail++] = start;
+ start->visited = true;
+ while (head != tail) {
+ v = queue[head++];
+ for (i = 0; i < v->node->nb_edges; i++) {
+ tmp = v->adjacency_list[i];
+ if (tmp->visited == false) {
+ queue[tail++] = tmp;
+ tmp->visited = true;
+ }
+ }
+ }
+
+ free(queue);
+ return 0;
+
+fail:
+ return -rte_errno;
+}
+
+int
+graph_has_isolated_node(struct graph *graph)
+{
+ struct graph_node *graph_node;
+
+ graph_mark_nodes_as_not_visited(graph);
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ if (graph_node->node->flags & RTE_NODE_SOURCE_F) {
+ if (graph_node->node->nb_edges == 0)
+ set_err(EINVAL, fail,
+ "%s node needs minimum one edge",
+ graph_node->node->name);
+ if (graph_bfs(graph, graph_node))
+ goto fail;
+ }
+ }
+
+ STAILQ_FOREACH(graph_node, &graph->node_list, next)
+ if (graph_node->visited == false)
+ set_err(EINVAL, fail, "found isolated node %s",
+ graph_node->node->name);
+
+ return 0;
+fail:
+ return 1;
+}
diff --git a/lib/librte_graph/graph_populate.c b/lib/librte_graph/graph_populate.c
new file mode 100644
index 000000000..72476db11
--- /dev/null
+++ b/lib/librte_graph/graph_populate.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <fnmatch.h>
+
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+
+#include "graph_private.h"
+
+static size_t
+graph_fp_mem_calc_size(struct graph *graph)
+{
+ struct graph_node *graph_node;
+ rte_node_t val;
+ size_t sz;
+
+ /* Graph header */
+ sz = sizeof(struct rte_graph);
+ /* Source nodes list */
+ sz += sizeof(rte_graph_off_t) * graph->src_node_count;
+ /* Circular buffer for pending streams, sized by the number of nodes */
+ val = rte_align32pow2(graph->node_count * sizeof(rte_graph_off_t));
+ sz = RTE_ALIGN(sz, val);
+ graph->cir_start = sz;
+ graph->cir_mask = rte_align32pow2(graph->node_count) - 1;
+ sz += val;
+ /* Fence */
+ sz += sizeof(RTE_GRAPH_FENCE);
+ sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+ graph->nodes_start = sz;
+ /* For 0..N node objects with fence */
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+ sz += sizeof(struct rte_node);
+ /* Pointers to next nodes (edges) */
+ sz += sizeof(struct rte_node *) * graph_node->node->nb_edges;
+ }
+
+ graph->mem_sz = sz;
+ return sz;
+}
+
+static void
+graph_header_popluate(struct graph *_graph)
+{
+ struct rte_graph *graph = _graph->graph;
+
+ graph->tail = 0;
+ graph->head = (int32_t)-_graph->src_node_count;
+ graph->cir_mask = _graph->cir_mask;
+ graph->nb_nodes = _graph->node_count;
+ graph->cir_start = RTE_PTR_ADD(graph, _graph->cir_start);
+ graph->nodes_start = _graph->nodes_start;
+ graph->socket = _graph->socket;
+ graph->id = _graph->id;
+ memcpy(graph->name, _graph->name, RTE_GRAPH_NAMESIZE);
+ graph->fence = RTE_GRAPH_FENCE;
+}
+
+static void
+graph_nodes_populate(struct graph *_graph)
+{
+ rte_graph_off_t off = _graph->nodes_start;
+ struct rte_graph *graph = _graph->graph;
+ struct graph_node *graph_node;
+ rte_edge_t count, nb_edges;
+ const char *parent;
+ rte_node_t pid;
+
+ STAILQ_FOREACH(graph_node, &_graph->node_list, next) {
+ struct rte_node *node = RTE_PTR_ADD(graph, off);
+ memset(node, 0, sizeof(*node));
+ node->fence = RTE_GRAPH_FENCE;
+ node->off = off;
+ node->process = graph_node->node->process;
+ memcpy(node->name, graph_node->node->name, RTE_GRAPH_NAMESIZE);
+ pid = graph_node->node->parent_id;
+ if (pid != RTE_NODE_ID_INVALID) { /* Cloned node */
+ parent = rte_node_id_to_name(pid);
+ memcpy(node->parent, parent, RTE_GRAPH_NAMESIZE);
+ }
+ node->id = graph_node->node->id;
+ node->parent_id = pid;
+ nb_edges = graph_node->node->nb_edges;
+ node->nb_edges = nb_edges;
+ off += sizeof(struct rte_node);
+ /* Copy the name in the first pass; replaced with rte_node * later */
+ for (count = 0; count < nb_edges; count++)
+ node->nodes[count] = (struct rte_node *)
+ &graph_node->adjacency_list[count]->node->name[0];
+
+ off += sizeof(struct rte_node *) * nb_edges;
+ off = RTE_ALIGN(off, RTE_CACHE_LINE_SIZE);
+ node->next = off;
+ __rte_node_stream_alloc(graph, node);
+ }
+}
+
+struct rte_node *
+graph_node_id_to_ptr(const struct rte_graph *graph, rte_node_t id)
+{
+ rte_node_t count;
+ rte_graph_off_t off;
+ struct rte_node *node;
+
+ rte_graph_foreach_node(count, off, graph, node)
+ if (unlikely(node->id == id))
+ return node;
+
+ return NULL;
+}
+
+struct rte_node *
+graph_node_name_to_ptr(const struct rte_graph *graph, const char *name)
+{
+ rte_node_t count;
+ rte_graph_off_t off;
+ struct rte_node *node;
+
+ rte_graph_foreach_node(count, off, graph, node)
+ if (strncmp(name, node->name, RTE_NODE_NAMESIZE) == 0)
+ return node;
+
+ return NULL;
+}
+
+static int
+graph_node_nexts_populate(struct graph *_graph)
+{
+ rte_node_t count, val;
+ rte_graph_off_t off;
+ struct rte_node *node;
+ const struct rte_graph *graph = _graph->graph;
+ const char *name;
+
+ rte_graph_foreach_node(count, off, graph, node) {
+ for (val = 0; val < node->nb_edges; val++) {
+ name = (const char *)node->nodes[val];
+ node->nodes[val] = graph_node_name_to_ptr(graph, name);
+ if (node->nodes[val] == NULL)
+ set_err(EINVAL, fail, "%s not found", name);
+ }
+ }
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+static int
+graph_src_nodes_populate(struct graph *_graph)
+{
+ struct rte_graph *graph = _graph->graph;
+ struct graph_node *graph_node;
+ struct rte_node *node;
+ int32_t head = -1;
+ const char *name;
+
+ STAILQ_FOREACH(graph_node, &_graph->node_list, next) {
+ if (graph_node->node->flags & RTE_NODE_SOURCE_F) {
+ name = graph_node->node->name;
+ node = graph_node_name_to_ptr(graph, name);
+ if (node == NULL)
+ set_err(EINVAL, fail, "%s not found", name);
+
+ __rte_node_stream_alloc(graph, node);
+ graph->cir_start[head--] = node->off;
+ }
+ }
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+static int
+graph_fp_mem_populate(struct graph *graph)
+{
+ int rc;
+
+ graph_header_popluate(graph);
+ graph_nodes_populate(graph);
+ rc = graph_node_nexts_populate(graph);
+ rc |= graph_src_nodes_populate(graph);
+
+ return rc;
+}
+
+int
+graph_fp_mem_create(struct graph *graph)
+{
+ const struct rte_memzone *mz;
+ size_t sz;
+
+ sz = graph_fp_mem_calc_size(graph);
+ mz = rte_memzone_reserve(graph->name, sz, graph->socket, 0);
+ if (mz == NULL)
+ set_err(ENOMEM, fail, "memzone %s reserve failed", graph->name);
+
+ graph->graph = mz->addr;
+ graph->mz = mz;
+
+ return graph_fp_mem_populate(graph);
+fail:
+ return -rte_errno;
+}
+
+static void
+graph_nodes_mem_destroy(struct rte_graph *graph)
+{
+ rte_node_t count;
+ rte_graph_off_t off;
+ struct rte_node *node;
+
+ if (graph == NULL)
+ return;
+
+ rte_graph_foreach_node(count, off, graph, node)
+ rte_free(node->objs);
+}
+
+int
+graph_fp_mem_destroy(struct graph *graph)
+{
+ graph_nodes_mem_destroy(graph->graph);
+ return rte_memzone_free(graph->mz);
+}
diff --git a/lib/librte_graph/graph_private.h b/lib/librte_graph/graph_private.h
new file mode 100644
index 000000000..47c84bb15
--- /dev/null
+++ b/lib/librte_graph/graph_private.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _RTE_GRAPH_PRIVATE_H_
+#define _RTE_GRAPH_PRIVATE_H_
+
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+
+extern int rte_graph_logtype;
+
+#define GRAPH_LOG(level, ...) \
+ rte_log(RTE_LOG_ ## level, rte_graph_logtype, \
+ RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__,) \
+ "\n", __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define graph_err(...) GRAPH_LOG(ERR, __VA_ARGS__)
+#define graph_info(...) GRAPH_LOG(INFO, __VA_ARGS__)
+#define graph_dbg(...) GRAPH_LOG(DEBUG, __VA_ARGS__)
+
+#define id_check(id, id_max) do { \
+ if ((id) >= (id_max)) { \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+#define set_err(err, where, fmt, ...) do { \
+ graph_err(fmt, ##__VA_ARGS__); \
+ rte_errno = err; \
+ goto where; \
+} while (0)
+
+struct node {
+ STAILQ_ENTRY(node) next;
+ char name[RTE_NODE_NAMESIZE]; /* Name of the node. */
+ uint64_t flags; /* Node configuration flag */
+ rte_node_process_t process; /* Node process function */
+ rte_node_init_t init; /* Node init function */
+ rte_node_fini_t fini; /* Node fini function */
+ rte_node_t id; /* Allocated ID for this node */
+ rte_node_t parent_id; /* Id of parent node */
+ rte_edge_t nb_edges; /* Number of edges from this node */
+ char next_nodes[][RTE_NODE_NAMESIZE]; /* Names of next nodes. */
+};
+
+struct graph_node {
+ STAILQ_ENTRY(graph_node) next;
+ struct node *node;
+ bool visited;
+ struct graph_node *adjacency_list[];
+};
+
+struct graph {
+ STAILQ_ENTRY(graph) next; /* List of graphs */
+ char name[RTE_GRAPH_NAMESIZE]; /* Name of the graph. */
+ const struct rte_memzone *mz;
+ rte_graph_off_t nodes_start;
+ rte_node_t src_node_count;
+ struct rte_graph *graph;
+ rte_node_t node_count;
+ uint32_t cir_start;
+ uint32_t cir_mask;
+ rte_graph_t id;
+ size_t mem_sz;
+ int socket;
+ STAILQ_HEAD(gnode_list, graph_node) node_list;/* Nodes in a graph */
+};
+
+/* Node functions */
+STAILQ_HEAD(node_head, node);
+struct node_head *node_list_head_get(void);
+struct node *node_from_name(const char *name);
+
+/* Graph list functions */
+STAILQ_HEAD(graph_head, graph);
+struct graph_head *graph_list_head_get(void);
+
+/* Lock functions */
+void graph_spinlock_lock(void);
+void graph_spinlock_unlock(void);
+
+/* Graph operations */
+int graph_bfs(struct graph *graph, struct graph_node *start);
+int graph_has_isolated_node(struct graph *graph);
+int graph_node_has_edge_to_src_node(struct graph *graph);
+int graph_node_has_loop_edge(struct graph *graph);
+rte_node_t graph_src_nodes_count(struct graph *graph);
+rte_node_t graph_nodes_count(struct graph *graph);
+void graph_mark_nodes_as_not_visited(struct graph *graph);
+
+/* Fast path graph memory populate functions */
+int graph_fp_mem_create(struct graph *graph);
+int graph_fp_mem_destroy(struct graph *graph);
+
+/* Lookup functions */
+struct rte_node *
+graph_node_id_to_ptr(const struct rte_graph *graph, rte_node_t id);
+struct rte_node *
+graph_node_name_to_ptr(const struct rte_graph *graph, const char *node_name);
+
+/* Debug functions */
+void graph_dump(FILE *f, struct graph *g);
+void node_dump(FILE *f, struct node *n);
+
+#endif /* _RTE_GRAPH_PRIVATE_H_ */
diff --git a/lib/librte_graph/graph_stats.c b/lib/librte_graph/graph_stats.c
new file mode 100644
index 000000000..0dcbee8a8
--- /dev/null
+++ b/lib/librte_graph/graph_stats.c
@@ -0,0 +1,396 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <fnmatch.h>
+
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+
+#include "graph_private.h"
+
+/* Capture all graphs of cluster */
+struct cluster {
+ rte_graph_t nb_graphs;
+ rte_graph_t size;
+
+ struct graph **graphs;
+};
+
+/* Capture same node ID across cluster */
+struct cluster_node {
+ struct rte_graph_cluster_node_stats stat;
+ rte_node_t nb_nodes;
+
+ struct rte_node *nodes[];
+};
+
+struct rte_graph_cluster_stats {
+ /* Header */
+ rte_graph_cluster_stats_cb_t fn;
+ uint32_t cluster_node_size; /* Size of struct cluster_node */
+ rte_node_t max_nodes;
+ int socket_id;
+ void *cookie;
+ size_t sz;
+
+ struct cluster_node clusters[];
+} __rte_cache_aligned;
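+
+/*
+ * clusters[] is a variable-stride array: each element occupies
+ * cluster_node_size bytes (struct cluster_node plus one rte_node pointer
+ * per graph, cache-line aligned), so it is walked with RTE_PTR_ADD()
+ * rather than plain indexing.
+ */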
+
+#define border() fprintf(f, "+-------------------------------+---------------+---------------+---------------+---------------+---------------+-----------+\n")
+
+static inline void
+print_banner(FILE *f)
+{
+	border();
+ fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s\n", "|Node", "|calls", "|objs", "|realloc_count", "|objs/call", "|objs/sec(10E6)", "|cycles/call|");
+	border();
+}
+
+static inline void
+print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
+{
+ double objs_per_call, objs_per_sec, cycles_per_call, ts_per_hz;
+ const uint64_t prev_calls = stat->prev_calls;
+ const uint64_t prev_objs = stat->prev_objs;
+ const uint64_t cycles = stat->cycles;
+ const uint64_t calls = stat->calls;
+ const uint64_t objs = stat->objs;
+ uint64_t call_delta;
+
+ call_delta = calls - prev_calls;
+	objs_per_call = call_delta ?
+		(double)(objs - prev_objs) / call_delta : 0;
+	cycles_per_call = call_delta ?
+		(double)(cycles - stat->prev_cycles) / call_delta : 0;
+	ts_per_hz = (double)(stat->ts - stat->prev_ts) / stat->hz;
+ objs_per_sec = ts_per_hz ? (objs - prev_objs) / ts_per_hz : 0;
+ objs_per_sec /= 1000000;
+
+ fprintf(f, "|%-31s|%-15"PRIu64"|%-15"PRIu64"|%-15"PRIu64"|%-15.3f|%-15.6f|%-11.4f|\n",
+ stat->name, calls, objs, stat->realloc_count, objs_per_call,
+ objs_per_sec, cycles_per_call);
+}
+
+static int
+graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie,
+ const struct rte_graph_cluster_node_stats *stat)
+{
+ FILE *f = cookie;
+
+ if (unlikely(is_first))
+ print_banner(f);
+ if (stat->objs)
+ print_node(f, stat);
+ if (unlikely(is_last))
+		border();
+
+ return 0;
+}
+
+static struct rte_graph_cluster_stats *
+stats_mem_init(struct cluster *cluster,
+ const struct rte_graph_cluster_stats_param *prm)
+{
+ size_t sz = sizeof(struct rte_graph_cluster_stats);
+ struct rte_graph_cluster_stats *stats;
+ rte_graph_cluster_stats_cb_t fn;
+ int socket_id = prm->socket_id;
+ uint32_t cluster_node_size;
+
+ /* Fixup callback */
+ fn = prm->fn;
+ if (fn == NULL)
+ fn = graph_cluster_stats_cb;
+
+ cluster_node_size = sizeof(struct cluster_node);
+	/* A cluster node holds at most one rte_node per graph */
+ cluster_node_size += cluster->nb_graphs * sizeof(struct rte_node *);
+ cluster_node_size = RTE_ALIGN(cluster_node_size, RTE_CACHE_LINE_SIZE);
+
+	stats = realloc(NULL, sz);
+	if (stats) {
+		memset(stats, 0, sz);
+ stats->fn = fn;
+ stats->cluster_node_size = cluster_node_size;
+ stats->max_nodes = 0;
+ stats->socket_id = socket_id;
+ stats->cookie = prm->cookie;
+ stats->sz = sz;
+ }
+
+ return stats;
+}
+
+static int
+stats_mem_populate(struct rte_graph_cluster_stats **stats_in,
+ struct rte_graph *graph, struct graph_node *graph_node)
+{
+ struct rte_graph_cluster_stats *stats = *stats_in;
+ rte_node_t id = graph_node->node->id;
+ struct cluster_node *cluster;
+ struct rte_node *node;
+ rte_node_t count;
+
+ cluster = stats->clusters;
+
+ /* Iterate over cluster node array to find node ID match */
+ for (count = 0; count < stats->max_nodes; count++) {
+ /* Found an existing node in the reel */
+ if (cluster->stat.id == id) {
+ node = graph_node_id_to_ptr(graph, id);
+ if (node == NULL)
+ set_err(ENOKEY, err,
+ "failed to find node %s in graph %s",
+ graph_node->node->name, graph->name);
+
+ cluster->nodes[cluster->nb_nodes++] = node;
+ return 0;
+ }
+ cluster = RTE_PTR_ADD(cluster, stats->cluster_node_size);
+ }
+
+ /* Hey, it is a new node, allocate space for it in the reel */
+ stats = realloc(stats, stats->sz + stats->cluster_node_size);
+ if (stats == NULL)
+ set_err(ENOMEM, err, "realloc failed");
+
+ /* Clear the new struct cluster_node area */
+	cluster = RTE_PTR_ADD(stats, stats->sz);
+ memset(cluster, 0, stats->cluster_node_size);
+ memcpy(cluster->stat.name, graph_node->node->name, RTE_NODE_NAMESIZE);
+ cluster->stat.id = graph_node->node->id;
+ cluster->stat.hz = rte_get_timer_hz();
+ node = graph_node_id_to_ptr(graph, id);
+ if (node == NULL)
+ set_err(ENOKEY, err, "failed to find node %s in graph %s",
+ graph_node->node->name, graph->name);
+ cluster->nodes[cluster->nb_nodes++] = node;
+
+ stats->sz += stats->cluster_node_size;
+ stats->max_nodes++;
+ *stats_in = stats;
+
+ return 0;
+err:
+ return -rte_errno;
+}
+
+static void
+stats_mem_fini(struct rte_graph_cluster_stats *stats)
+{
+ free(stats);
+}
+
+static void
+cluster_init(struct cluster *cluster)
+{
+ memset(cluster, 0, sizeof(*cluster));
+}
+
+static int
+cluster_add(struct cluster *cluster, struct graph *graph)
+{
+ rte_graph_t count;
+ size_t sz;
+
+	/* Skip if the graph is already added to the cluster */
+ for (count = 0; count < cluster->nb_graphs; count++)
+ if (cluster->graphs[count] == graph)
+ return 0;
+
+	/* Expand the cluster if required to store graph objects */
+	if (cluster->nb_graphs + 1 > cluster->size) {
+		struct graph **graphs;
+
+		cluster->size = RTE_MAX(1, cluster->size * 2);
+		sz = sizeof(struct graph *) * cluster->size;
+		graphs = realloc(cluster->graphs, sz);
+		if (graphs == NULL)
+			set_err(ENOMEM, free, "failed to realloc");
+		cluster->graphs = graphs;
+	}
+
+ /* Add graph to cluster */
+ cluster->graphs[cluster->nb_graphs++] = graph;
+ return 0;
+
+free:
+ return -rte_errno;
+}
+
+static void
+cluster_fini(struct cluster *cluster)
+{
+ if (cluster->graphs)
+ free(cluster->graphs);
+}
+
+static int
+expand_pattern_to_cluster(struct cluster *cluster, const char *pattern)
+{
+ struct graph_head *graph_head = graph_list_head_get();
+ struct graph *graph;
+ bool found = false;
+
+ /* Check for pattern match */
+ STAILQ_FOREACH(graph, graph_head, next) {
+ if (fnmatch(pattern, graph->name, 0) == 0) {
+ if (cluster_add(cluster, graph))
+ goto fail;
+ found = true;
+ }
+ }
+ if (found == false)
+		set_err(EFAULT, fail, "no graph found for pattern %s",
+			pattern);
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_graph_cluster_stats *
+rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm)
+{
+ struct rte_graph_cluster_stats *stats, *rc = NULL;
+ struct graph_node *graph_node;
+ struct cluster cluster;
+ struct graph *graph;
+ const char *pattern;
+ rte_graph_t i;
+
+ /* Sanity checks */
+ if (!rte_graph_has_stats_feature())
+ set_err(EINVAL, fail, "stats feature is not enabled");
+
+ if (prm == NULL)
+ set_err(EINVAL, fail, "invalid param");
+
+ if (prm->graph_patterns == NULL || prm->nb_graph_patterns == 0)
+ set_err(EINVAL, fail, "invalid graph param");
+
+ cluster_init(&cluster);
+
+ graph_spinlock_lock();
+ /* Expand graph pattern and add the graph to the cluster */
+ for (i = 0; i < prm->nb_graph_patterns; i++) {
+ pattern = prm->graph_patterns[i];
+ if (expand_pattern_to_cluster(&cluster, pattern))
+ goto bad_pattern;
+ }
+
+ /* Alloc the stats memory */
+ stats = stats_mem_init(&cluster, prm);
+ if (stats == NULL)
+		set_err(ENOMEM, bad_pattern, "failed to alloc stats memory");
+
+	/* Iterate over M (graphs) x N (nodes per graph) */
+ for (i = 0; i < cluster.nb_graphs; i++) {
+ graph = cluster.graphs[i];
+ STAILQ_FOREACH(graph_node, &graph->node_list, next) {
+ struct rte_graph *graph_fp = graph->graph;
+ if (stats_mem_populate(&stats, graph_fp, graph_node))
+ goto realloc_fail;
+ }
+ }
+
+ /* Finally copy to hugepage memory to avoid pressure on rte_realloc */
+ rc = rte_malloc_socket(NULL, stats->sz, 0, stats->socket_id);
+ if (rc)
+ rte_memcpy(rc, stats, stats->sz);
+ else
+ set_err(ENOMEM, realloc_fail, "rte_malloc failed");
+
+realloc_fail:
+ stats_mem_fini(stats);
+bad_pattern:
+ graph_spinlock_unlock();
+ cluster_fini(&cluster);
+fail:
+ return rc;
+}
+
+void
+rte_graph_cluster_stats_destroy(struct rte_graph_cluster_stats *stat)
+{
+	rte_free(stat);
+}
+
+static inline void
+cluster_node_aggregate_stats(struct cluster_node *cluster)
+{
+ uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0;
+ struct rte_graph_cluster_node_stats *stat = &cluster->stat;
+ struct rte_node *node;
+ rte_node_t count;
+
+ for (count = 0; count < cluster->nb_nodes; count++) {
+ node = cluster->nodes[count];
+
+ calls += node->total_calls;
+ objs += node->total_objs;
+ cycles += node->total_cycles;
+ realloc_count += node->realloc_count;
+ }
+
+ stat->calls = calls;
+ stat->objs = objs;
+ stat->cycles = cycles;
+ stat->ts = rte_get_timer_cycles();
+ stat->realloc_count = realloc_count;
+}
+
+static inline void
+cluster_node_store_prev_stats(struct cluster_node *cluster)
+{
+ struct rte_graph_cluster_node_stats *stat = &cluster->stat;
+
+ stat->prev_ts = stat->ts;
+ stat->prev_calls = stat->calls;
+ stat->prev_objs = stat->objs;
+ stat->prev_cycles = stat->cycles;
+}
+
+void
+rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
+{
+ struct cluster_node *cluster;
+ rte_node_t count;
+ int rc = 0;
+
+ cluster = stat->clusters;
+
+ for (count = 0; count < stat->max_nodes; count++) {
+		cluster_node_aggregate_stats(cluster);
+ if (!skip_cb)
+ rc = stat->fn(!count, (count == stat->max_nodes - 1),
+ stat->cookie, &cluster->stat);
+ cluster_node_store_prev_stats(cluster);
+ if (rc)
+ break;
+ cluster = RTE_PTR_ADD(cluster, stat->cluster_node_size);
+ }
+}
+
+void
+rte_graph_cluster_stats_reset(struct rte_graph_cluster_stats *stat)
+{
+ struct cluster_node *cluster;
+ rte_node_t count;
+
+ cluster = stat->clusters;
+
+ for (count = 0; count < stat->max_nodes; count++) {
+ struct rte_graph_cluster_node_stats *node = &cluster->stat;
+
+ node->ts = 0;
+ node->calls = 0;
+ node->objs = 0;
+ node->cycles = 0;
+ node->prev_ts = 0;
+ node->prev_calls = 0;
+ node->prev_objs = 0;
+ node->prev_cycles = 0;
+ node->realloc_count = 0;
+ cluster = RTE_PTR_ADD(cluster, stat->cluster_node_size);
+ }
+}
diff --git a/lib/librte_graph/meson.build b/lib/librte_graph/meson.build
new file mode 100644
index 000000000..929a17f84
--- /dev/null
+++ b/lib/librte_graph/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Marvell International Ltd.
+#
+
+name = 'graph'
+
+sources = files('node.c', 'graph.c', 'graph_ops.c', 'graph_debug.c',
+		'graph_stats.c', 'graph_populate.c')
+headers = files('rte_graph.h', 'rte_graph_worker.h')
+allow_experimental_apis = true
+
+deps += ['eal']
diff --git a/lib/librte_graph/node.c b/lib/librte_graph/node.c
new file mode 100644
index 000000000..503721943
--- /dev/null
+++ b/lib/librte_graph/node.c
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_errno.h>
+#include <rte_string_fns.h>
+
+#include "graph_private.h"
+
+static struct node_head node_list = STAILQ_HEAD_INITIALIZER(node_list);
+static rte_node_t node_id;
+
+#define node_id_check(id) id_check(id, node_id)
+
+/* Private functions */
+struct node_head *
+node_list_head_get(void)
+{
+ return &node_list;
+}
+
+struct node *
+node_from_name(const char *name)
+{
+ struct node *node;
+
+ STAILQ_FOREACH(node, &node_list, next)
+ if (strncmp(node->name, name, RTE_NODE_NAMESIZE) == 0)
+ return node;
+
+ return NULL;
+}
+
+static bool
+node_has_duplicate_entry(const char *name)
+{
+ struct node *node;
+
+	/* Is the name already registered? */
+ STAILQ_FOREACH(node, &node_list, next) {
+ if (strncmp(node->name, name, RTE_NODE_NAMESIZE) == 0) {
+ rte_errno = EEXIST;
+ return 1;
+ }
+ }
+ return 0;
+}
+
+/* Public functions */
+rte_node_t
+__rte_node_register(const struct rte_node_register *reg)
+{
+ struct node *node;
+ rte_edge_t i;
+ size_t sz;
+
+ RTE_BUILD_BUG_ON((offsetof(struct rte_node, nodes) -
+ offsetof(struct rte_node, ctx))
+ != RTE_CACHE_LINE_MIN_SIZE);
+
+ graph_spinlock_lock();
+
+ /* Check sanity */
+ if (reg == NULL || reg->process == NULL) {
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ /* Check for duplicate name */
+ if (node_has_duplicate_entry(reg->name))
+ goto fail;
+
+ sz = sizeof(struct node) + (reg->nb_edges * RTE_NODE_NAMESIZE);
+ node = calloc(1, sz);
+ if (node == NULL) {
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ /* Initialize the node */
+ if (rte_strscpy(node->name, reg->name, RTE_NODE_NAMESIZE) < 0) {
+ rte_errno = E2BIG;
+ goto free;
+ }
+ node->flags = reg->flags;
+ node->process = reg->process;
+ node->init = reg->init;
+ node->fini = reg->fini;
+ node->nb_edges = reg->nb_edges;
+ node->parent_id = reg->parent_id;
+ for (i = 0; i < reg->nb_edges; i++) {
+ if (rte_strscpy(node->next_nodes[i], reg->next_nodes[i],
+ RTE_NODE_NAMESIZE) < 0) {
+ rte_errno = E2BIG;
+ goto free;
+ }
+ }
+
+ node->id = node_id++;
+
+ /* Add the node at tail */
+ STAILQ_INSERT_TAIL(&node_list, node, next);
+ graph_spinlock_unlock();
+
+ return node->id;
+free:
+ free(node);
+fail:
+ graph_spinlock_unlock();
+ return RTE_NODE_ID_INVALID;
+}
+
+static int
+clone_name(struct rte_node_register *reg, struct node *node, const char *name)
+{
+ ssize_t sz, rc;
+
+#define SZ RTE_NODE_NAMESIZE
+ rc = rte_strscpy(reg->name, node->name, SZ);
+ if (rc < 0)
+ goto fail;
+ sz = rc;
+ rc = rte_strscpy(reg->name + sz, "-", RTE_MAX((int16_t)(SZ - sz), 0));
+ if (rc < 0)
+ goto fail;
+ sz += rc;
+ sz = rte_strscpy(reg->name + sz, name, RTE_MAX((int16_t)(SZ - sz), 0));
+ if (sz < 0)
+ goto fail;
+
+ return 0;
+fail:
+ rte_errno = E2BIG;
+ return -rte_errno;
+}
+
+static rte_node_t
+node_clone(struct node *node, const char *name)
+{
+ rte_node_t rc = RTE_NODE_ID_INVALID;
+ struct rte_node_register *reg;
+ rte_edge_t i;
+
+	/* Don't allow cloning a node that is itself a clone */
+ if (node->parent_id != RTE_NODE_ID_INVALID) {
+ rte_errno = EEXIST;
+ goto fail;
+ }
+
+ /* Check for duplicate name */
+ if (node_has_duplicate_entry(name))
+ goto fail;
+
+ reg = calloc(1, sizeof(*reg) + (sizeof(char *) * node->nb_edges));
+ if (reg == NULL) {
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ /* Clone the source node */
+ reg->flags = node->flags;
+ reg->process = node->process;
+ reg->init = node->init;
+ reg->fini = node->fini;
+ reg->nb_edges = node->nb_edges;
+ reg->parent_id = node->id;
+
+ for (i = 0; i < node->nb_edges; i++)
+ reg->next_nodes[i] = node->next_nodes[i];
+
+ /* Naming ceremony of the new node. name is node->name + "-" + name */
+ if (clone_name(reg, node, name))
+ goto free;
+
+ rc = __rte_node_register(reg);
+free:
+ free(reg);
+fail:
+ return rc;
+}
+
+rte_node_t
+rte_node_clone(rte_node_t id, const char *name)
+{
+ struct node *node;
+
+ node_id_check(id);
+ STAILQ_FOREACH(node, &node_list, next)
+ if (node->id == id)
+ return node_clone(node, name);
+
+fail:
+ return RTE_NODE_ID_INVALID;
+}
+
+rte_node_t
+rte_node_from_name(const char *name)
+{
+ struct node *node;
+
+ STAILQ_FOREACH(node, &node_list, next)
+ if (strncmp(node->name, name, RTE_NODE_NAMESIZE) == 0)
+ return node->id;
+
+ return RTE_NODE_ID_INVALID;
+}
+
+char *
+rte_node_id_to_name(rte_node_t id)
+{
+ struct node *node;
+
+ node_id_check(id);
+ STAILQ_FOREACH(node, &node_list, next)
+ if (node->id == id)
+ return node->name;
+
+fail:
+ return NULL;
+}
+
+rte_edge_t
+rte_node_edge_count(rte_node_t id)
+{
+ struct node *node;
+
+ node_id_check(id);
+ STAILQ_FOREACH(node, &node_list, next)
+ if (node->id == id)
+ return node->nb_edges;
+fail:
+ return RTE_EDGE_ID_INVALID;
+}
+
+static rte_edge_t
+edge_update(struct node *node, struct node *prev, rte_edge_t from,
+ const char **next_nodes, rte_edge_t nb_edges)
+{
+ rte_edge_t i, max_edges, count = 0;
+ struct node *new_node;
+ bool need_realloc;
+ size_t sz;
+
+ if (from == RTE_EDGE_ID_INVALID)
+ from = node->nb_edges;
+
+	/* Don't create a hole in the next_nodes[] list */
+ if (from > node->nb_edges) {
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ /* Remove me from list */
+ STAILQ_REMOVE(&node_list, node, node, next);
+
+ /* Allocate the storage space for new node if required */
+ max_edges = from + nb_edges;
+ need_realloc = max_edges > node->nb_edges;
+ if (need_realloc) {
+ sz = sizeof(struct node) + (max_edges * RTE_NODE_NAMESIZE);
+ new_node = realloc(node, sz);
+ if (new_node == NULL) {
+ rte_errno = ENOMEM;
+ goto restore;
+ } else {
+ node = new_node;
+ }
+ }
+
+	/* Update the names of the new edges */
+ for (i = from; i < max_edges; i++, count++) {
+ if (rte_strscpy(node->next_nodes[i], next_nodes[count],
+ RTE_NODE_NAMESIZE) < 0) {
+ rte_errno = E2BIG;
+ goto restore;
+ }
+ }
+restore:
+	/* Re-link the node after prev; realloc may have moved it */
+ if (prev)
+ STAILQ_INSERT_AFTER(&node_list, prev, node, next);
+ else
+ STAILQ_INSERT_HEAD(&node_list, node, next);
+
+ if (need_realloc)
+ node->nb_edges += count;
+
+fail:
+ return count;
+}
+
+rte_edge_t
+rte_node_edge_shrink(rte_node_t id, rte_edge_t size)
+{
+ rte_edge_t rc = RTE_EDGE_ID_INVALID;
+ struct node *node;
+
+ node_id_check(id);
+ graph_spinlock_lock();
+
+ STAILQ_FOREACH(node, &node_list, next) {
+ if (node->id == id) {
+ if (node->nb_edges < size) {
+ rte_errno = E2BIG;
+ goto fail;
+ }
+ node->nb_edges = size;
+ rc = size;
+ break;
+ }
+ }
+
+fail:
+ graph_spinlock_unlock();
+ return rc;
+}
+
+rte_edge_t
+rte_node_edge_update(rte_node_t id, rte_edge_t from,
+ const char **next_nodes, uint16_t nb_edges)
+{
+ rte_edge_t rc = RTE_EDGE_ID_INVALID;
+ struct node *n, *prev;
+
+ node_id_check(id);
+ graph_spinlock_lock();
+
+ prev = NULL;
+ STAILQ_FOREACH(n, &node_list, next) {
+ if (n->id == id) {
+ rc = edge_update(n, prev, from, next_nodes, nb_edges);
+ break;
+ }
+ prev = n;
+ }
+
+ graph_spinlock_unlock();
+fail:
+ return rc;
+}
+
+static rte_node_t
+node_copy_edges(struct node *node, char *next_nodes[])
+{
+ rte_edge_t i;
+
+ for (i = 0; i < node->nb_edges; i++)
+ next_nodes[i] = node->next_nodes[i];
+
+ return i;
+}
+
+rte_node_t
+rte_node_edge_get(rte_node_t id, char *next_nodes[])
+{
+ rte_node_t rc = RTE_NODE_ID_INVALID;
+ struct node *node;
+
+ node_id_check(id);
+ graph_spinlock_lock();
+
+ STAILQ_FOREACH(node, &node_list, next) {
+ if (node->id == id) {
+ if (next_nodes == NULL)
+ rc = sizeof(char *) * node->nb_edges;
+ else
+ rc = node_copy_edges(node, next_nodes);
+ break;
+ }
+ }
+
+ graph_spinlock_unlock();
+fail:
+ return rc;
+}
+
+static void
+node_scan_dump(FILE *f, rte_node_t id, bool all)
+{
+ struct node *node;
+
+ RTE_ASSERT(f != NULL);
+ node_id_check(id);
+
+ STAILQ_FOREACH(node, &node_list, next) {
+ if (all == true) {
+ node_dump(f, node);
+ } else if (node->id == id) {
+ node_dump(f, node);
+ return;
+ }
+ }
+fail:
+ return;
+}
+
+void
+rte_node_dump(FILE *f, rte_node_t id)
+{
+ node_scan_dump(f, id, false);
+}
+
+void
+rte_node_list_dump(FILE *f)
+{
+ node_scan_dump(f, 0, true);
+}
+
+rte_node_t
+rte_node_max_count(void)
+{
+ return node_id;
+}
diff --git a/lib/librte_graph/rte_graph.h b/lib/librte_graph/rte_graph.h
new file mode 100644
index 000000000..aa6df76dd
--- /dev/null
+++ b/lib/librte_graph/rte_graph.h
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _RTE_GRAPH_H_
+#define _RTE_GRAPH_H_
+
+/**
+ * @file rte_graph.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE GRAPH support.
+ * librte_graph provides a framework to <fill the remaining>
+ * and still needs rte_experimental markers to be added.
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_compat.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_GRAPH_NAMESIZE 64 /**< Max length of graph name. */
+#define RTE_NODE_NAMESIZE 64 /**< Max length of node name. */
+#define RTE_GRAPH_OFF_INVALID UINT32_MAX
+#define RTE_NODE_ID_INVALID UINT32_MAX
+#define RTE_EDGE_ID_INVALID UINT16_MAX
+#define RTE_GRAPH_ID_INVALID UINT16_MAX
+#define RTE_GRAPH_FENCE 0xdeadbeef12345678ULL
+
+typedef uint32_t rte_graph_off_t;
+typedef uint32_t rte_node_t;
+typedef uint16_t rte_edge_t;
+typedef uint16_t rte_graph_t;
+
+/** Burst size in terms of log2 */
+#if RTE_GRAPH_BURST_SIZE == 1
+#define RTE_GRAPH_BURST_SIZE_LOG2 0
+#elif RTE_GRAPH_BURST_SIZE == 2
+#define RTE_GRAPH_BURST_SIZE_LOG2 1
+#elif RTE_GRAPH_BURST_SIZE == 4
+#define RTE_GRAPH_BURST_SIZE_LOG2 2
+#elif RTE_GRAPH_BURST_SIZE == 8
+#define RTE_GRAPH_BURST_SIZE_LOG2 3
+#elif RTE_GRAPH_BURST_SIZE == 16
+#define RTE_GRAPH_BURST_SIZE_LOG2 4
+#elif RTE_GRAPH_BURST_SIZE == 32
+#define RTE_GRAPH_BURST_SIZE_LOG2 5
+#elif RTE_GRAPH_BURST_SIZE == 64
+#define RTE_GRAPH_BURST_SIZE_LOG2 6
+#elif RTE_GRAPH_BURST_SIZE == 128
+#define RTE_GRAPH_BURST_SIZE_LOG2 7
+#elif RTE_GRAPH_BURST_SIZE == 256
+#define RTE_GRAPH_BURST_SIZE_LOG2 8
+#else
+#error "Unsupported burst size"
+#endif
+
+/* Forward declaration */
+struct rte_node;
+struct rte_graph;
+struct rte_graph_cluster_stats;
+struct rte_graph_cluster_node_stats;
+
+typedef uint16_t (*rte_node_process_t)
+ (struct rte_graph *graph, struct rte_node *node, void **o, uint16_t nb);
+typedef int (*rte_node_init_t)
+ (const struct rte_graph *graph, struct rte_node *node);
+typedef void (*rte_node_fini_t)
+ (const struct rte_graph *graph, struct rte_node *node);
+typedef int (*rte_graph_cluster_stats_cb_t)(bool is_first, bool is_last,
+ void *cookie, const struct rte_graph_cluster_node_stats *);
+
+/**
+ * Configuration parameters for creating the graph.
+ */
+struct rte_graph_param {
+ int socket_id;
+ uint16_t nb_node_patterns;
+ const char **node_patterns;
+};
+
+struct rte_graph_cluster_stats_param {
+ int socket_id;
+	/* NULL is allowed; the default print stats function is then used */
+ rte_graph_cluster_stats_cb_t fn;
+ RTE_STD_C11
+ union {
+ void *cookie;
+ FILE *f; /* Where to dump the stats when fn == NULL */
+ };
+ uint16_t nb_graph_patterns;
+ const char **graph_patterns;
+};
+
+/* Stats functions */
+struct rte_graph_cluster_node_stats {
+ uint64_t ts;
+ uint64_t calls;
+ uint64_t objs;
+ uint64_t cycles;
+
+ uint64_t prev_ts;
+ uint64_t prev_calls;
+ uint64_t prev_objs;
+ uint64_t prev_cycles;
+
+ uint64_t realloc_count;
+
+ rte_node_t id;
+ uint64_t hz;
+ char name[RTE_NODE_NAMESIZE];
+} __rte_cache_aligned;
+
+/** Structure defines the node registration parameters */
+struct rte_node_register {
+ char name[RTE_NODE_NAMESIZE]; /**< Name of the node. */
+ uint64_t flags; /**< Node configuration flag */
+#define RTE_NODE_SOURCE_F (1ULL << 0)
+ RTE_STD_C11
+ rte_node_process_t process; /**< Node process function */
+ rte_node_init_t init;
+ rte_node_fini_t fini;
+ rte_node_t id; /* out */
+ rte_node_t parent_id; /* Id of parent node */
+ rte_edge_t nb_edges; /**< Number of edges from this node */
+ const char *next_nodes[]; /* Names of next nodes. */
+};
+
+/* Graph create functions */
+__rte_experimental
+rte_graph_t rte_graph_create(const char *name, struct rte_graph_param *prm);
+__rte_experimental
+rte_graph_t rte_graph_destroy(const char *name);
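+
+/*
+ * Illustrative creation sketch (the graph name and node patterns are
+ * assumptions, not part of this API):
+ *
+ *	static const char *patterns[] = {"ethdev_rx-*", "ip4*",
+ *					 "ethdev_tx-*", "pkt_drop"};
+ *	struct rte_graph_param prm = {
+ *		.socket_id = SOCKET_ID_ANY,
+ *		.nb_node_patterns = RTE_DIM(patterns),
+ *		.node_patterns = patterns,
+ *	};
+ *	rte_graph_t id = rte_graph_create("worker0", &prm);
+ */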
+
+/* Lookup functions */
+__rte_experimental
+rte_graph_t rte_graph_from_name(const char *name);
+__rte_experimental
+char *rte_graph_id_to_name(rte_graph_t id);
+
+/* Export the graph as graphviz dot file */
+__rte_experimental
+rte_graph_t rte_graph_export(const char *name, FILE *f);
+
+/* Get graph object for fast path in worker for walk */
+__rte_experimental
+struct rte_graph *rte_graph_lookup(const char *name);
+
+/* Get graph max count */
+__rte_experimental
+rte_graph_t rte_graph_max_count(void);
+
+/* Debug functions */
+__rte_experimental
+void rte_graph_dump(FILE *f, rte_graph_t id);
+__rte_experimental
+void rte_graph_list_dump(FILE *f);
+__rte_experimental
+void rte_graph_obj_dump(FILE *f, struct rte_graph *graph, bool all);
+
+/* API to get rte_node* after the graph creation */
+#define rte_graph_foreach_node(count, off, graph, node) \
+ for (count = 0, off = graph->nodes_start, \
+ node = RTE_PTR_ADD(graph, off); \
+ count < graph->nb_nodes; \
+ off = node->next, node = RTE_PTR_ADD(graph, off), count++)
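+
+/*
+ * Illustrative iteration sketch:
+ *
+ *	rte_node_t count;
+ *	rte_graph_off_t off;
+ *	struct rte_node *node;
+ *
+ *	rte_graph_foreach_node(count, off, graph, node)
+ *		printf("%s\n", node->name);
+ */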
+
+/* Lookup functions */
+__rte_experimental
+struct rte_node *rte_graph_node_get(rte_graph_t graph_id, uint32_t node_id);
+__rte_experimental
+struct rte_node *rte_graph_node_get_by_name(const char *graph,
+ const char *name);
+
+__rte_experimental
+struct rte_graph_cluster_stats *
+rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm);
+__rte_experimental
+void rte_graph_cluster_stats_destroy(struct rte_graph_cluster_stats *stat);
+__rte_experimental
+void rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat,
+ bool skip_cb);
+__rte_experimental
+void rte_graph_cluster_stats_reset(struct rte_graph_cluster_stats *stat);
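+
+/*
+ * Illustrative stats usage sketch (the "worker*" pattern is an
+ * assumption):
+ *
+ *	struct rte_graph_cluster_stats_param prm = {
+ *		.socket_id = SOCKET_ID_ANY,
+ *		.fn = NULL,
+ *		.f = stdout,
+ *		.nb_graph_patterns = 1,
+ *		.graph_patterns = (const char *[]){"worker*"},
+ *	};
+ *	struct rte_graph_cluster_stats *stats =
+ *		rte_graph_cluster_stats_create(&prm);
+ *
+ *	rte_graph_cluster_stats_get(stats, false);
+ *	rte_graph_cluster_stats_destroy(stats);
+ */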
+
+/* Node creation functions */
+__rte_experimental
+rte_node_t __rte_node_register(const struct rte_node_register *node);
+
+#define RTE_NODE_REGISTER(node) \
+RTE_INIT(rte_node_register_##node) \
+{ \
+ node.parent_id = RTE_NODE_ID_INVALID; \
+ node.id = __rte_node_register(&node); \
+}
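+
+/*
+ * Illustrative registration sketch (the node name and callbacks are
+ * hypothetical):
+ *
+ *	static uint16_t
+ *	my_node_process(struct rte_graph *graph, struct rte_node *node,
+ *			void **objs, uint16_t nb_objs);
+ *
+ *	static struct rte_node_register my_node = {
+ *		.name = "my_node",
+ *		.process = my_node_process,
+ *		.nb_edges = 1,
+ *		.next_nodes = {"pkt_drop"},
+ *	};
+ *	RTE_NODE_REGISTER(my_node);
+ */
+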
+__rte_experimental
+rte_node_t rte_node_clone(rte_node_t id, const char *name);
+
+/* Lookup functions */
+__rte_experimental
+rte_node_t rte_node_from_name(const char *name);
+__rte_experimental
+char *rte_node_id_to_name(rte_node_t id);
+
+/* Edge related functions */
+__rte_experimental
+rte_edge_t rte_node_edge_count(rte_node_t id);
+
+/* If from == RTE_EDGE_ID_INVALID then append at the end of the list.
+ * Returns the number of edges updated.
+ */
+__rte_experimental
+rte_edge_t rte_node_edge_update(rte_node_t id, rte_edge_t from,
+ const char **next_nodes, uint16_t nb_edges);
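+
+/*
+ * Illustrative sketch, appending one edge (the next node name is an
+ * assumption):
+ *
+ *	const char *next[] = {"pkt_drop"};
+ *
+ *	rte_node_edge_update(id, RTE_EDGE_ID_INVALID, next, 1);
+ */
+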
+__rte_experimental
+rte_edge_t rte_node_edge_shrink(rte_node_t id, rte_edge_t size);
+
+/* If next_nodes == NULL, return the size of the array,
+ * else the number of items copied.
+ */
+__rte_experimental
+rte_node_t rte_node_edge_get(rte_node_t id, char *next_nodes[]);
+
+/* Get node max count */
+__rte_experimental
+rte_node_t rte_node_max_count(void);
+
+/* Debug functions */
+__rte_experimental
+void rte_node_dump(FILE *f, rte_node_t id);
+__rte_experimental
+void rte_node_list_dump(FILE *f);
+
+void __rte_node_stream_alloc(struct rte_graph *graph, struct rte_node *node);
+void __rte_node_stream_alloc_size(struct rte_graph *graph,
+ struct rte_node *node, uint16_t req_size);
+
+/* Helper functions */
+static __rte_always_inline int
+rte_node_is_invalid(rte_node_t id)
+{
+ return (id == RTE_NODE_ID_INVALID);
+}
+
+static __rte_always_inline int
+rte_edge_is_invalid(rte_edge_t id)
+{
+ return (id == RTE_EDGE_ID_INVALID);
+}
+
+static __rte_always_inline int
+rte_graph_is_invalid(rte_graph_t id)
+{
+ return (id == RTE_GRAPH_ID_INVALID);
+}
+
+static __rte_always_inline int
+rte_graph_has_stats_feature(void)
+{
+#ifdef RTE_LIBRTE_GRAPH_STATS
+ return RTE_LIBRTE_GRAPH_STATS;
+#else
+ return 0;
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_GRAPH_H_ */
diff --git a/lib/librte_graph/rte_graph_version.map b/lib/librte_graph/rte_graph_version.map
new file mode 100644
index 000000000..e07763786
--- /dev/null
+++ b/lib/librte_graph/rte_graph_version.map
@@ -0,0 +1,46 @@
+EXPERIMENTAL {
+ global:
+
+ rte_graph_create;
+ rte_graph_destroy;
+ rte_graph_from_name;
+ rte_graph_id_to_name;
+ rte_graph_dump;
+ rte_graph_list_dump;
+ rte_graph_node_ctx;
+ rte_graph_node_ctx_by_name;
+ rte_graph_export;
+ rte_graph_lookup;
+ rte_graph_clone;
+ rte_graph_enqueue;
+ rte_graph_free;
+ rte_graph_walk;
+ rte_graph_logtype;
+ rte_graph_obj_dump;
+ rte_graph_max_count;
+ rte_graph_node_get;
+ rte_graph_node_get_by_name;
+
+ __rte_node_register;
+ __rte_node_stream_alloc;
+ __rte_node_stream_alloc_size;
+ rte_node_clone;
+
+ rte_node_from_name;
+ rte_node_id_to_name;
+ rte_node_edge_count;
+ rte_node_edge_update;
+ rte_node_edge_get;
+ rte_node_edge_shrink;
+ rte_node_dump;
+ rte_node_list_dump;
+ rte_node_max_count;
+
+ rte_graph_cluster_stats_create;
+ rte_graph_cluster_stats_destroy;
+ rte_graph_cluster_stats_get;
+ rte_graph_cluster_stats_reset;
+
+ local: *;
+};
diff --git a/lib/librte_graph/rte_graph_worker.h b/lib/librte_graph/rte_graph_worker.h
new file mode 100644
index 000000000..38c418e3d
--- /dev/null
+++ b/lib/librte_graph/rte_graph_worker.h
@@ -0,0 +1,280 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _RTE_GRAPH_WORKER_H_
+#define _RTE_GRAPH_WORKER_H_
+
+/**
+ * @file rte_graph_worker.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE GRAPH support.
+ * librte_graph provides a framework to <fill the remaining>
+ * and still needs rte_experimental markers to be added.
+ */
+
+#include <stdio.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_graph.h>
+#include <rte_prefetch.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_graph {
+ uint32_t tail;
+ uint32_t head;
+ uint32_t cir_mask;
+ rte_node_t nb_nodes;
+ rte_graph_off_t *cir_start;
+ rte_graph_off_t nodes_start;
+ rte_graph_t id;
+ int socket;
+ char name[RTE_GRAPH_NAMESIZE]; /* Name of the graph. */
+ uint64_t fence;
+} __rte_cache_aligned;
+
+struct rte_node {
+ /* Slow path area */
+ uint64_t fence; /* Fence */
+ rte_graph_off_t next; /* Index to next node */
+ rte_node_t id; /* ID */
+ rte_node_t parent_id; /* Parent ID */
+ rte_edge_t nb_edges; /* Number of edges from this node */
+ uint32_t realloc_count; /* Number of times realloced */
+
+ char parent[RTE_NODE_NAMESIZE]; /* Parent node name. */
+ char name[RTE_NODE_NAMESIZE]; /* Name of the node. */
+ /* Fast path area */
+ #define RTE_NODE_CTX_SZ 16
+ uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned;
+ uint16_t size;
+ uint16_t idx;
+ rte_graph_off_t off;
+ uint64_t total_cycles;
+ uint64_t total_calls;
+ uint64_t total_objs;
+ RTE_STD_C11
+ union {
+ void **objs;
+ uint64_t objs_u64;
+ };
+ RTE_STD_C11
+ union {
+ rte_node_process_t process;
+ uint64_t process_u64;
+ };
+ struct rte_node *nodes[] __rte_cache_min_aligned;
+} __rte_cache_aligned;
+
+/* Fast path API for Graph walk */
+static inline void
+rte_graph_walk(struct rte_graph *graph)
+{
+ const rte_graph_off_t *cir_start = graph->cir_start;
+ const rte_node_t mask = graph->cir_mask;
+ uint32_t head = graph->head;
+ struct rte_node *node;
+ uint64_t start;
+ uint16_t rc;
+ void **objs;
+
+ /*
+ * Walk on the source node(s) ((cir_start - head) -> cir_start) and then
+ * on the pending streams (cir_start -> (cir_start + mask) -> cir_start)
+ * in a circular buffer fashion.
+ *
+ * +-----+ <= cir_start - head [number of source nodes]
+ * | |
+ * | ... | <= source nodes
+ * | |
+ * +-----+ <= cir_start [head = 0] [tail = 0]
+ * | |
+ * | ... | <= pending streams
+ * | |
+ * +-----+ <= cir_start + mask
+ */
+ while (likely(head != graph->tail)) {
+ node = RTE_PTR_ADD(graph, cir_start[(int32_t)head++]);
+ RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
+ objs = node->objs;
+ rte_prefetch0(objs);
+
+ if (rte_graph_has_stats_feature()) {
+ start = rte_rdtsc();
+ rc = node->process(graph, node, objs, node->idx);
+ node->total_cycles += rte_rdtsc() - start;
+ node->total_calls++;
+ node->total_objs += rc;
+ } else {
+ node->process(graph, node, objs, node->idx);
+ }
+ node->idx = 0;
+ head = likely((int32_t)head > 0) ? head & mask : head;
+ }
+ graph->tail = 0;
+}
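+
+/*
+ * Typical worker loop sketch (the graph name and quit flag are
+ * assumptions):
+ *
+ *	struct rte_graph *graph = rte_graph_lookup("worker0");
+ *
+ *	while (!force_quit)
+ *		rte_graph_walk(graph);
+ */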
+
+/* Fast path helper functions */
+static __rte_always_inline void
+__rte_node_enqueue_tail_update(struct rte_graph *graph, struct rte_node *node)
+{
+ uint32_t tail;
+
+ tail = graph->tail;
+ graph->cir_start[tail++] = node->off;
+ graph->tail = tail & graph->cir_mask;
+}
+
+static __rte_always_inline void
+__rte_node_enqueue_prologue(struct rte_graph *graph, struct rte_node *node,
+ const uint16_t idx, const uint16_t space)
+{
+ /* Add to the pending stream list if the node is new */
+ if (idx == 0)
+ __rte_node_enqueue_tail_update(graph, node);
+
+ if (unlikely(node->size < (idx + space)))
+ __rte_node_stream_alloc(graph, node);
+}
+
+static __rte_always_inline struct rte_node *
+__rte_node_next_node_get(struct rte_node *node, rte_edge_t next)
+{
+ RTE_ASSERT(next < node->nb_edges);
+ RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
+ node = node->nodes[next];
+ RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
+
+ return node;
+}
+
+/* Fast path enqueue functions */
+static inline void
+rte_node_enqueue(struct rte_graph *graph, struct rte_node *node,
+ rte_edge_t next, void **objs, uint16_t nb_objs)
+{
+ node = __rte_node_next_node_get(node, next);
+ const uint16_t idx = node->idx;
+
+	__rte_node_enqueue_prologue(graph, node, idx, nb_objs);
+
+ rte_memcpy(&node->objs[idx], objs, nb_objs * sizeof(void *));
+ node->idx = idx + nb_objs;
+}
+
+static inline void
+rte_node_enqueue_x1(struct rte_graph *graph, struct rte_node *node,
+ rte_edge_t next, void *obj)
+{
+ node = __rte_node_next_node_get(node, next);
+ uint16_t idx = node->idx;
+
+	__rte_node_enqueue_prologue(graph, node, idx, 1);
+
+ node->objs[idx++] = obj;
+ node->idx = idx;
+}
+
+static inline void
+rte_node_enqueue_x2(struct rte_graph *graph, struct rte_node *node,
+ rte_edge_t next, void *obj0, void *obj1)
+{
+ node = __rte_node_next_node_get(node, next);
+ uint16_t idx = node->idx;
+
+	__rte_node_enqueue_prologue(graph, node, idx, 2);
+
+ node->objs[idx++] = obj0;
+ node->objs[idx++] = obj1;
+ node->idx = idx;
+}
+
+static inline void
+rte_node_enqueue_x4(struct rte_graph *graph, struct rte_node *node, rte_edge_t
+ next, void *obj0, void *obj1, void *obj2, void *obj3)
+{
+ node = __rte_node_next_node_get(node, next);
+ uint16_t idx = node->idx;
+
+	__rte_node_enqueue_prologue(graph, node, idx, 4);
+
+ node->objs[idx++] = obj0;
+ node->objs[idx++] = obj1;
+ node->objs[idx++] = obj2;
+ node->objs[idx++] = obj3;
+ node->idx = idx;
+}
+
+static inline void
+rte_node_enqueue_next(struct rte_graph *graph, struct rte_node *node,
+ rte_edge_t *nexts, void **objs, uint16_t nb_objs)
+{
+ uint16_t i;
+
+	/* Non-optimized implementation to get started */
+ for (i = 0; i < nb_objs; i++)
+ rte_node_enqueue_x1(graph, node, nexts[i], objs[i]);
+}
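+
+/*
+ * Illustrative process() sketch built on the enqueue API (edge 0 is an
+ * assumed next node):
+ *
+ *	static uint16_t
+ *	my_node_process(struct rte_graph *graph, struct rte_node *node,
+ *			void **objs, uint16_t nb_objs)
+ *	{
+ *		uint16_t i;
+ *
+ *		for (i = 0; i < nb_objs; i++)
+ *			rte_node_enqueue_x1(graph, node, 0, objs[i]);
+ *		return nb_objs;
+ *	}
+ */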
+
+static inline void **
+rte_node_next_stream_get(struct rte_graph *graph, struct rte_node *node,
+ rte_edge_t next, uint16_t nb_objs)
+{
+ node = __rte_node_next_node_get(node, next);
+ const uint16_t idx = node->idx;
+ uint16_t free_space = node->size - idx;
+
+ if (unlikely(free_space < nb_objs))
+ __rte_node_stream_alloc_size(graph, node, nb_objs);
+
+ return &node->objs[idx];
+}
+
+static inline void
+rte_node_next_stream_put(struct rte_graph *graph, struct rte_node *node,
+ rte_edge_t next, uint16_t idx)
+{
+ if (unlikely(!idx))
+ return;
+
+ node = __rte_node_next_node_get(node, next);
+ if (node->idx == 0)
+ __rte_node_enqueue_tail_update(graph, node);
+
+ node->idx += idx;
+}
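+
+/*
+ * get/put are meant to be paired: reserve room in the next node's stream,
+ * fill it in place, then commit only what was written. A minimal sketch:
+ *
+ *	void **to_next = rte_node_next_stream_get(graph, node, next, n);
+ *
+ *	... fill to_next[0] .. to_next[n - 1] ...
+ *	rte_node_next_stream_put(graph, node, next, n);
+ */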
+
+static inline void
+rte_node_next_stream_move(struct rte_graph *graph, struct rte_node *src,
+ rte_edge_t next)
+{
+ struct rte_node *dst = __rte_node_next_node_get(src, next);
+
+	/* Swap the pointers if dst doesn't hold valid objs */
+ if (likely(dst->idx == 0)) {
+ void **dobjs = dst->objs;
+ uint16_t dsz = dst->size;
+ dst->objs = src->objs;
+ dst->size = src->size;
+ src->objs = dobjs;
+ src->size = dsz;
+ dst->idx = src->idx;
+ __rte_node_enqueue_tail_update(graph, dst);
+ } else { /* Move the objects from src node to dst node */
+ rte_node_enqueue(graph, src, next, src->objs, src->idx);
+ }
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_GRAPH_WORKER_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index 0af3efab2..4089ce0c3 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -30,7 +30,7 @@ libraries = [
# add pkt framework libs which use other libs from above
'port', 'table', 'pipeline',
# flow_classify lib depends on pkt framework table lib
- 'flow_classify', 'bpf', 'telemetry']
+ 'flow_classify', 'bpf', 'graph', 'telemetry']
if is_windows
libraries = ['kvargs','eal'] # only supported libraries for windows
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 15acf95db..e169d7a7b 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -98,6 +98,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
+_LDLIBS-$(CONFIG_RTE_LIBRTE_GRAPH) += -lrte_graph
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.24.1
^ permalink raw reply [flat|nested] 31+ messages in thread
* [dpdk-dev] [RFC PATCH 2/5] node: add packet processing nodes
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
@ 2020-01-31 17:01 ` jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 3/5] test: add graph functional tests jerinj
` (4 subsequent siblings)
6 siblings, 0 replies; 31+ messages in thread
From: jerinj @ 2020-01-31 17:01 UTC (permalink / raw)
To: dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard, Jerin Jacob
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/meson.build | 7 +-
config/common_base | 6 +
lib/Makefile | 4 +
lib/librte_node/Makefile | 30 ++
lib/librte_node/ethdev_ctrl.c | 106 +++++
lib/librte_node/ethdev_rx.c | 218 +++++++++
lib/librte_node/ethdev_rx.h | 17 +
lib/librte_node/ethdev_rx_priv.h | 45 ++
lib/librte_node/ethdev_tx.c | 74 +++
lib/librte_node/ethdev_tx_priv.h | 33 ++
lib/librte_node/ip4_lookup.c | 657 +++++++++++++++++++++++++++
lib/librte_node/ip4_lookup_priv.h | 17 +
lib/librte_node/ip4_rewrite.c | 340 ++++++++++++++
lib/librte_node/ip4_rewrite_priv.h | 44 ++
lib/librte_node/log.c | 14 +
lib/librte_node/meson.build | 8 +
lib/librte_node/node_private.h | 61 +++
lib/librte_node/null.c | 23 +
lib/librte_node/pkt_drop.c | 26 ++
lib/librte_node/rte_node_eth_api.h | 31 ++
lib/librte_node/rte_node_ip4_api.h | 33 ++
lib/librte_node/rte_node_version.map | 9 +
lib/meson.build | 5 +-
meson.build | 1 +
mk/rte.app.mk | 1 +
25 files changed, 1807 insertions(+), 3 deletions(-)
create mode 100644 lib/librte_node/Makefile
create mode 100644 lib/librte_node/ethdev_ctrl.c
create mode 100644 lib/librte_node/ethdev_rx.c
create mode 100644 lib/librte_node/ethdev_rx.h
create mode 100644 lib/librte_node/ethdev_rx_priv.h
create mode 100644 lib/librte_node/ethdev_tx.c
create mode 100644 lib/librte_node/ethdev_tx_priv.h
create mode 100644 lib/librte_node/ip4_lookup.c
create mode 100644 lib/librte_node/ip4_lookup_priv.h
create mode 100644 lib/librte_node/ip4_rewrite.c
create mode 100644 lib/librte_node/ip4_rewrite_priv.h
create mode 100644 lib/librte_node/log.c
create mode 100644 lib/librte_node/meson.build
create mode 100644 lib/librte_node/node_private.h
create mode 100644 lib/librte_node/null.c
create mode 100644 lib/librte_node/pkt_drop.c
create mode 100644 lib/librte_node/rte_node_eth_api.h
create mode 100644 lib/librte_node/rte_node_ip4_api.h
create mode 100644 lib/librte_node/rte_node_version.map
diff --git a/app/test/meson.build b/app/test/meson.build
index e1cdae3cb..7d761c8fa 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -159,8 +159,9 @@ test_deps = ['acl',
'rib',
'ring',
'stack',
- 'timer'
+ 'timer',
'graph',
+ 'node'
]
fast_test_names = [
@@ -382,13 +383,15 @@ endforeach
test_dep_objs += cc.find_library('execinfo', required: false)
link_libs = []
+link_nodes = []
if get_option('default_library') == 'static'
link_libs = dpdk_drivers
+ link_nodes = dpdk_graph_nodes
endif
dpdk_test = executable('dpdk-test',
test_sources,
- link_whole: link_libs,
+ link_whole: link_libs + link_nodes,
dependencies: test_dep_objs,
c_args: [cflags, '-DALLOW_EXPERIMENTAL_API'],
install_rpath: driver_install_path,
diff --git a/config/common_base b/config/common_base
index badcc0be5..f7f3f2607 100644
--- a/config/common_base
+++ b/config/common_base
@@ -1077,6 +1077,12 @@ CONFIG_RTE_LIBRTE_GRAPH=y
CONFIG_RTE_GRAPH_BURST_SIZE=256
CONFIG_RTE_LIBRTE_GRAPH_STATS=y
CONFIG_RTE_LIBRTE_GRAPH_DEBUG=n
+#
+# Compile librte_node
+#
+CONFIG_RTE_LIBRTE_NODE=y
+CONFIG_RTE_LIBRTE_NODE_DEBUG=n
+
#
# Compile the test application
#
diff --git a/lib/Makefile b/lib/Makefile
index 495f572bf..fe3fcc70f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -121,6 +121,10 @@ DEPDIRS-librte_rcu := librte_eal
DIRS-$(CONFIG_RTE_LIBRTE_GRAPH) += librte_graph
DEPDIRS-librte_graph := librte_eal librte_mbuf
+
+DIRS-$(CONFIG_RTE_LIBRTE_NODE) += librte_node
+DEPDIRS-librte_node := librte_graph librte_lpm librte_ethdev librte_mbuf
+
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
endif
diff --git a/lib/librte_node/Makefile b/lib/librte_node/Makefile
new file mode 100644
index 000000000..aaf041580
--- /dev/null
+++ b/lib/librte_node/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_node.a
+
+CFLAGS += -O3 -DALLOW_EXPERIMENTAL_API
+CFLAGS += $(WERROR_FLAGS)
+LDLIBS += -lrte_eal -lrte_graph -lrte_mbuf -lrte_lpm -lrte_ethdev -lrte_mempool
+
+EXPORT_MAP := rte_node_version.map
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += null.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += log.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += ethdev_rx.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += ethdev_tx.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += ethdev_ctrl.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += ip4_lookup.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += ip4_rewrite.c
+SRCS-$(CONFIG_RTE_LIBRTE_NODE) += pkt_drop.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_NODE)-include += rte_node_ip4_api.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_NODE)-include += rte_node_eth_api.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_node/ethdev_ctrl.c b/lib/librte_node/ethdev_ctrl.c
new file mode 100644
index 000000000..7fc7af7f7
--- /dev/null
+++ b/lib/librte_node/ethdev_ctrl.c
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_graph.h>
+#include <rte_node_eth_api.h>
+
+#include "ethdev_rx_priv.h"
+#include "ethdev_tx_priv.h"
+#include "ip4_rewrite_priv.h"
+#include "node_private.h"
+
+static struct ethdev_ctrl {
+ uint16_t nb_graphs;
+} ctrl;
+
+int
+rte_node_eth_config(struct rte_node_ethdev_config *conf,
+ uint16_t nb_confs, uint16_t nb_graphs)
+{
+ uint16_t tx_q_used, rx_q_used, port_id;
+ char name[RTE_NODE_NAMESIZE];
+ const char *next_nodes = name;
+ struct rte_mempool *mp;
+ int i, j, rc;
+ uint32_t id;
+
+ for (i = 0; i < nb_confs; i++) {
+ port_id = conf[i].port_id;
+
+ if (!rte_eth_dev_is_valid_port(port_id))
+ return -EINVAL;
+
+ /* Check for mbuf minimum private size requirement */
+ for (j = 0; j < conf[i].mp_count; j++) {
+ mp = conf[i].mp[j];
+ if (!mp)
+ continue;
+ /* Check for minimum private space */
+ if (rte_pktmbuf_priv_size(mp) <
+ RTE_NODE_MBUF_PRIV2_SIZE) {
+ node_err("ethdev",
+ "Minimum mbuf priv size "
+ "requirement not met by mp %s",
+ mp->name);
+ return -EINVAL;
+ }
+ }
+
+ rx_q_used = conf[i].num_rx_queues;
+ tx_q_used = conf[i].num_tx_queues;
+ /* Check if we have a txq for each worker */
+ if (tx_q_used < nb_graphs)
+ return -EINVAL;
+
+ /* Create node for each rx port queue pair */
+ for (j = 0; j < rx_q_used; j++) {
+ ethdev_rx_node_elem_t *elem;
+
+ snprintf(name, sizeof(name), "%u-%u", port_id, j);
+ /* Clone a new rx node with same edges as parent */
+ id = rte_node_clone(ethdev_rx_node_base.id, name);
+ if (id == RTE_NODE_ID_INVALID)
+ return -EIO;
+
+ /* Add it to list of ethdev rx nodes for lookup */
+			elem = calloc(1, sizeof(ethdev_rx_node_elem_t));
+			if (elem == NULL)
+				return -ENOMEM;
+ elem->ctx.port_id = port_id;
+ elem->ctx.queue_id = j;
+ elem->nid = id;
+ elem->next = ethdev_rx_main.head;
+ ethdev_rx_main.head = elem;
+
+ node_dbg("ethdev", "rx node %s-%s: is at %u",
+ ethdev_rx_node_base.name, name, id);
+ }
+
+ /* Create a per port tx node from base node */
+ snprintf(name, sizeof(name), "%u", port_id);
+ /* Clone a new node with same edges as parent */
+		id = rte_node_clone(ethdev_tx_node_base.id, name);
+		if (id == RTE_NODE_ID_INVALID)
+			return -EIO;
+		ethdev_tx_main.nodes[port_id] = id;
+
+ node_dbg("ethdev", "tx node %s-%s: is at %u",
+ ethdev_tx_node_base.name, name, id);
+
+ /* Prepare the actual name of the cloned node */
+ snprintf(name, sizeof(name), "ethdev_tx-%u", port_id);
+
+ /* Add this tx port node as next to ip4_rewrite_node */
+ rte_node_edge_update(ip4_rewrite_node.id,
+ RTE_EDGE_ID_INVALID, &next_nodes, 1);
+ /* Assuming edge id is the last one alloc'ed */
+ rc = ip4_rewrite_set_next(port_id,
+ rte_node_edge_count(ip4_rewrite_node.id) - 1);
+ if (rc < 0)
+ return rc;
+ }
+
+ ctrl.nb_graphs = nb_graphs;
+ return 0;
+}
diff --git a/lib/librte_node/ethdev_rx.c b/lib/librte_node/ethdev_rx.c
new file mode 100644
index 000000000..48cbc5692
--- /dev/null
+++ b/lib/librte_node/ethdev_rx.c
@@ -0,0 +1,218 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_mbuf.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+
+#include "ethdev_rx.h"
+#include "ethdev_rx_priv.h"
+#include "node_private.h"
+
+struct ethdev_rx_node_main ethdev_rx_main;
+
+static __rte_always_inline uint16_t
+ethdev_rx_node_process_inline(struct rte_graph *graph, struct rte_node *node,
+ uint16_t port, uint16_t queue)
+{
+ uint16_t count, next_index = ETHDEV_RX_NEXT_IP4_LOOKUP;
+
+ /* Get pkts from port */
+ count = rte_eth_rx_burst(port, queue, (struct rte_mbuf **)node->objs,
+ RTE_GRAPH_BURST_SIZE);
+
+ if (!count)
+ return 0;
+ node->idx = count;
+ /* Enqueue to next node */
+ rte_node_next_stream_move(graph, node, next_index);
+
+ return count;
+}
+
+static __rte_always_inline uint16_t
+ethdev_rx_node_process(struct rte_graph *graph, struct rte_node *node,
+ void **objs, uint16_t cnt)
+{
+ ethdev_rx_node_ctx_t *ctx = (ethdev_rx_node_ctx_t *)node->ctx;
+ uint16_t n_pkts = 0;
+
+ RTE_SET_USED(objs);
+ RTE_SET_USED(cnt);
+
+ n_pkts = ethdev_rx_node_process_inline(graph, node, ctx->port_id,
+ ctx->queue_id);
+ return n_pkts;
+}
+
+static inline uint32_t
+l3_ptype(uint16_t etype, uint32_t ptype)
+{
+ ptype = ptype & ~RTE_PTYPE_L3_MASK;
+ if (etype == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4))
+ ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+ else if (etype == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6))
+ ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+ return ptype;
+}
+
+/* Callback for soft ptype parsing */
+static uint16_t
+eth_pkt_parse_cb(uint16_t port, uint16_t queue,
+ struct rte_mbuf **mbufs, uint16_t nb_pkts,
+ uint16_t max_pkts, void *user_param)
+{
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+ struct rte_ether_hdr *eth_hdr;
+ uint16_t etype, n_left;
+ struct rte_mbuf **pkts;
+
+ RTE_SET_USED(port);
+ RTE_SET_USED(queue);
+ RTE_SET_USED(max_pkts);
+ RTE_SET_USED(user_param);
+
+ pkts = mbufs;
+ n_left = nb_pkts;
+ while (n_left >= 12) {
+
+ /* Prefetch next-next mbufs */
+ rte_prefetch0(pkts[8]);
+ rte_prefetch0(pkts[9]);
+ rte_prefetch0(pkts[10]);
+ rte_prefetch0(pkts[11]);
+
+ /* Prefetch next mbuf data */
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[4],
+ struct rte_ether_hdr *));
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[5],
+ struct rte_ether_hdr *));
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[6],
+ struct rte_ether_hdr *));
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[7],
+ struct rte_ether_hdr *));
+
+ mbuf0 = pkts[0];
+ mbuf1 = pkts[1];
+ mbuf2 = pkts[2];
+ mbuf3 = pkts[3];
+ pkts += 4;
+ n_left -= 4;
+
+ /* Extract ptype of mbuf0 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf0,
+ struct rte_ether_hdr *);
+ etype = eth_hdr->ether_type;
+ mbuf0->packet_type = l3_ptype(etype, 0);
+
+ /* Extract ptype of mbuf1 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf1,
+ struct rte_ether_hdr *);
+ etype = eth_hdr->ether_type;
+ mbuf1->packet_type = l3_ptype(etype, 0);
+
+ /* Extract ptype of mbuf2 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf2,
+ struct rte_ether_hdr *);
+ etype = eth_hdr->ether_type;
+ mbuf2->packet_type = l3_ptype(etype, 0);
+
+ /* Extract ptype of mbuf3 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf3,
+ struct rte_ether_hdr *);
+ etype = eth_hdr->ether_type;
+ mbuf3->packet_type = l3_ptype(etype, 0);
+
+ }
+
+ while (n_left > 0) {
+ mbuf0 = pkts[0];
+
+ pkts += 1;
+ n_left -= 1;
+
+ /* Extract ptype of mbuf0 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf0,
+ struct rte_ether_hdr *);
+ etype = eth_hdr->ether_type;
+ mbuf0->packet_type = l3_ptype(etype, 0);
+ }
+
+ return nb_pkts;
+}
+
+#define MAX_PTYPES 16
+static int
+ethdev_ptype_setup(uint16_t port, uint16_t queue)
+{
+ uint8_t l3_ipv4 = 0, l3_ipv6 = 0;
+ uint32_t ptypes[MAX_PTYPES];
+ int i, rc;
+
+ /* Check IPv4 & IPv6 ptype support */
+ rc = rte_eth_dev_get_supported_ptypes(port, RTE_PTYPE_L3_MASK,
+ ptypes, MAX_PTYPES);
+ for (i = 0; i < rc; i++) {
+ if (ptypes[i] & RTE_PTYPE_L3_IPV4)
+ l3_ipv4 = 1;
+ if (ptypes[i] & RTE_PTYPE_L3_IPV6)
+ l3_ipv6 = 1;
+ }
+
+ if (!l3_ipv4 || !l3_ipv6) {
+ node_info("ethdev_rx",
+ "Enabling ptype callback for required ptypes "
+ "on port %u\n", port);
+
+ if (!rte_eth_add_rx_callback(port, queue,
+ eth_pkt_parse_cb, NULL)) {
+ node_err("ethdev_rx",
+ "Failed to add rx ptype cb: port=%d, queue=%d\n",
+ port, queue);
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+static int
+ethdev_rx_node_init(const struct rte_graph *graph, struct rte_node *node)
+{
+ ethdev_rx_node_ctx_t *ctx = (ethdev_rx_node_ctx_t *)node->ctx;
+ ethdev_rx_node_elem_t *elem = ethdev_rx_main.head;
+
+ RTE_SET_USED(graph);
+
+ while (elem) {
+ if (elem->nid == node->id) {
+ /* Update node specific context */
+ memcpy(ctx, &elem->ctx, sizeof(ethdev_rx_node_ctx_t));
+ break;
+ }
+ elem = elem->next;
+ }
+
+ RTE_VERIFY(elem != NULL);
+
+ /* Check and setup ptype */
+ return ethdev_ptype_setup(ctx->port_id, ctx->queue_id);
+}
+
+struct rte_node_register ethdev_rx_node_base = {
+ .process = ethdev_rx_node_process,
+ .flags = RTE_NODE_SOURCE_F,
+ .name = "ethdev_rx",
+
+ .init = ethdev_rx_node_init,
+
+ .nb_edges = ETHDEV_RX_NEXT_MAX,
+ .next_nodes = {
+ [ETHDEV_RX_NEXT_IP4_LOOKUP] = "ip4_lookup"
+ },
+};
+
+RTE_NODE_REGISTER(ethdev_rx_node_base);
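+
+/*
+ * ethdev_rx is a source node (RTE_NODE_SOURCE_F): the graph walker invokes
+ * it on every iteration without any enqueue. rte_node_eth_config() clones
+ * one instance of it per (port, queue) pair.
+ */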
diff --git a/lib/librte_node/ethdev_rx.h b/lib/librte_node/ethdev_rx.h
new file mode 100644
index 000000000..7ae2923ea
--- /dev/null
+++ b/lib/librte_node/ethdev_rx.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_ETHDEV_RX_H__
+#define __INCLUDE_ETHDEV_RX_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_ETHDEV_RX_H__ */
diff --git a/lib/librte_node/ethdev_rx_priv.h b/lib/librte_node/ethdev_rx_priv.h
new file mode 100644
index 000000000..beb54e739
--- /dev/null
+++ b/lib/librte_node/ethdev_rx_priv.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_ETHDEV_RX_PRIV_H__
+#define __INCLUDE_ETHDEV_RX_PRIV_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+
+struct ethdev_rx_node_elem;
+struct ethdev_rx_node_ctx;
+typedef struct ethdev_rx_node_elem ethdev_rx_node_elem_t;
+typedef struct ethdev_rx_node_ctx ethdev_rx_node_ctx_t;
+
+struct ethdev_rx_node_ctx {
+ uint16_t port_id;
+ uint16_t queue_id;
+};
+
+struct ethdev_rx_node_elem {
+ struct ethdev_rx_node_elem *next;
+ struct ethdev_rx_node_ctx ctx;
+ rte_node_t nid;
+};
+
+enum ethdev_rx_next_nodes {
+ ETHDEV_RX_NEXT_IP4_LOOKUP,
+ ETHDEV_RX_NEXT_MAX,
+};
+
+struct ethdev_rx_node_main {
+ ethdev_rx_node_elem_t *head;
+};
+
+extern struct ethdev_rx_node_main ethdev_rx_main;
+extern struct rte_node_register ethdev_rx_node_base;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_ETHDEV_RX_PRIV_H__ */
diff --git a/lib/librte_node/ethdev_tx.c b/lib/librte_node/ethdev_tx.c
new file mode 100644
index 000000000..06f944d4c
--- /dev/null
+++ b/lib/librte_node/ethdev_tx.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_debug.h>
+#include <rte_ethdev.h>
+#include <rte_mbuf.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+
+#include "ethdev_tx_priv.h"
+
+struct ethdev_tx_node_main ethdev_tx_main;
+
+static uint16_t
+ethdev_tx_node_process(struct rte_graph *graph, struct rte_node *node,
+ void **objs, uint16_t nb_objs)
+{
+ ethdev_tx_node_ctx_t *ctx = (ethdev_tx_node_ctx_t *)node->ctx;
+ uint16_t port, queue;
+ uint16_t count;
+
+ /* Get Tx port id */
+ port = ctx->port;
+ queue = ctx->queue;
+
+ count = rte_eth_tx_burst(port, queue,
+ (struct rte_mbuf **)objs, nb_objs);
+
+ /* Redirect unsent pkts to drop node */
+ if (count < nb_objs) {
+ rte_node_enqueue(graph, node, ETHDEV_TX_NEXT_PKT_DROP,
+ &objs[count], nb_objs - count);
+ }
+
+ return count;
+}
+
+static int
+ethdev_tx_node_init(const struct rte_graph *graph, struct rte_node *node)
+{
+ ethdev_tx_node_ctx_t *ctx = (ethdev_tx_node_ctx_t *)node->ctx;
+ uint64_t port_id = RTE_MAX_ETHPORTS;
+ int i;
+
+ /* Find our port id */
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (ethdev_tx_main.nodes[i] == node->id) {
+ port_id = i;
+ break;
+ }
+ }
+ RTE_VERIFY(port_id < RTE_MAX_ETHPORTS);
+
+ /* Update port and queue */
+ ctx->port = port_id;
+ ctx->queue = graph->id;
+ return 0;
+}
+
+struct rte_node_register ethdev_tx_node_base = {
+ .process = ethdev_tx_node_process,
+ .name = "ethdev_tx",
+
+ .init = ethdev_tx_node_init,
+
+ .nb_edges = ETHDEV_TX_NEXT_MAX,
+ .next_nodes = {
+ [ETHDEV_TX_NEXT_PKT_DROP] = "pkt_drop",
+ },
+};
+
+RTE_NODE_REGISTER(ethdev_tx_node_base);
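+
+/*
+ * One tx node is cloned per port; each clone takes its tx queue id from
+ * graph->id, so every worker graph owns a dedicated tx queue (enforced by
+ * the tx_q_used < nb_graphs check in rte_node_eth_config()).
+ */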
diff --git a/lib/librte_node/ethdev_tx_priv.h b/lib/librte_node/ethdev_tx_priv.h
new file mode 100644
index 000000000..394cccbb1
--- /dev/null
+++ b/lib/librte_node/ethdev_tx_priv.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_ETHDEV_TX_PRIV_H__
+#define __INCLUDE_ETHDEV_TX_PRIV_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum ethdev_tx_next_nodes {
+ ETHDEV_TX_NEXT_PKT_DROP,
+ ETHDEV_TX_NEXT_MAX,
+};
+
+typedef struct ethdev_tx_node_ctx {
+ uint16_t port;
+ uint16_t queue;
+} ethdev_tx_node_ctx_t;
+
+struct ethdev_tx_node_main {
+ /* Tx nodes for each ethdev port */
+ uint32_t nodes[RTE_MAX_ETHPORTS];
+};
+
+extern struct ethdev_tx_node_main ethdev_tx_main;
+extern struct rte_node_register ethdev_tx_node_base;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_ETHDEV_TX_PRIV_H__ */
diff --git a/lib/librte_node/ip4_lookup.c b/lib/librte_node/ip4_lookup.c
new file mode 100644
index 000000000..c7bccad93
--- /dev/null
+++ b/lib/librte_node/ip4_lookup.c
@@ -0,0 +1,657 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ip.h>
+#include <rte_lpm.h>
+#include <rte_mbuf.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include <arpa/inet.h>
+#include <rte_node_ip4_api.h>
+#include "ip4_lookup_priv.h"
+#include "node_private.h"
+
+#define IPV4_L3FWD_LPM_MAX_RULES 1024
+#define IPV4_L3FWD_LPM_NUMBER_TBL8S (1 << 8)
+
+/* IP4 Lookup global data struct */
+struct ip4_lookup_node_main {
+ struct rte_lpm *lpm_tbl[RTE_MAX_NUMA_NODES];
+};
+
+static struct ip4_lookup_node_main ip4_lookup_nm;
+
+#if defined(RTE_MACHINE_CPUFLAG_NEON)
+/* ARM64 NEON */
+static uint16_t
+ip4_lookup_node_process(struct rte_graph *graph,
+ struct rte_node *node,
+ void **objs, uint16_t nb_objs)
+{
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3, **pkts;
+ struct rte_ipv4_hdr *ipv4_hdr;
+ struct rte_ether_hdr *eth_hdr;
+ void **to_next, **from;
+ uint16_t last_spec = 0;
+ rte_edge_t next_index;
+ uint16_t n_left_from;
+ struct rte_lpm *lpm;
+ uint16_t held = 0;
+ uint32_t drop_nh;
+ rte_xmm_t result;
+ rte_xmm_t priv01;
+ rte_xmm_t priv23;
+ int32x4_t dip = vdupq_n_s32(0); /* lanes filled per packet below */
+ int rc, i;
+
+ /* Speculative next */
+ next_index = IP4_LOOKUP_NEXT_REWRITE;
+ drop_nh = ((uint32_t)IP4_LOOKUP_NEXT_PKT_DROP) << 16; /* Drop node */
+
+ /* Get socket specific lpm from ctx */
+ lpm = *((struct rte_lpm **)node->ctx);
+
+ pkts = (struct rte_mbuf **)objs;
+ from = objs;
+ n_left_from = nb_objs;
+
+#define OBJS_PER_CLINE (RTE_CACHE_LINE_SIZE / sizeof(void *))
+ for (i = OBJS_PER_CLINE; i < RTE_GRAPH_BURST_SIZE; i += OBJS_PER_CLINE)
+ rte_prefetch0(&objs[i]);
+
+ for (i = 0; i < 4 && i < n_left_from; i++) {
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[i],
+ struct rte_ether_hdr *) + 1);
+ }
+
+ /* Get stream for the speculated next node */
+ to_next = rte_node_next_stream_get(graph, node,
+ next_index, nb_objs);
+ while (n_left_from >= 4) {
+ /* Prefetch next-next mbufs */
+ if (likely(n_left_from >= 12)) {
+ rte_prefetch0(pkts[8]);
+ rte_prefetch0(pkts[9]);
+ rte_prefetch0(pkts[10]);
+ rte_prefetch0(pkts[11]);
+ }
+
+ /* Prefetch next mbuf data */
+ if (likely(n_left_from >= 8)) {
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[4],
+ struct rte_ether_hdr *) + 1);
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[5],
+ struct rte_ether_hdr *) + 1);
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[6],
+ struct rte_ether_hdr *) + 1);
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[7],
+ struct rte_ether_hdr *) + 1);
+ }
+
+ mbuf0 = pkts[0];
+ mbuf1 = pkts[1];
+ mbuf2 = pkts[2];
+ mbuf3 = pkts[3];
+
+ pkts += 4;
+ n_left_from -= 4;
+
+ /* Extract DIP of mbuf0 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf0,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ dip = vsetq_lane_s32(ipv4_hdr->dst_addr, dip, 0);
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ priv01.u16[1] = ipv4_hdr->time_to_live;
+ priv01.u32[1] = ipv4_hdr->hdr_checksum;
+
+ /* Extract DIP of mbuf1 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf1,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ dip = vsetq_lane_s32(ipv4_hdr->dst_addr, dip, 1);
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ priv01.u16[5] = ipv4_hdr->time_to_live;
+ priv01.u32[3] = ipv4_hdr->hdr_checksum;
+
+ /* Extract DIP of mbuf2 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf2,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ dip = vsetq_lane_s32(ipv4_hdr->dst_addr, dip, 2);
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ priv23.u16[1] = ipv4_hdr->time_to_live;
+ priv23.u32[1] = ipv4_hdr->hdr_checksum;
+
+ /* Extract DIP of mbuf3 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf3,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ dip = vsetq_lane_s32(ipv4_hdr->dst_addr, dip, 3);
+
+ dip = vreinterpretq_s32_u8(vrev32q_u8(
+ vreinterpretq_u8_s32(dip)));
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ priv23.u16[5] = ipv4_hdr->time_to_live;
+ priv23.u32[3] = ipv4_hdr->hdr_checksum;
+
+ /* Perform LPM lookup to get NH and next node */
+ rte_lpm_lookupx4(lpm, dip, result.u32, drop_nh);
+ priv01.u16[0] = result.u16[0];
+ priv01.u16[4] = result.u16[2];
+ priv23.u16[0] = result.u16[4];
+ priv23.u16[4] = result.u16[6];
+
+ rte_node_mbuf_priv1(mbuf0)->u = priv01.u64[0];
+ rte_node_mbuf_priv1(mbuf1)->u = priv01.u64[1];
+ rte_node_mbuf_priv1(mbuf2)->u = priv23.u64[0];
+ rte_node_mbuf_priv1(mbuf3)->u = priv23.u64[1];
+
+ /* Enqueue four to next node */
+ /* TODO: Do we need a macro for this ?*/
+ rte_edge_t fix_spec = (next_index ^ result.u16[1]) |
+ (next_index ^ result.u16[3]) |
+ (next_index ^ result.u16[5]) |
+ (next_index ^ result.u16[7]);
+
+ if (unlikely(fix_spec)) {
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ from += last_spec;
+ to_next += last_spec;
+ held += last_spec;
+ last_spec = 0;
+
+ /* next0 */
+ if (next_index == result.u16[1]) {
+ to_next[0] = from[0];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ result.u16[1],
+ from[0]);
+ }
+
+ /* next1 */
+ if (next_index == result.u16[3]) {
+ to_next[0] = from[1];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ result.u16[3],
+ from[1]);
+ }
+
+ /* next2 */
+ if (next_index == result.u16[5]) {
+ to_next[0] = from[2];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ result.u16[5],
+ from[2]);
+ }
+
+ /* next3 */
+ if (next_index == result.u16[7]) {
+ to_next[0] = from[3];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ result.u16[7],
+ from[3]);
+ }
+
+ from += 4;
+ } else {
+ last_spec += 4;
+ }
+ }
+
+ while (n_left_from > 0) {
+ uint32_t next_hop;
+ uint16_t next0;
+
+ mbuf0 = pkts[0];
+
+ pkts += 1;
+ n_left_from -= 1;
+
+ /* Extract DIP of mbuf0 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf0,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ rte_node_mbuf_priv1(mbuf0)->cksum =
+ ipv4_hdr->hdr_checksum;
+ rte_node_mbuf_priv1(mbuf0)->ttl =
+ ipv4_hdr->time_to_live;
+
+ rc = rte_lpm_lookup(lpm,
+ rte_be_to_cpu_32(ipv4_hdr->dst_addr),
+ &next_hop);
+ next_hop = (rc == 0) ? next_hop : drop_nh;
+
+ rte_node_mbuf_priv1(mbuf0)->nh = (uint16_t)next_hop;
+ next_hop = next_hop >> 16;
+ next0 = (uint16_t)next_hop;
+
+ if (unlikely(next_index ^ next0)) {
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ from += last_spec;
+ to_next += last_spec;
+ held += last_spec;
+ last_spec = 0;
+
+ rte_node_enqueue_x1(graph, node,
+ next0, from[0]);
+ from += 1;
+ } else {
+ last_spec += 1;
+ }
+ }
+
+ /* !!! Home run !!! */
+ if (likely(last_spec == nb_objs)) {
+ rte_node_next_stream_move(graph, node, next_index);
+ return nb_objs;
+ }
+ held += last_spec;
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ rte_node_next_stream_put(graph, node, next_index, held);
+
+ return nb_objs;
+}
+
+#elif defined(RTE_ARCH_X86)
+/* X86 SSE */
+static uint16_t
+ip4_lookup_node_process(struct rte_graph *graph,
+ struct rte_node *node,
+ void **objs, uint16_t nb_objs)
+{
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3, **pkts;
+ rte_edge_t next0, next1, next2, next3, next_index;
+ struct rte_ipv4_hdr *ipv4_hdr;
+ struct rte_ether_hdr *eth_hdr;
+ uint32_t ip0, ip1, ip2, ip3;
+ void **to_next, **from;
+ uint16_t last_spec = 0;
+ uint16_t n_left_from;
+ struct rte_lpm *lpm;
+ uint16_t held = 0;
+ uint32_t drop_nh;
+ rte_xmm_t dst;
+ __m128i dip; /* SSE register */
+ int rc, i;
+
+ /* Speculative next */
+ next_index = IP4_LOOKUP_NEXT_REWRITE;
+ drop_nh = ((uint32_t)IP4_LOOKUP_NEXT_PKT_DROP) << 16; /* Drop node */
+
+ /* Get socket specific lpm from ctx */
+ lpm = *((struct rte_lpm **)node->ctx);
+
+ pkts = (struct rte_mbuf **)objs;
+ from = objs;
+ n_left_from = nb_objs;
+
+ if (n_left_from >= 4) {
+ for (i = 0; i < 4; i++) {
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[i],
+ struct rte_ether_hdr *) + 1);
+ }
+ }
+
+ /* Get stream for the speculated next node */
+ to_next = rte_node_next_stream_get(graph, node,
+ next_index, nb_objs);
+ while (n_left_from >= 4) {
+ /* Prefetch next-next mbufs */
+ if (likely(n_left_from >= 12)) {
+ rte_prefetch0(pkts[8]);
+ rte_prefetch0(pkts[9]);
+ rte_prefetch0(pkts[10]);
+ rte_prefetch0(pkts[11]);
+ }
+
+ /* Prefetch next mbuf data */
+ if (likely(n_left_from >= 8)) {
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[4],
+ struct rte_ether_hdr *) + 1);
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[5],
+ struct rte_ether_hdr *) + 1);
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[6],
+ struct rte_ether_hdr *) + 1);
+ rte_prefetch0(rte_pktmbuf_mtod(pkts[7],
+ struct rte_ether_hdr *) + 1);
+ }
+
+ mbuf0 = pkts[0];
+ mbuf1 = pkts[1];
+ mbuf2 = pkts[2];
+ mbuf3 = pkts[3];
+
+ pkts += 4;
+ n_left_from -= 4;
+
+ /* Extract DIP of mbuf0 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf0,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ ip0 = ipv4_hdr->dst_addr;
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ rte_node_mbuf_priv1(mbuf0)->cksum =
+ ipv4_hdr->hdr_checksum;
+ rte_node_mbuf_priv1(mbuf0)->ttl =
+ ipv4_hdr->time_to_live;
+
+ /* Extract DIP of mbuf1 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf1,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ ip1 = ipv4_hdr->dst_addr;
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ rte_node_mbuf_priv1(mbuf1)->cksum =
+ ipv4_hdr->hdr_checksum;
+ rte_node_mbuf_priv1(mbuf1)->ttl =
+ ipv4_hdr->time_to_live;
+
+ /* Extract DIP of mbuf2 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf2,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ ip2 = ipv4_hdr->dst_addr;
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ rte_node_mbuf_priv1(mbuf2)->cksum =
+ ipv4_hdr->hdr_checksum;
+ rte_node_mbuf_priv1(mbuf2)->ttl =
+ ipv4_hdr->time_to_live;
+
+ /* Extract DIP of mbuf3 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf3,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ ip3 = ipv4_hdr->dst_addr;
+
+ /* Prepare for lookup x4 */
+ dip = _mm_set_epi32(ip3, ip2, ip1, ip0);
+
+ /* Byte swap 4 IPV4 addresses. */
+ const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15,
+ 8, 9, 10, 11, 4, 5, 6, 7,
+ 0, 1, 2, 3);
+ dip = _mm_shuffle_epi8(dip, bswap_mask);
+
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ rte_node_mbuf_priv1(mbuf3)->cksum =
+ ipv4_hdr->hdr_checksum;
+ rte_node_mbuf_priv1(mbuf3)->ttl =
+ ipv4_hdr->time_to_live;
+
+ /* Perform LPM lookup to get NH and next node */
+ rte_lpm_lookupx4(lpm, dip, dst.u32, drop_nh);
+
+ /* Extract next node id and NH */
+ rte_node_mbuf_priv1(mbuf0)->nh = dst.u32[0] & 0xFFFF;
+ next0 = (dst.u32[0] >> 16);
+
+ rte_node_mbuf_priv1(mbuf1)->nh = dst.u32[1] & 0xFFFF;
+ next1 = (dst.u32[1] >> 16);
+
+ rte_node_mbuf_priv1(mbuf2)->nh = dst.u32[2] & 0xFFFF;
+ next2 = (dst.u32[2] >> 16);
+
+ rte_node_mbuf_priv1(mbuf3)->nh = dst.u32[3] & 0xFFFF;
+ next3 = (dst.u32[3] >> 16);
+
+ /* Enqueue four to next node */
+ /* TODO: Do we need a macro for this ?*/
+ rte_edge_t fix_spec = (next_index ^ next0) |
+ (next_index ^ next1) | (next_index ^ next2) |
+ (next_index ^ next3);
+
+ if (unlikely(fix_spec)) {
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ from += last_spec;
+ to_next += last_spec;
+ held += last_spec;
+ last_spec = 0;
+
+ /* next0 */
+ if (next_index == next0) {
+ to_next[0] = from[0];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next0, from[0]);
+ }
+
+ /* next1 */
+ if (next_index == next1) {
+ to_next[0] = from[1];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next1, from[1]);
+ }
+
+ /* next2 */
+ if (next_index == next2) {
+ to_next[0] = from[2];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next2, from[2]);
+ }
+
+ /* next3 */
+ if (next_index == next3) {
+ to_next[0] = from[3];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next3, from[3]);
+ }
+
+ from += 4;
+
+ } else {
+ last_spec += 4;
+ }
+ }
+
+ while (n_left_from > 0) {
+ uint32_t next_hop;
+
+ mbuf0 = pkts[0];
+
+ pkts += 1;
+ n_left_from -= 1;
+
+ /* Extract DIP of mbuf0 */
+ eth_hdr = rte_pktmbuf_mtod(mbuf0,
+ struct rte_ether_hdr *);
+ ipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
+ /* Extract cksum, ttl as ipv4 hdr is in cache */
+ rte_node_mbuf_priv1(mbuf0)->cksum =
+ ipv4_hdr->hdr_checksum;
+ rte_node_mbuf_priv1(mbuf0)->ttl =
+ ipv4_hdr->time_to_live;
+
+ rc = rte_lpm_lookup(lpm,
+ rte_be_to_cpu_32(ipv4_hdr->dst_addr),
+ &next_hop);
+ next_hop = (rc == 0) ? next_hop : drop_nh;
+
+ rte_node_mbuf_priv1(mbuf0)->nh = next_hop & 0xFFFF;
+ next0 = (next_hop >> 16);
+
+ if (unlikely(next_index ^ next0)) {
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ from += last_spec;
+ to_next += last_spec;
+ held += last_spec;
+ last_spec = 0;
+
+ rte_node_enqueue_x1(graph, node,
+ next0, from[0]);
+ from += 1;
+ } else {
+ last_spec += 1;
+ }
+ }
+
+ /* !!! Home run !!! */
+ if (likely(last_spec == nb_objs)) {
+ rte_node_next_stream_move(graph, node, next_index);
+ return nb_objs;
+ }
+
+ held += last_spec;
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
+ rte_node_next_stream_put(graph, node, next_index, held);
+
+ return nb_objs;
+}
+#else
+static uint16_t
+ip4_lookup_node_process(struct rte_graph *graph, struct rte_node *node,
+ void **objs, uint16_t nb_objs)
+{
+ RTE_SET_USED(graph);
+ RTE_SET_USED(node);
+ RTE_SET_USED(objs);
+ RTE_SET_USED(nb_objs);
+ return nb_objs;
+}
+#endif
+
+int
+rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop,
+ enum ip4_lookup_next_nodes next_node)
+{
+ char abuf[INET6_ADDRSTRLEN];
+ struct in_addr in;
+ uint8_t socket;
+ uint32_t val;
+ int ret;
+
+ in.s_addr = htonl(ip);
+ inet_ntop(AF_INET, &in, abuf, sizeof(abuf));
+ /* Embedded next node id in next hop */
+ val = (next_node << 16) | next_hop;
+ node_dbg("ip4_lookup", "LPM: Adding route %s / %d nh (0x%x)",
+ abuf, depth, val);
+
+ for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++) {
+ if (!ip4_lookup_nm.lpm_tbl[socket])
+ continue;
+
+ ret = rte_lpm_add(ip4_lookup_nm.lpm_tbl[socket], ip, depth, val);
+
+ if (ret < 0)
+ node_err("ip4_lookup", "Unable to add entry %s / %d nh (%x) "
+ "to LPM table on sock %d, rc=%d\n",
+ abuf, depth, val, socket, ret);
+ }
+
+ return 0;
+}
+
+static int
+setup_lpm(struct ip4_lookup_node_main *nm, int socket)
+{
+ struct rte_lpm_config config_ipv4;
+ char s[64];
+
+ /* One LPM table per socket */
+ if (nm->lpm_tbl[socket])
+ return 0;
+
+ /* create the LPM table */
+ config_ipv4.max_rules = IPV4_L3FWD_LPM_MAX_RULES;
+ config_ipv4.number_tbl8s = IPV4_L3FWD_LPM_NUMBER_TBL8S;
+ config_ipv4.flags = 0;
+ snprintf(s, sizeof(s), "IPV4_L3FWD_LPM_%d", socket);
+ nm->lpm_tbl[socket] = rte_lpm_create(s, socket, &config_ipv4);
+ if (nm->lpm_tbl[socket] == NULL)
+ rte_panic("ip4_lookup: Unable to create LPM table on socket %d\n",
+ socket);
+
+ return 0;
+}
+
+static int
+ip4_lookup_node_init(const struct rte_graph *graph, struct rte_node *node)
+{
+ static uint8_t init_once;
+ uint16_t socket, lcore_id;
+ struct rte_lpm **lpm_p = (struct rte_lpm **)&node->ctx;
+ int rc;
+
+
+ if (!init_once) {
+ /* Setup LPM tables for all sockets */
+ RTE_LCORE_FOREACH(lcore_id) {
+ socket = rte_lcore_to_socket_id(lcore_id);
+ rc = setup_lpm(&ip4_lookup_nm, socket);
+ if (rc)
+ node_err("ip4_lookup",
+ "Failed to setup lpm tbl for sock %u, rc=%d",
+ socket, rc);
+ }
+ init_once = 1;
+ }
+
+ *lpm_p = ip4_lookup_nm.lpm_tbl[graph->socket];
+ node_dbg("ip4_lookup", "Initialized ip4_lookup node");
+ return 0;
+}
+
+static struct rte_node_register ip4_lookup_node = {
+ .process = ip4_lookup_node_process,
+ .name = "ip4_lookup",
+
+ .init = ip4_lookup_node_init,
+
+ .nb_edges = IP4_LOOKUP_NEXT_MAX,
+ .next_nodes = {
+ [IP4_LOOKUP_NEXT_REWRITE] = "ip4_rewrite",
+ [IP4_LOOKUP_NEXT_PKT_DROP] = "pkt_drop",
+ },
+};
+
+RTE_NODE_REGISTER(ip4_lookup_node);
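Both lookup variants above follow the same speculation scheme: the node
reserves a stream for the most likely next edge (ip4_rewrite) up front,
counts consecutive packets that agree with that guess in last_spec, and only
falls back to per-packet rte_node_enqueue_x1() when an LPM result disagrees.
If every packet in the burst agrees (last_spec == nb_objs), the "home run"
path hands the whole stream over with a pointer swap via
rte_node_next_stream_move(), so no per-packet copy happens at all. The
branch-free divergence test applied to each group of four is worth
isolating; a standalone sketch (the helper name is hypothetical):

#include <stdint.h>

/*
 * Returns nonzero iff any of the four resolved edges differs from the
 * speculated edge: XOR yields 0 only on an exact match, and OR-ing the
 * four results lets one branch cover the whole group.
 */
static inline uint16_t
spec_mismatch(uint16_t spec, uint16_t n0, uint16_t n1,
	      uint16_t n2, uint16_t n3)
{
	return (spec ^ n0) | (spec ^ n1) | (spec ^ n2) | (spec ^ n3);
}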
diff --git a/lib/librte_node/ip4_lookup_priv.h b/lib/librte_node/ip4_lookup_priv.h
new file mode 100644
index 000000000..898eba301
--- /dev/null
+++ b/lib/librte_node/ip4_lookup_priv.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_IP4_LOOKUP_PRIV_H__
+#define __INCLUDE_IP4_LOOKUP_PRIV_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_IP4_LOOKUP_PRIV_H__ */
diff --git a/lib/librte_node/ip4_rewrite.c b/lib/librte_node/ip4_rewrite.c
new file mode 100644
index 000000000..f0f1d599e
--- /dev/null
+++ b/lib/librte_node/ip4_rewrite.c
@@ -0,0 +1,340 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ip.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+#include <rte_vect.h>
+
+#include <rte_node_ip4_api.h>
+#include "ip4_rewrite_priv.h"
+#include "node_private.h"
+
+static struct ip4_rewrite_node_main *ip4_rewrite_nm;
+
+static uint16_t
+ip4_rewrite_node_process(struct rte_graph *graph, struct rte_node *node,
+ void **objs, uint16_t nb_objs)
+{
+ struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3, **pkts;
+ struct ip4_rewrite_nh_header *nh = ip4_rewrite_nm->nh;
+ uint16_t next0, next1, next2, next3, next_index;
+ struct rte_ipv4_hdr *ip0, *ip1, *ip2, *ip3;
+ uint16_t n_left_from, held = 0, last_spec = 0;
+ void *d0, *d1, *d2, *d3;
+ void **to_next, **from;
+ rte_xmm_t priv01;
+ rte_xmm_t priv23;
+ int i;
+
+ /* Speculative next as last next */
+ next_index = *(uint16_t *)node->ctx;
+ rte_prefetch0(nh);
+
+ pkts = (struct rte_mbuf **)objs;
+ from = objs;
+ n_left_from = nb_objs;
+
+ for (i = 0; i < 4 && i < n_left_from; i++)
+ rte_prefetch0(pkts[i]);
+
+ /* Get stream for the speculated next node */
+ to_next = rte_node_next_stream_get(graph, node,
+ next_index, nb_objs);
+ /* Update ethernet header of pkts */
+ while (n_left_from >= 4) {
+ if (likely(n_left_from >= 8)) {
+ /* Prefetch only next-mbuf struct and priv area.
+ * Data need not be prefetched as we only write.
+ */
+ rte_prefetch0(pkts[4]);
+ rte_prefetch0(pkts[5]);
+ rte_prefetch0(pkts[6]);
+ rte_prefetch0(pkts[7]);
+ }
+
+ mbuf0 = pkts[0];
+ mbuf1 = pkts[1];
+ mbuf2 = pkts[2];
+ mbuf3 = pkts[3];
+
+ pkts += 4;
+ n_left_from -= 4;
+ priv01.u64[0] = rte_node_mbuf_priv1(mbuf0)->u;
+ priv01.u64[1] = rte_node_mbuf_priv1(mbuf1)->u;
+ priv23.u64[0] = rte_node_mbuf_priv1(mbuf2)->u;
+ priv23.u64[1] = rte_node_mbuf_priv1(mbuf3)->u;
+
+ priv01.u16[2]++;
+ priv01.u16[6]++;
+ priv23.u16[2]++;
+ priv23.u16[6]++;
+
+ priv01.u16[1]--;
+ priv01.u16[5]--;
+ priv23.u16[1]--;
+ priv23.u16[5]--;
+
+ /* Update ttl, rewrite ethernet hdr on mbuf0 */
+ d0 = rte_pktmbuf_mtod(mbuf0, void *);
+ rte_memcpy(d0,
+ nh[priv01.u16[0]].rewrite_data,
+ nh[priv01.u16[0]].rewrite_len);
+
+ next0 = nh[priv01.u16[0]].tx_node;
+ ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
+ sizeof(struct rte_ether_hdr));
+ ip0->time_to_live = priv01.u16[1];
+ ip0->hdr_checksum = priv01.u16[2];
+
+ /* Update ttl, rewrite ethernet hdr on mbuf1 */
+ d1 = rte_pktmbuf_mtod(mbuf1, void *);
+ rte_memcpy(d1,
+ nh[priv01.u16[4]].rewrite_data,
+ nh[priv01.u16[4]].rewrite_len);
+
+ next1 = nh[priv01.u16[4]].tx_node;
+ ip1 = (struct rte_ipv4_hdr *)((uint8_t *)d1 +
+ sizeof(struct rte_ether_hdr));
+ ip1->time_to_live = priv01.u16[5];
+ ip1->hdr_checksum = priv01.u16[6];
+
+ /* Update ttl, rewrite ethernet hdr on mbuf2 */
+ d2 = rte_pktmbuf_mtod(mbuf2, void *);
+ rte_memcpy(d2,
+ nh[priv23.u16[0]].rewrite_data,
+ nh[priv23.u16[0]].rewrite_len);
+ next2 = nh[priv23.u16[0]].tx_node;
+ ip2 = (struct rte_ipv4_hdr *)((uint8_t *)d2 +
+ sizeof(struct rte_ether_hdr));
+ ip2->time_to_live = priv23.u16[1];
+ ip2->hdr_checksum = priv23.u16[2];
+
+ /* Update ttl, rewrite ethernet hdr on mbuf3 */
+ d3 = rte_pktmbuf_mtod(mbuf3, void *);
+ rte_memcpy(d3,
+ nh[priv23.u16[4]].rewrite_data,
+ nh[priv23.u16[4]].rewrite_len);
+
+ next3 = nh[priv23.u16[4]].tx_node;
+ ip3 = (struct rte_ipv4_hdr *)((uint8_t *)d3 +
+ sizeof(struct rte_ether_hdr));
+ ip3->time_to_live = priv23.u16[5];
+ ip3->hdr_checksum = priv23.u16[6];
+
+ /* Enqueue four to next node */
+ /* TODO: Do we need a macro for this ?*/
+ rte_edge_t fix_spec = (next_index ^ next0) |
+ (next_index ^ next1) | (next_index ^ next2) |
+ (next_index ^ next3);
+
+ if (unlikely(fix_spec)) {
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ from += last_spec;
+ to_next += last_spec;
+ held += last_spec;
+ last_spec = 0;
+
+ /* next0 */
+ if (next_index == next0) {
+ to_next[0] = from[0];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next0, from[0]);
+ }
+
+ /* next1 */
+ if (next_index == next1) {
+ to_next[0] = from[1];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next1, from[1]);
+ }
+
+ /* next2 */
+ if (next_index == next2) {
+ to_next[0] = from[2];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next2, from[2]);
+ }
+
+ /* next3 */
+ if (next_index == next3) {
+ to_next[0] = from[3];
+ to_next++;
+ held++;
+ } else {
+ rte_node_enqueue_x1(graph, node,
+ next3, from[3]);
+ }
+
+ from += 4;
+
+ /* Change speculation if last two are same */
+ if ((next_index != next3) &&
+ (next2 == next3)) {
+ /* Put the current speculated node */
+ rte_node_next_stream_put(graph,
+ node, next_index,
+ held);
+ held = 0;
+
+ /* Get next speculated stream */
+ next_index = next3;
+ to_next = rte_node_next_stream_get(
+ graph, node,
+ next_index, nb_objs);
+ }
+ } else {
+ last_spec += 4;
+ }
+ }
+
+ while (n_left_from > 0) {
+ uint16_t chksum;
+ mbuf0 = pkts[0];
+
+ pkts += 1;
+ n_left_from -= 1;
+
+ d0 = rte_pktmbuf_mtod(mbuf0, void *);
+ rte_memcpy(d0,
+ nh[rte_node_mbuf_priv1(mbuf0)->nh].rewrite_data,
+ nh[rte_node_mbuf_priv1(mbuf0)->nh].rewrite_len);
+
+ next0 = nh[rte_node_mbuf_priv1(mbuf0)->nh].tx_node;
+ ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
+ sizeof(struct rte_ether_hdr));
+ chksum = rte_node_mbuf_priv1(mbuf0)->cksum +
+ rte_cpu_to_be_16(0x0100);
+ chksum += chksum >= 0xffff;
+ ip0->hdr_checksum = chksum;
+ ip0->time_to_live = rte_node_mbuf_priv1(mbuf0)->ttl - 1;
+
+ if (unlikely(next_index ^ next0)) {
+ /* Copy things successfully speculated so far */
+ rte_memcpy(to_next, from,
+ last_spec * sizeof(from[0]));
+ from += last_spec;
+ to_next += last_spec;
+ held += last_spec;
+ last_spec = 0;
+
+ rte_node_enqueue_x1(graph, node,
+ next0, from[0]);
+ from += 1;
+ } else {
+ last_spec += 1;
+ }
+ }
+
+ /* !!! Home run !!! */
+ if (likely(last_spec == nb_objs)) {
+ rte_node_next_stream_move(graph, node, next_index);
+ return nb_objs;
+ }
+
+ held += last_spec;
+ rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
+ rte_node_next_stream_put(graph, node, next_index, held);
+ /* Save the last next used */
+ *(uint16_t *)node->ctx = next_index;
+
+ return nb_objs;
+}
+
+static int
+ip4_rewrite_node_init(const struct rte_graph *graph, struct rte_node *node)
+{
+ RTE_SET_USED(graph);
+ RTE_SET_USED(node);
+
+ node_dbg("ip4_rewrite", "Initialized ip4_rewrite node initialized");
+ return 0;
+}
+
+int
+ip4_rewrite_set_next(uint16_t port_id, uint16_t next_index)
+{
+ if (ip4_rewrite_nm == NULL) {
+ ip4_rewrite_nm = rte_zmalloc("ip4_rewrite",
+ sizeof(struct ip4_rewrite_node_main),
+ RTE_CACHE_LINE_SIZE);
+ if (ip4_rewrite_nm == NULL)
+ return -ENOMEM;
+ }
+ ip4_rewrite_nm->next_index[port_id] = next_index;
+
+ return 0;
+}
+
+int
+rte_node_ip4_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
+ uint8_t rewrite_len, uint16_t dst_port)
+{
+ struct ip4_rewrite_nh_header *nh;
+
+ if (next_hop >= RTE_GRAPH_IP4_REWRITE_MAX_NH)
+ return -EINVAL;
+
+ if (rewrite_len > RTE_GRAPH_IP4_REWRITE_MAX_LEN)
+ return -EINVAL;
+
+ if (ip4_rewrite_nm == NULL) {
+ ip4_rewrite_nm = rte_zmalloc("ip4_rewrite",
+ sizeof(struct ip4_rewrite_node_main),
+ RTE_CACHE_LINE_SIZE);
+ if (ip4_rewrite_nm == NULL)
+ return -ENOMEM;
+ }
+
+ /* Reject a dst port that has no tx node edge configured */
+ if (!ip4_rewrite_nm->next_index[dst_port])
+ return -EINVAL;
+
+ /* Update next hop */
+ nh = &ip4_rewrite_nm->nh[next_hop];
+
+ memcpy(nh->rewrite_data, rewrite_data, rewrite_len);
+ nh->tx_node = ip4_rewrite_nm->next_index[dst_port];
+ nh->rewrite_len = rewrite_len;
+ nh->enabled = true;
+
+ return 0;
+}
+
+struct rte_node_register ip4_rewrite_node = {
+ .process = ip4_rewrite_node_process,
+ .name = "ip4_rewrite",
+ /* Default edge, i.e. '0', is pkt_drop */
+ .nb_edges = 1,
+ .next_nodes = {
+ [0] = "pkt_drop",
+ },
+ .init = ip4_rewrite_node_init,
+};
+
+RTE_NODE_REGISTER(ip4_rewrite_node);
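One detail of the scalar tail above deserves a note: the IPv4 header
checksum is updated incrementally rather than recomputed. Decrementing TTL
lowers the 16-bit header word that holds it by 0x0100 in network order, so
adding 0x0100 back into the one's-complement checksum, with the carry folded
around, keeps the header valid (see RFC 1141/1624). The same update as a
standalone helper, with the intermediate widened to 32 bits so the carry
test is well defined (helper name hypothetical):

#include <stdint.h>
#include <rte_byteorder.h>

/* Incrementally fix an IPv4 header checksum after a TTL decrement */
static inline uint16_t
cksum_after_ttl_dec(uint16_t cksum)
{
	uint32_t sum = cksum + rte_cpu_to_be_16(0x0100);

	sum += sum >= 0xffff; /* one's-complement end-around carry */
	return (uint16_t)sum;
}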
diff --git a/lib/librte_node/ip4_rewrite_priv.h b/lib/librte_node/ip4_rewrite_priv.h
new file mode 100644
index 000000000..128148faa
--- /dev/null
+++ b/lib/librte_node/ip4_rewrite_priv.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_IP4_REWRITE_PRIV_H__
+#define __INCLUDE_IP4_REWRITE_PRIV_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_ether.h>
+
+#define RTE_GRAPH_IP4_REWRITE_MAX_NH 64
+#define RTE_GRAPH_IP4_REWRITE_MAX_LEN 56
+
+struct ip4_rewrite_nh_header {
+ uint16_t rewrite_len;
+ uint16_t tx_node;
+ uint16_t enabled;
+ uint16_t rsvd;
+ union {
+ struct {
+ struct rte_ether_addr dst;
+ struct rte_ether_addr src;
+ };
+ uint8_t rewrite_data[RTE_GRAPH_IP4_REWRITE_MAX_LEN];
+ };
+};
+
+struct ip4_rewrite_node_main {
+ struct ip4_rewrite_nh_header nh[RTE_GRAPH_IP4_REWRITE_MAX_NH];
+ /* Next index of each configured port */
+ uint16_t next_index[RTE_MAX_ETHPORTS];
+};
+
+extern struct rte_node_register ip4_rewrite_node;
+
+int ip4_rewrite_set_next(uint16_t port_id, uint16_t next_index);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_IP4_REWRITE_PRIV_H__ */
diff --git a/lib/librte_node/log.c b/lib/librte_node/log.c
new file mode 100644
index 000000000..f035f91e8
--- /dev/null
+++ b/lib/librte_node/log.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include "node_private.h"
+
+int rte_node_logtype;
+
+RTE_INIT(rte_node_init_log)
+{
+ rte_node_logtype = rte_log_register("lib.node");
+ if (rte_node_logtype >= 0)
+ rte_log_set_level(rte_node_logtype, RTE_LOG_INFO);
+}
diff --git a/lib/librte_node/meson.build b/lib/librte_node/meson.build
new file mode 100644
index 000000000..59e11e5b4
--- /dev/null
+++ b/lib/librte_node/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Marvell International Ltd.
+
+sources = files('null.c', 'log.c', 'ethdev_rx.c', 'ethdev_tx.c', 'ip4_lookup.c',
+ 'ip4_rewrite.c', 'pkt_drop.c', 'ethdev_ctrl.c')
+headers = files('rte_node_ip4_api.h', 'rte_node_eth_api.h')
+allow_experimental_apis = true
+deps += ['graph', 'mbuf', 'lpm', 'ethdev', 'mempool', 'cryptodev']
diff --git a/lib/librte_node/node_private.h b/lib/librte_node/node_private.h
new file mode 100644
index 000000000..9f40ff222
--- /dev/null
+++ b/lib/librte_node/node_private.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#ifndef __NODE_PRIVATE_H__
+#define __NODE_PRIVATE_H__
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+#include <rte_log.h>
+
+extern int rte_node_logtype;
+#define NODE_LOG(level, node_name, ...) \
+ rte_log(RTE_LOG_ ## level, rte_node_logtype, \
+ RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__,)\
+ "\n", node_name, __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define node_err(node_name, ...) NODE_LOG(ERR, node_name, __VA_ARGS__)
+#define node_info(node_name, ...) NODE_LOG(INFO, node_name, __VA_ARGS__)
+#define node_dbg(node_name, ...) NODE_LOG(DEBUG, node_name, __VA_ARGS__)
+
+struct rte_node_mbuf_priv1 {
+ union {
+ /* IP4 rewrite */
+ struct {
+ uint16_t nh;
+ uint16_t ttl;
+ uint32_t cksum;
+ };
+
+ uint64_t u;
+ };
+};
+
+struct rte_node_mbuf_priv2 {
+ union {
+ /* Sym crypto */
+ struct {
+ struct rte_crypto_op op;
+ };
+ };
+} __rte_cache_aligned;
+
+#define RTE_NODE_MBUF_PRIV2_SIZE sizeof(struct rte_node_mbuf_priv2)
+
+static __rte_always_inline struct rte_node_mbuf_priv1 *
+rte_node_mbuf_priv1(struct rte_mbuf *m)
+{
+ return (struct rte_node_mbuf_priv1 *)&m->udata64;
+}
+
+static __rte_always_inline struct rte_node_mbuf_priv2 *
+rte_node_mbuf_priv2(struct rte_mbuf *m)
+{
+ return (struct rte_node_mbuf_priv2 *)rte_mbuf_to_priv(m);
+}
+
+#endif /* __NODE_PRIVATE_H__ */
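node_private.h above also fixes the per-packet contract between ip4_lookup
and ip4_rewrite: lookup packs the resolved next hop, TTL and checksum into
the mbuf's udata64 through rte_node_mbuf_priv1(), and rewrite reads them
back (as a single 64-bit load in the vector path) without touching packet
data again. A sketch of the producer side, assuming the structures above
(the helper name is hypothetical):

#include <rte_ip.h>
#include <rte_mbuf.h>
#include "node_private.h"

/* Stash lookup results for the downstream rewrite node */
static inline void
stash_ip4_meta(struct rte_mbuf *m, uint16_t nh,
	       const struct rte_ipv4_hdr *hdr)
{
	struct rte_node_mbuf_priv1 *priv = rte_node_mbuf_priv1(m);

	priv->nh = nh;
	priv->ttl = hdr->time_to_live;
	priv->cksum = hdr->hdr_checksum;
}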
diff --git a/lib/librte_node/null.c b/lib/librte_node/null.c
new file mode 100644
index 000000000..5359f958f
--- /dev/null
+++ b/lib/librte_node/null.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_graph.h>
+
+static uint16_t
+null(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ RTE_SET_USED(node);
+ RTE_SET_USED(objs);
+ RTE_SET_USED(graph);
+
+ return nb_objs;
+}
+
+static struct rte_node_register null_node = {
+ .name = "null",
+ .process = null,
+};
+
+RTE_NODE_REGISTER(null_node);
diff --git a/lib/librte_node/pkt_drop.c b/lib/librte_node/pkt_drop.c
new file mode 100644
index 000000000..643af6d75
--- /dev/null
+++ b/lib/librte_node/pkt_drop.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <rte_debug.h>
+#include <rte_mbuf.h>
+#include <rte_graph.h>
+
+static uint16_t
+pkt_drop_process(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ RTE_SET_USED(node);
+ RTE_SET_USED(graph);
+
+ rte_pktmbuf_free_bulk((struct rte_mbuf **)objs, nb_objs);
+
+ return nb_objs;
+}
+
+static struct rte_node_register pkt_drop_node = {
+ .process = pkt_drop_process,
+ .name = "pkt_drop",
+};
+
+RTE_NODE_REGISTER(pkt_drop_node);
diff --git a/lib/librte_node/rte_node_eth_api.h b/lib/librte_node/rte_node_eth_api.h
new file mode 100644
index 000000000..80c69fa66
--- /dev/null
+++ b/lib/librte_node/rte_node_eth_api.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_RTE_NODE_ETH_API_H__
+#define __INCLUDE_RTE_NODE_ETH_API_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdbool.h>
+#include <rte_common.h>
+#include <rte_mempool.h>
+#include <rte_graph.h>
+
+struct rte_node_ethdev_config {
+ uint16_t port_id;
+ uint16_t num_rx_queues;
+ uint16_t num_tx_queues;
+ struct rte_mempool **mp;
+ uint16_t mp_count;
+};
+
+__rte_experimental
+int rte_node_eth_config(struct rte_node_ethdev_config *cfg,
+ uint16_t cnt, uint16_t nb_graphs);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_NODE_ETH_API_H__ */
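rte_node_eth_config() is the control-path entry point (implemented in
ethdev_ctrl.c, which is not part of this hunk) that clones the ethdev_rx and
ethdev_tx nodes for each configured port/queue pair. A hedged usage sketch,
assuming the ethdev itself is already configured and started and that the
function follows the usual DPDK negative-on-error convention:

#include <rte_mempool.h>
#include <rte_node_eth_api.h>

/* Attach Rx/Tx nodes to port 0 with one queue pair per direction */
static int
setup_eth_nodes(struct rte_mempool *mp, uint16_t nb_graphs)
{
	struct rte_node_ethdev_config cfg = {
		.port_id = 0,
		.num_rx_queues = 1,
		.num_tx_queues = 1,
		.mp = &mp,
		.mp_count = 1,
	};

	return rte_node_eth_config(&cfg, 1, nb_graphs);
}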
diff --git a/lib/librte_node/rte_node_ip4_api.h b/lib/librte_node/rte_node_ip4_api.h
new file mode 100644
index 000000000..7cbb018c2
--- /dev/null
+++ b/lib/librte_node/rte_node_ip4_api.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#ifndef __INCLUDE_RTE_NODE_IP4_API_H__
+#define __INCLUDE_RTE_NODE_IP4_API_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+
+/* IP4_LOOKUP node defines */
+enum ip4_lookup_next_nodes {
+ IP4_LOOKUP_NEXT_REWRITE,
+ IP4_LOOKUP_NEXT_PKT_DROP,
+ IP4_LOOKUP_NEXT_MAX,
+};
+
+__rte_experimental
+int rte_node_ip4_route_add(uint32_t ip, uint8_t depth,
+ uint16_t next_hop,
+ enum ip4_lookup_next_nodes next_node);
+
+__rte_experimental
+int rte_node_ip4_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
+ uint8_t rewrite_len, uint16_t dst_port);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_NODE_IP4_API_H__ */
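Together these two calls feed the fast path: rte_node_ip4_route_add()
programs the per-socket LPM with the next-hop index and next-node id packed
into the user value, and rte_node_ip4_rewrite_add() installs the Ethernet
rewrite string for that next hop. A hedged sketch, assuming
rte_node_eth_config() has already mapped dst_port 0 to an ethdev_tx edge;
the addresses and MACs below are placeholders:

#include <rte_ip.h>
#include <rte_node_ip4_api.h>

static int
add_example_route(void)
{
	/* 14-byte Ethernet header: dst MAC, src MAC, IPv4 ethertype */
	uint8_t rewrite[] = {
		0x02, 0x00, 0x00, 0x00, 0x00, 0x02,
		0x02, 0x00, 0x00, 0x00, 0x00, 0x01,
		0x08, 0x00,
	};
	int rc;

	/* 10.0.2.0/24 -> next hop 2, resolved packets go to ip4_rewrite */
	rc = rte_node_ip4_route_add(RTE_IPV4(10, 0, 2, 0), 24, 2,
				    IP4_LOOKUP_NEXT_REWRITE);
	if (rc < 0)
		return rc;

	/* dst_port 0 selects which ethdev_tx edge the packet leaves on */
	return rte_node_ip4_rewrite_add(2, rewrite, sizeof(rewrite), 0);
}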
diff --git a/lib/librte_node/rte_node_version.map b/lib/librte_node/rte_node_version.map
new file mode 100644
index 000000000..a799b0d38
--- /dev/null
+++ b/lib/librte_node/rte_node_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_node_eth_config;
+ rte_node_ip4_route_add;
+ rte_node_ip4_rewrite_add;
+ rte_node_logtype;
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 4089ce0c3..60d7e5560 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -30,7 +30,7 @@ libraries = [
# add pkt framework libs which use other libs from above
'port', 'table', 'pipeline',
# flow_classify lib depends on pkt framework table lib
- 'flow_classify', 'bpf', 'graph', 'telemetry']
+ 'flow_classify', 'bpf', 'graph', 'node', 'telemetry']
if is_windows
libraries = ['kvargs','eal'] # only supported libraries for windows
@@ -182,6 +182,9 @@ foreach l:libraries
dpdk_libraries = [shared_lib] + dpdk_libraries
dpdk_static_libraries = [static_lib] + dpdk_static_libraries
+ if libname == 'rte_node'
+ dpdk_graph_nodes = [static_lib]
+ endif
endif # sources.length() > 0
set_variable('shared_rte_' + name, shared_dep)
diff --git a/meson.build b/meson.build
index b7ae9c8d9..811c96421 100644
--- a/meson.build
+++ b/meson.build
@@ -16,6 +16,7 @@ cc = meson.get_compiler('c')
dpdk_conf = configuration_data()
dpdk_libraries = []
dpdk_static_libraries = []
+dpdk_graph_nodes = []
dpdk_driver_classes = []
dpdk_drivers = []
dpdk_extra_ldflags = []
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index e169d7a7b..72e8f81cc 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -99,6 +99,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
_LDLIBS-$(CONFIG_RTE_LIBRTE_RCU) += -lrte_rcu
_LDLIBS-$(CONFIG_RTE_LIBRTE_GRAPH) += -lrte_graph
+_LDLIBS-$(CONFIG_RTE_LIBRTE_NODE) += -lrte_node
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
--
2.24.1
^ permalink raw reply [flat|nested] 31+ messages in thread
* [dpdk-dev] [RFC PATCH 3/5] test: add graph functional tests
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 2/5] node: add packet processing nodes jerinj
@ 2020-01-31 17:01 ` jerinj
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 4/5] test: add graph performance test cases jerinj
` (3 subsequent siblings)
6 siblings, 0 replies; 31+ messages in thread
From: jerinj @ 2020-01-31 17:01 UTC (permalink / raw)
To: dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard
From: Kiran Kumar K <kirankumark@marvell.com>
Example command to execute the test:
echo "graph_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
app/test/Makefile | 4 +
app/test/meson.build | 1 +
app/test/test_graph.c | 820 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 825 insertions(+)
create mode 100644 app/test/test_graph.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 57930c00b..e1dbe297e 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -220,6 +220,10 @@ SRCS-y += test_event_timer_adapter.c
SRCS-y += test_event_crypto_adapter.c
endif
+ifeq ($(CONFIG_RTE_LIBRTE_GRAPH), y)
+SRCS-y += test_graph.c
+endif
+
ifeq ($(CONFIG_RTE_LIBRTE_RAWDEV),y)
SRCS-y += test_rawdev.c
endif
diff --git a/app/test/meson.build b/app/test/meson.build
index 7d761c8fa..d5d0c2173 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -54,6 +54,7 @@ test_sources = files('commands.c',
'test_fib6_perf.c',
'test_func_reentrancy.c',
'test_flow_classify.c',
+ 'test_graph.c',
'test_hash.c',
'test_hash_functions.c',
'test_hash_multiwriter.c',
diff --git a/app/test/test_graph.c b/app/test/test_graph.c
new file mode 100644
index 000000000..16a1373e8
--- /dev/null
+++ b/app/test/test_graph.c
@@ -0,0 +1,820 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#include <stdio.h>
+#include <string.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <unistd.h>
+#include <assert.h>
+#include <stdlib.h>
+
+#include <rte_errno.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+#include <rte_mbuf.h>
+
+#include "test.h"
+
+uint16_t
+test_node_worker_source(struct rte_graph *graph, struct rte_node *node,
+ void **objs, uint16_t nb_objs);
+
+uint16_t
+test_node0_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs);
+
+uint16_t
+test_node1_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs);
+
+uint16_t
+test_node2_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs);
+
+uint16_t
+test_node3_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs);
+
+#define MBUFF_SIZE 512
+#define MAX_NODES 4
+
+static struct rte_mbuf mbuf[MAX_NODES + 1][MBUFF_SIZE];
+static void *mbuf_p[MAX_NODES + 1][MBUFF_SIZE];
+static rte_graph_t graph_id;
+static uint64_t obj_stats[MAX_NODES + 1];
+static uint64_t fn_calls[MAX_NODES + 1];
+
+const char *node_patterns[] = {
+ "test_node_source1",
+ "test_node00",
+ "test_node00-test_node11",
+ "test_node00-test_node22",
+ "test_node00-test_node33",
+};
+
+const char *node_names[] = {
+ "test_node00",
+ "test_node00-test_node11",
+ "test_node00-test_node22",
+ "test_node00-test_node33",
+};
+
+struct test_node_register {
+#define NODE_NAMESIZE 64
+ char name[NODE_NAMESIZE];
+ rte_node_process_t process;
+ uint16_t nb_edges;
+ const char *next_nodes[MAX_NODES];
+};
+
+typedef struct {
+ uint32_t idx;
+ struct test_node_register node;
+} test_node_t;
+
+typedef struct {
+ test_node_t test_node[MAX_NODES];
+} test_main_t;
+
+static test_main_t test_main = {
+ .test_node = {
+ {
+ .node = {
+ .name = "test_node00",
+ .process = test_node0_worker,
+ .nb_edges = 2,
+ .next_nodes = {"test_node00-test_node11",
+ "test_node00-test_node22"},
+ },
+ },
+ {
+ .node = {
+ .name = "test_node11",
+ .process = test_node1_worker,
+ .nb_edges = 1,
+ .next_nodes = {"test_node00-test_node22"},
+ },
+ },
+ {
+ .node = {
+ .name = "test_node22",
+ .process = test_node2_worker,
+ .nb_edges = 1,
+ .next_nodes = {"test_node00-test_node33"},
+ },
+ },
+ {
+ .node = {
+ .name = "test_node33",
+ .process = test_node3_worker,
+ .nb_edges = 1,
+ .next_nodes = {"test_node00"},
+ },
+ },
+ },
+};
+
+static int
+node_init(const struct rte_graph *graph, struct rte_node *node)
+{
+ RTE_SET_USED(graph);
+
+ *(uint32_t *)node->ctx = node->id;
+
+ return 0;
+}
+
+static struct rte_node_register test_node_source = {
+ .name = "test_node_source1",
+ .process = test_node_worker_source,
+ .flags = RTE_NODE_SOURCE_F,
+ .nb_edges = 2,
+ .init = node_init,
+ .next_nodes = {"test_node00","test_node00-test_node11"},
+};
+RTE_NODE_REGISTER(test_node_source);
+
+static struct rte_node_register test_node0 = {
+ .name = "test_node00",
+ .process = test_node0_worker,
+ .init = node_init,
+};
+RTE_NODE_REGISTER(test_node0);
+
+uint16_t
+test_node_worker_source(struct rte_graph *graph, struct rte_node *node,
+ void **objs, uint16_t nb_objs)
+{
+ uint32_t obj_node0 = rand() % 100, obj_node1;
+ test_main_t *tm = &test_main;
+ struct rte_mbuf *data;
+ void **next_stream;
+ rte_node_t next;
+ uint32_t i;
+
+ RTE_SET_USED(objs);
+
+ nb_objs = RTE_GRAPH_BURST_SIZE;
+ obj_node0 = nb_objs * obj_node0 * 0.01;
+ next = 0;
+ next_stream = rte_node_next_stream_get(graph, node, next, obj_node0);
+ for (i = 0; i < obj_node0; i++) {
+ data = &mbuf[0][i];
+ data->udata64 = ((uint64_t)tm->test_node[0].idx << 32) | i;
+ if ((i + 1) == obj_node0)
+ data->udata64 |= (1 << 16);
+ next_stream[i] = &mbuf[0][i];
+ }
+ rte_node_next_stream_put(graph, node, next, obj_node0);
+
+ obj_node1 = nb_objs - obj_node0;
+ next = 1;
+ next_stream = rte_node_next_stream_get(graph, node, next, obj_node1);
+ for (i = 0; i < obj_node1; i++) {
+ data = &mbuf[0][obj_node0 + i];
+ data->udata64 = ((uint64_t)tm->test_node[1].idx << 32) | i;
+ if ((i + 1) == obj_node1)
+ data->udata64 |= (1 << 16);
+ next_stream[i] = &mbuf[0][obj_node0 + i];
+ }
+ rte_node_next_stream_put(graph, node, next, obj_node1);
+
+ obj_stats[0] += nb_objs;
+ fn_calls[0] += 1;
+ return nb_objs;
+}
+
+uint16_t
+test_node0_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ test_main_t *tm = &test_main;
+
+ if (*(uint32_t *)node->ctx == test_node0.id) {
+ uint32_t obj_node0 = rand() % 100, obj_node1;
+ struct rte_mbuf *data;
+ uint8_t second_pass = 0;
+ uint32_t count = 0;
+ uint32_t i;
+
+ obj_stats[1] += nb_objs;
+ fn_calls[1] += 1;
+
+ for (i = 0; i < nb_objs; i++) {
+ data = (struct rte_mbuf *)objs[i];
+ if ((data->udata64 >> 32) != tm->test_node[0].idx) {
+ printf("Data idx mismatch at node 0, expected"
+ " = %u, got = %u\n", tm->test_node[0].idx,
+ (uint32_t)(data->udata64 >> 32));
+ goto end;
+ }
+
+ if ((data->udata64 & 0xffff) != (i - count)) {
+ printf("Expected buff count miss match at "
+ "node 0\n");
+ goto end;
+ }
+
+ if (data->udata64 & (0x1 << 16))
+ count = i + 1;
+ if (data->udata64 & (0x1 << 17))
+ second_pass = 1;
+ }
+
+ if (count != i) {
+ printf("Count missmatch at node 0\n");
+ goto end;
+ }
+
+ obj_node0 = nb_objs * obj_node0 * 0.01;
+ for (i = 0; i < obj_node0; i++) {
+ data = &mbuf[1][i];
+ data->udata64 = ((uint64_t)tm->test_node[1].idx << 32)
+ | i;
+ if ((i + 1) == obj_node0)
+ data->udata64 |= (1 << 16);
+ if (second_pass)
+ data->udata64 |= (1 << 17);
+ }
+ rte_node_enqueue(graph, node, 0, (void **)&mbuf_p[1][0],
+ obj_node0);
+
+ obj_node1 = nb_objs - obj_node0;
+ for (i = 0; i < obj_node1; i++) {
+ data = &mbuf[1][obj_node0 + i];
+ data->udata64 = ((uint64_t)tm->test_node[2].idx << 32)
+ | i;
+ if ((i + 1) == obj_node1)
+ data->udata64 |= (1 << 16);
+ if (second_pass)
+ data->udata64 |= (1 << 17);
+ }
+ rte_node_enqueue(graph, node, 1,
+ (void **)&mbuf_p[1][obj_node0], obj_node1);
+
+ } else if (*(uint32_t *)node->ctx == tm->test_node[1].idx)
+ test_node1_worker(graph, node, objs, nb_objs);
+ else if (*(uint32_t *)node->ctx == tm->test_node[2].idx)
+ test_node2_worker(graph, node, objs, nb_objs);
+ else if (*(uint32_t *)node->ctx == tm->test_node[3].idx)
+ test_node3_worker(graph, node, objs, nb_objs);
+ else
+ assert(0);
+end:
+ return nb_objs;
+}
+
+uint16_t
+test_node1_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ test_main_t *tm = &test_main;
+ uint8_t second_pass = 0;
+ uint32_t obj_node0 = 0;
+ struct rte_mbuf *data;
+ uint32_t count = 0;
+ uint32_t i;
+
+ obj_stats[2] += nb_objs;
+ fn_calls[2] += 1;
+ for (i = 0; i < nb_objs; i++) {
+ data = (struct rte_mbuf *)objs[i];
+ if ((data->udata64 >> 32) != tm->test_node[1].idx) {
+ printf("Data idx mismatch at node 1, expected = %u"
+ ", got = %u\n", tm->test_node[1].idx,
+ (uint32_t)(data->udata64 >> 32));
+ goto end;
+ }
+
+ if ((data->udata64 & 0xffff) != (i - count)) {
+ printf("Expected buff count miss match at node 1\n");
+ goto end;
+ }
+
+ if (data->udata64 & (0x1 << 16))
+ count = i + 1;
+ if (data->udata64 & (0x1 << 17))
+ second_pass = 1;
+ }
+
+ if (count != i) {
+ printf("Count missmatch at node 1\n");
+ goto end;
+ }
+
+ obj_node0 = nb_objs;
+ for (i = 0; i < obj_node0; i++) {
+ data = &mbuf[2][i];
+ data->udata64 = ((uint64_t)tm->test_node[2].idx << 32) | i;
+ if ((i + 1) == obj_node0)
+ data->udata64 |= (1 << 16);
+ if (second_pass)
+ data->udata64 |= (1 << 17);
+ }
+ rte_node_enqueue(graph, node, 0,
+ (void **)&mbuf_p[2][0], obj_node0);
+end:
+ return nb_objs;
+}
+
+uint16_t
+test_node2_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ test_main_t *tm = &test_main;
+ uint8_t second_pass = 0;
+ struct rte_mbuf *data;
+ uint32_t count = 0;
+ uint32_t obj_node0;
+ uint32_t i;
+
+ obj_stats[3] += nb_objs;
+ fn_calls[3] += 1;
+ for (i = 0; i < nb_objs; i++) {
+ data = (struct rte_mbuf *)objs[i];
+ if ((data->udata64 >> 32) != tm->test_node[2].idx) {
+ printf("Data idx mismatch at node 2, expected = %u"
+ ", got = %u\n", tm->test_node[2].idx,
+ (uint32_t)(data->udata64 >> 32));
+ goto end;
+ }
+
+ if ((data->udata64 & 0xffff) != (i - count)) {
+ printf("Expected buff count miss match at node 2\n");
+ goto end;
+ }
+
+ if (data->udata64 & (0x1 << 16))
+ count = i + 1;
+ if (data->udata64 & (0x1 << 17))
+ second_pass = 1;
+ }
+
+ if (count != i) {
+ printf("Count missmatch at node 2\n");
+ goto end;
+ }
+
+ if (!second_pass) {
+ obj_node0 = nb_objs;
+ for (i = 0; i < obj_node0; i++) {
+ data = &mbuf[3][i];
+ data->udata64 = ((uint64_t)tm->test_node[3].idx << 32)
+ | i;
+ if ((i + 1) == obj_node0)
+ data->udata64 |= (1 << 16);
+ }
+ rte_node_enqueue(graph, node, 0, (void **)&mbuf_p[3][0],
+ obj_node0);
+ }
+end:
+ return nb_objs;
+}
+
+uint16_t
+test_node3_worker(struct rte_graph *graph, struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ test_main_t *tm = &test_main;
+ uint8_t second_pass = 0;
+ struct rte_mbuf *data;
+ uint32_t count = 0;
+ uint32_t obj_node0;
+ uint32_t i;
+
+ obj_stats[4] += nb_objs;
+ fn_calls[4] += 1;
+ for (i = 0; i < nb_objs; i++) {
+ data = (struct rte_mbuf *)objs[i];
+ if ((data->udata64 >> 32) != tm->test_node[3].idx) {
+ printf("Data idx mismatch at node 3, expected = %u"
+ ", got = %u\n", tm->test_node[3].idx,
+ (uint32_t)(data->udata64 >> 32));
+ goto end;
+ }
+
+ if ((data->udata64 & 0xffff) != (i - count)) {
+ printf("Expected buff count miss match at node 3\n");
+ goto end;
+ }
+
+ if (data->udata64 & (0x1 << 16))
+ count = i + 1;
+ if (data->udata64 & (0x1 << 17))
+ second_pass = 1;
+ }
+
+ if (count != i) {
+ printf("Count missmatch at node 3\n");
+ goto end;
+ }
+
+ if (second_pass) {
+ printf("Unexpected buffers are at node 3\n");
+ goto end;
+ } else {
+ obj_node0 = nb_objs * 2;
+ for (i = 0; i < obj_node0; i++) {
+ data = &mbuf[4][i];
+ data->udata64 = ((uint64_t)tm->test_node[0].idx << 32)
+ | i;
+ data->udata64 |= (1 << 17);
+ if ((i + 1) == obj_node0)
+ data->udata64 |= (1 << 16);
+ }
+ rte_node_enqueue(graph, node, 0, (void **)&mbuf_p[4][0],
+ obj_node0);
+ }
+end:
+ return nb_objs;
+}
+
+static int
+test_lookup_functions(void)
+{
+ test_main_t *tm = &test_main;
+ int i;
+
+ /* Verify the name with ID */
+ for (i = 1; i < MAX_NODES; i++) {
+ char *name = rte_node_id_to_name(tm->test_node[i].idx);
+ if (strcmp(name, node_names[i]) != 0) {
+ printf("Test node name verify by ID = %d failed "
+ "Expected = %s, got %s\n",
+ i, node_names[i], name);
+ return -1;
+ }
+ }
+
+ /* Verify the ID by name */
+ for (i = 1; i < MAX_NODES; i++) {
+ uint32_t idx = rte_node_from_name(node_names[i]);
+ if (idx != tm->test_node[i].idx) {
+ printf("Test node ID verify by name = %s failed "
+ "Expected = %d, got %d\n",
+ node_names[i], tm->test_node[i].idx,
+ idx);
+ return -1;
+ }
+ }
+
+ /* Verify edge count */
+ for (i = 1; i < MAX_NODES; i++) {
+ uint32_t count = rte_node_edge_count(tm->test_node[i].idx);
+ if (count != tm->test_node[i].node.nb_edges) {
+ printf("Test number of edges for node = %s failed "
+ "Expected = %d, got = %d\n",
+ tm->test_node[i].node.name,
+ tm->test_node[i].node.nb_edges,
+ count);
+ return -1;
+ }
+ }
+
+ /* Verify edge names */
+ for (i = 1; i < MAX_NODES; i++) {
+ uint32_t j, count;
+ char **next_edges;
+
+ count = rte_node_edge_get(tm->test_node[i].idx, NULL);
+ if (count != tm->test_node[i].node.nb_edges * sizeof(char *)) {
+ printf("Test edge get size for node = %s failed,"
+ " expected = %zu, got = %d\n",
+ tm->test_node[i].node.name,
+ tm->test_node[i].node.nb_edges * sizeof(char *),
+ count);
+ return -1;
+ }
+ next_edges = malloc(count);
+ count = rte_node_edge_get(tm->test_node[i].idx,
+ next_edges);
+ if (count != tm->test_node[i].node.nb_edges) {
+ printf("Test number of edges for node = %s failed "
+ "Expected = %d, got %d\n",
+ tm->test_node[i].node.name,
+ tm->test_node[i].node.nb_edges,
+ count);
+ return -1;
+ }
+
+ for (j = 0; j < count; j++) {
+ if (strcmp(next_edges[j],
+ tm->test_node[i].node.next_nodes[j]) != 0) {
+ printf("Edge name miss match, expected = %s"
+ " got = %s\n",
+ tm->test_node[i].node.next_nodes[j],
+ next_edges[j]);
+ return -1;
+ }
+ }
+ free(next_edges);
+ }
+ return 0;
+}
+
+static int
+test_node_clone(void)
+{
+ test_main_t *tm = &test_main;
+ uint32_t node_id, dummy_id;
+ int i;
+
+ node_id = rte_node_from_name("test_node00");
+ tm->test_node[0].idx = node_id;
+
+ /* Clone with same name, should fail */
+ dummy_id = rte_node_clone(node_id, "test_node00");
+ if (!rte_node_is_invalid(dummy_id)) {
+ printf("Got valid id when clone with same name, "
+ " Expecting fail\n");
+ return -1;
+ }
+
+ for (i = 1; i < MAX_NODES; i++) {
+ tm->test_node[i].idx =
+ rte_node_clone(node_id, tm->test_node[i].node.name);
+ if (rte_node_is_invalid(tm->test_node[i].idx)) {
+ printf("Got invalid node id \n");
+ return -1;
+ }
+ }
+
+ /* clone from cloned node should fail */
+ dummy_id = rte_node_clone(tm->test_node[1].idx, "dummy_node");
+ if (!rte_node_is_invalid(dummy_id)) {
+ printf("Got valid node id when cloning from cloned node"
+ " Expected fail\n");
+ return -1;
+ }
+ return 0;
+}
+
+static int
+test_update_edges(void)
+{
+ test_main_t *tm = &test_main;
+ uint32_t node_id;
+ uint16_t count;
+ int i;
+
+ node_id = rte_node_from_name("test_node00");
+ count = rte_node_edge_update(node_id,
+ 0,
+ tm->test_node[0].node.next_nodes,
+ tm->test_node[0].node.nb_edges);
+ if (count != tm->test_node[0].node.nb_edges) {
+ printf("Update edges failed expected: %d"
+ " got = %d\n",
+ tm->test_node[0].node.nb_edges,
+ count);
+ return -1;
+ }
+
+ for (i = 1; i < MAX_NODES; i++) {
+ count = rte_node_edge_update(tm->test_node[i].idx,
+ 0,
+ tm->test_node[i].node.next_nodes,
+ tm->test_node[i].node.nb_edges);
+ if (count != tm->test_node[i].node.nb_edges) {
+ printf("Update edges failed expected: %d"
+ " got = %d\n",
+ tm->test_node[i].node.nb_edges,
+ count);
+ return -1;
+ }
+
+ count = rte_node_edge_shrink(tm->test_node[i].idx,
+ tm->test_node[i].node.nb_edges);
+ if (count != tm->test_node[i].node.nb_edges) {
+ printf("Shrink edges failed\n");
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+test_create_graph(void)
+{
+ const char *node_patterns_dummy[] = {
+ "test_node_source1",
+ "test_node00",
+ "test_node00-test_node11",
+ "test_node00-test_node22",
+ "test_node00-test_node33",
+ "test_node00-dummy_node",
+ };
+ struct rte_graph_param gconf = {
+ .socket_id = SOCKET_ID_ANY,
+ .nb_node_patterns = 6,
+ .node_patterns = node_patterns_dummy,
+ };
+ uint32_t dummy_node_id;
+ uint32_t node_id;
+
+ node_id = rte_node_from_name("test_node00");
+ dummy_node_id = rte_node_clone(node_id, "dummy_node");
+ if (rte_node_is_invalid(dummy_node_id)) {
+ printf("Got invalid node id \n");
+ return -1;
+ }
+ graph_id = rte_graph_create("worker0", &gconf);
+ if (graph_id != RTE_GRAPH_ID_INVALID) {
+ printf("Graph creation success with isolated node"
+ " Expected graph creation fail\n");
+ return -1;
+ }
+ gconf.nb_node_patterns = 5;
+ gconf.node_patterns = node_patterns;
+ graph_id = rte_graph_create("worker0", &gconf);
+ if (graph_id == RTE_GRAPH_ID_INVALID) {
+ printf("Graph creation failed with error = %d\n", rte_errno);
+ return -1;
+ }
+ return 0;
+}
+
+static int
+test_graph_walk(void)
+{
+ struct rte_graph *graph = rte_graph_lookup("worker0");
+ int i;
+
+ if (!graph) {
+ printf("Graph lookup failed\n");
+ return -1;
+ }
+
+ for (i = 0; i < 5; i++)
+ rte_graph_walk(graph);
+
+ return 0;
+}
+
+static int
+test_graph_lookup_functions(void)
+{
+ test_main_t *tm = &test_main;
+ struct rte_node *node;
+ int i;
+
+ for (i = 0; i < MAX_NODES; i++) {
+ node = rte_graph_node_get(graph_id, tm->test_node[i].idx);
+ if (!node) {
+ printf("rte_graph_node_get, failed for node = %d\n",
+ tm->test_node[i].idx);
+ return -1;
+ }
+
+ if (tm->test_node[i].idx != node->id) {
+ printf("Node id didn't match, expected = %d"
+ " got = %d\n", tm->test_node[i].idx, node->id);
+ return -1;
+ }
+
+ if (strncmp(node->name, node_names[i], RTE_NODE_NAMESIZE)) {
+ printf("Node name didn't match, expected = %s"
+ " got %s\n",node_names[i], node->name);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < MAX_NODES; i++) {
+ node = rte_graph_node_get_by_name("worker0", node_names[i]);
+ if (!node) {
+ printf("rte_graph_node_get, failed for node = %d\n",
+ tm->test_node[i].idx);
+ return -1;
+ }
+
+ if (tm->test_node[i].idx != node->id) {
+ printf("Node id didn't match, expected = %d"
+ " got = %d\n", tm->test_node[i].idx, node->id);
+ return -1;
+ }
+
+ if (strncmp(node->name, node_names[i], RTE_NODE_NAMESIZE)) {
+ printf("Node name didn't match, expected = %s"
+ " got %s\n",node_names[i], node->name);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+graph_cluster_stats_cb_t(bool is_first, bool is_last, void *cookie,
+ const struct rte_graph_cluster_node_stats *st)
+{
+ int i;
+
+ RTE_SET_USED(is_first);
+ RTE_SET_USED(is_last);
+ RTE_SET_USED(cookie);
+
+ for (i = 0; i < MAX_NODES + 1; i++) {
+ rte_node_t id = rte_node_from_name(node_patterns[i]);
+ if (id == st->id) {
+ if (obj_stats[i] != st->objs) {
+ printf("Obj count miss match for node = %s"
+ "expected=%"PRId64" , got=%"PRId64"\n",
+ node_patterns[i], obj_stats[i],
+ st->objs);
+ return -1;
+ }
+
+ if (fn_calls[i] != st->calls) {
+ printf("func call miss match for node = %s"
+ "expected= %"PRId64" , got=%"PRId64"\n",
+ node_patterns[i], fn_calls[i],
+ st->calls);
+ return -1;
+ }
+ }
+ }
+ return 0;
+}
+
+static int
+test_print_stats(void)
+{
+ struct rte_graph_cluster_stats_param s_param;
+ struct rte_graph_cluster_stats *stats;
+ const char *pattern = "worker0";
+
+ if (!rte_graph_has_stats_feature())
+ return 0;
+
+ /* Prepare stats object */
+ memset(&s_param, 0, sizeof(s_param));
+ s_param.f = stdout;
+ s_param.socket_id = SOCKET_ID_ANY;
+ s_param.graph_patterns = &pattern;
+ s_param.nb_graph_patterns = 1;
+ s_param.fn = graph_cluster_stats_cb_t;
+
+ stats = rte_graph_cluster_stats_create(&s_param);
+ if (stats == NULL) {
+ printf("Unable to get stats\n");
+ return -1;
+ }
+
+ /* Print stats to stdout and verify counters via the callback */
+ rte_graph_cluster_stats_get(stats, 0);
+
+ rte_graph_cluster_stats_destroy(stats);
+
+ return 0;
+}
+
+static int
+graph_setup(void)
+{
+ int i, j;
+
+ for (i = 0; i <= MAX_NODES; i++) {
+ for (j = 0 ; j < MBUFF_SIZE; j++) {
+ mbuf_p[i][j] = &mbuf[i][j];
+ }
+ }
+
+ if (test_node_clone()) {
+ printf("test_node_clone: fail\n");
+ return -1;
+ }
+ printf("test_node_clone: pass\n");
+ return 0;
+}
+
+static void
+graph_teardown(void)
+{
+ rte_graph_t id;
+
+ id = rte_graph_destroy("worker0");
+ if (id == RTE_GRAPH_ID_INVALID)
+ printf("Graph destroy failed\n");
+}
+
+static struct unit_test_suite graph_testsuite = {
+ .suite_name = "Graph library test suite",
+ .setup = graph_setup,
+ .teardown = graph_teardown,
+ .unit_test_cases = {
+ TEST_CASE(test_update_edges),
+ TEST_CASE(test_lookup_functions),
+ TEST_CASE(test_create_graph),
+ TEST_CASE(test_graph_lookup_functions),
+ TEST_CASE(test_graph_walk),
+ TEST_CASE(test_print_stats),
+ /* TODO: add a test case for rte_graph_cluster_stats_reset() */
+ TEST_CASES_END(), /**< NULL terminate unit test array */
+ }
+};
+
+static int
+graph_autotest_fn(void)
+{
+ return unit_test_suite_runner(&graph_testsuite);
+}
+
+REGISTER_TEST_COMMAND(graph_autotest, graph_autotest_fn);
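test_graph_walk() above drives the graph synchronously from the test runner;
in an application the same loop typically runs on each worker lcore until
the main lcore signals shutdown. A minimal sketch of such a loop (the stop
flag and launch plumbing are hypothetical):

#include <stdbool.h>
#include <rte_graph_worker.h>

static volatile bool graph_loop_done; /* set by the main lcore */

static int
graph_main_loop(void *arg)
{
	struct rte_graph *graph = arg;

	while (!graph_loop_done)
		rte_graph_walk(graph);

	return 0;
}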
--
2.24.1
^ permalink raw reply [flat|nested] 31+ messages in thread
* [dpdk-dev] [RFC PATCH 4/5] test: add graph performance test cases
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
` (2 preceding siblings ...)
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 3/5] test: add graph functional tests jerinj
@ 2020-01-31 17:02 ` jerinj
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 5/5] example/l3fwd_graph: l3fwd using graph architecture jerinj
` (2 subsequent siblings)
6 siblings, 0 replies; 31+ messages in thread
From: jerinj @ 2020-01-31 17:02 UTC (permalink / raw)
To: dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Example command to execute the test:
echo "graph_perf_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 1 +
app/test/test_graph_perf.c | 888 +++++++++++++++++++++++++++++++++++++
3 files changed, 890 insertions(+)
create mode 100644 app/test/test_graph_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e1dbe297e..429c32209 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -222,6 +222,7 @@ endif
ifeq ($(CONFIG_RTE_LIBRTE_GRAPH), y)
SRCS-y += test_graph.c
+SRCS-y += test_graph_perf.c
endif
ifeq ($(CONFIG_RTE_LIBRTE_RAWDEV),y)
diff --git a/app/test/meson.build b/app/test/meson.build
index d5d0c2173..a0f3d3389 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -55,6 +55,7 @@ test_sources = files('commands.c',
'test_func_reentrancy.c',
'test_flow_classify.c',
'test_graph.c',
+ 'test_graph_perf.c',
'test_hash.c',
'test_hash_functions.c',
'test_hash_multiwriter.c',
diff --git a/app/test/test_graph_perf.c b/app/test/test_graph_perf.c
new file mode 100644
index 000000000..364cfd15a
--- /dev/null
+++ b/app/test/test_graph_perf.c
@@ -0,0 +1,888 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+#include <inttypes.h>
+#include <signal.h>
+#include <stdio.h>
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_graph.h>
+#include <rte_graph_worker.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "test.h"
+
+#define TEST_GRAPH_PERF_MZ "graph_perf_data"
+#define TEST_GRAPH_SRC_NAME "test_graph_perf_source"
+#define TEST_GRAPH_SRC_BRST_ONE_NAME "test_graph_perf_source_one"
+#define TEST_GRAPH_WRK_NAME "test_graph_perf_worker"
+#define TEST_GRAPH_SNK_NAME "test_graph_perf_sink"
+
+#define SOURCES(map) (sizeof(map) / sizeof(map[0]))
+#define STAGES(map) (sizeof(map) / sizeof(map[0]))
+#define NODES_PER_STAGE(map) (sizeof(map[0]) / sizeof(map[0][0]))
+#define SINKS(map) (sizeof(map[0]) / sizeof(map[0][0]))
+
+#define MAX_EDGES_PER_NODE 7
+
+struct test_node_data {
+ uint8_t node_id;
+ uint8_t is_sink;
+ uint8_t next_nodes[MAX_EDGES_PER_NODE];
+ uint8_t next_percentage[MAX_EDGES_PER_NODE];
+};
+
+struct test_graph_perf {
+ uint16_t nb_nodes;
+ rte_graph_t graph_id;
+ struct test_node_data *node_data;
+};
+
+struct graph_lcore_data {
+ uint8_t done;
+ rte_graph_t graph_id;
+};
+
+static struct test_node_data *
+graph_get_node_data(struct test_graph_perf *graph_data, rte_node_t id)
+{
+ struct test_node_data *node_data = NULL;
+ int i;
+
+ for (i = 0; i < graph_data->nb_nodes; i++)
+ if (graph_data->node_data[i].node_id == id) {
+ node_data = &graph_data->node_data[i];
+ break;
+ }
+
+ return node_data;
+}
+
+static int
+test_node_ctx_init(const struct rte_graph *graph, struct rte_node *node)
+{
+ struct test_graph_perf *graph_data;
+ struct test_node_data *node_data;
+ const struct rte_memzone *mz;
+ rte_node_t nid = node->id;
+ rte_edge_t edge = 0;
+ int i;
+
+ RTE_SET_USED(graph);
+
+ mz = rte_memzone_lookup(TEST_GRAPH_PERF_MZ);
+ graph_data = mz->addr;
+ node_data = graph_get_node_data(graph_data, nid);
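+ /* ctx layout used by this test: ctx[0] holds the edge count,
+ * ctx[1..7] the edge ids and ctx[9..15] the per-edge enqueue
+ * percentages (up to MAX_EDGES_PER_NODE edges).
+ */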
+ node->ctx[0] = node->nb_edges;
+ for (i = 0; i < node->nb_edges && !node_data->is_sink; i++, edge++) {
+ node->ctx[i + 1] = edge;
+ node->ctx[i + 9] = node_data->next_percentage[i];
+ }
+
+ return 0;
+}
+
+static uint16_t
+test_perf_node_worker_source(struct rte_graph *graph,
+ struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ uint16_t count;
+ int i;
+
+ RTE_SET_USED(objs);
+ RTE_SET_USED(nb_objs);
+
+ for (i = 0; i < node->ctx[0]; i++) {
+ count = (node->ctx[i + 9] * RTE_GRAPH_BURST_SIZE) / 100;
+ rte_node_next_stream_get(graph, node, node->ctx[i + 1], count);
+ rte_node_next_stream_put(graph, node, node->ctx[i + 1], count);
+ }
+
+ return RTE_GRAPH_BURST_SIZE;
+}
+
+static struct rte_node_register test_graph_perf_source = {
+ .name = TEST_GRAPH_SRC_NAME,
+ .process = test_perf_node_worker_source,
+ .flags = RTE_NODE_SOURCE_F,
+ .init = test_node_ctx_init,
+};
+
+RTE_NODE_REGISTER(test_graph_perf_source);
+
+static uint16_t
+test_perf_node_worker_source_burst_one(struct rte_graph *graph,
+ struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ uint16_t count;
+ int i;
+
+ RTE_SET_USED(objs);
+ RTE_SET_USED(nb_objs);
+
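+ /* Note: with integer division by 100, only a 100% edge yields a
+ * count of one here; the burst-one perf tests use 100% maps only.
+ */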
+ for (i = 0; i < node->ctx[0]; i++) {
+ count = (node->ctx[i + 9]) / 100;
+ rte_node_next_stream_get(graph, node, node->ctx[i + 1], count);
+ rte_node_next_stream_put(graph, node, node->ctx[i + 1], count);
+ }
+
+ return 1;
+}
+
+static struct rte_node_register test_graph_perf_source_burst_one = {
+ .name = TEST_GRAPH_SRC_BRST_ONE_NAME,
+ .process = test_perf_node_worker_source_burst_one,
+ .flags = RTE_NODE_SOURCE_F,
+ .init = test_node_ctx_init,
+};
+
+RTE_NODE_REGISTER(test_graph_perf_source_burst_one);
+
+static uint16_t
+test_perf_node_worker(struct rte_graph *graph,
+ struct rte_node *node, void **objs, uint16_t nb_objs)
+{
+ uint16_t next = 0;
+ uint16_t enq = 0;
+ uint16_t count;
+ int i;
+
+ if (node->ctx[0] == 1) {
+ rte_node_next_stream_move(graph, node, node->ctx[1]);
+ return nb_objs;
+ }
+
+ for (i = 0; i < node->ctx[0]; i++) {
+ next = node->ctx[i + 1];
+ count = (node->ctx[i + 9] * nb_objs) / 100;
+ enq += count;
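+ /* Dispatch on count modulo 4: the x1/x2 cases align count
+ * down to a multiple of four, after which the x4 enqueue
+ * drains the rest of the burst.
+ */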
+ while (count) {
+ switch (count & (4 - 1)) {
+ case 0:
+ rte_node_enqueue_x4(graph, node, next, objs[0],
+ objs[1], objs[2], objs[3]);
+ objs += 4;
+ count -= 4;
+ break;
+ case 1:
+ rte_node_enqueue_x1(graph, node, next, objs[0]);
+ objs += 1;
+ count -= 1;
+ break;
+ case 2:
+ rte_node_enqueue_x2(graph, node, next, objs[0],
+ objs[1]);
+ objs += 2;
+ count -= 2;
+ break;
+ case 3:
+ rte_node_enqueue_x2(graph, node, next, objs[0],
+ objs[1]);
+ rte_node_enqueue_x1(graph, node, next, objs[2]);
+ objs += 3;
+ count -= 3;
+ break;
+ }
+ }
+ }
+
+ if (enq != nb_objs)
+ rte_node_enqueue(graph, node, next, objs, nb_objs - enq);
+
+ return nb_objs;
+}
+
+static struct rte_node_register test_graph_perf_worker = {
+ .name = TEST_GRAPH_WRK_NAME,
+ .process = test_perf_node_worker,
+ .init = test_node_ctx_init,
+};
+
+RTE_NODE_REGISTER(test_graph_perf_worker);
+
+static uint16_t
+test_perf_node_sink(struct rte_graph *graph,
+ struct rte_node *node, void **objs,
+ uint16_t nb_objs)
+{
+ RTE_SET_USED(graph);
+ RTE_SET_USED(node);
+ RTE_SET_USED(objs);
+ RTE_SET_USED(nb_objs);
+
+ return nb_objs;
+}
+
+static struct rte_node_register test_graph_perf_sink = {
+ .name = TEST_GRAPH_SNK_NAME,
+ .process = test_perf_node_sink,
+ .init = test_node_ctx_init,
+};
+
+RTE_NODE_REGISTER(test_graph_perf_sink);
+
+static int graph_perf_setup(void)
+{
+ if (rte_lcore_count() < 2) {
+ printf("Test requires atleast 2 lcores\n");
+ return TEST_SKIPPED;
+ }
+
+ return 0;
+}
+
+static void graph_perf_teardown(void)
+{
+}
+
+static inline rte_node_t
+graph_node_get(const char *pname, char *nname)
+{
+ rte_node_t pnode_id = rte_node_from_name(pname);
+ char lookup_name[RTE_NODE_NAMESIZE];
+ rte_node_t node_id;
+
+ snprintf(lookup_name, RTE_NODE_NAMESIZE, "%s-%s", pname, nname);
+ node_id = rte_node_from_name(lookup_name);
+
+ if (node_id != RTE_NODE_ID_INVALID) {
+ if (rte_node_edge_count(node_id))
+ rte_node_edge_shrink(node_id, 0);
+ return node_id;
+ }
+
+ return rte_node_clone(pnode_id, nname);
+}
+
+static uint16_t
+graph_node_count_edges(uint32_t stage, uint16_t node, uint16_t nodes_per_stage,
+ uint8_t edge_map[][nodes_per_stage][nodes_per_stage],
+ char *ename[], struct test_node_data *node_data,
+ rte_node_t **node_map)
+{
+ uint8_t total_percent = 0;
+ uint16_t edges = 0;
+ int i;
+
+ for (i = 0; i < nodes_per_stage && edges < MAX_EDGES_PER_NODE; i++) {
+ if (edge_map[stage + 1][i][node]) {
+ ename[edges] = malloc(sizeof(char) * RTE_NODE_NAMESIZE);
+ snprintf(ename[edges], RTE_NODE_NAMESIZE, "%s",
+ rte_node_id_to_name(node_map[stage + 1][i]));
+ node_data->next_nodes[edges] = node_map[stage + 1][i];
+ node_data->next_percentage[edges] =
+ edge_map[stage + 1][i][node];
+ edges++;
+ total_percent += edge_map[stage + 1][i][node];
+ }
+ }
+
+ if (edges >= MAX_EDGES_PER_NODE || (edges && total_percent != 100)) {
+ for (i = 0; i < edges; i++)
+ free(ename[i]);
+ return RTE_EDGE_ID_INVALID;
+ }
+
+ return edges;
+}
+
+static int graph_init(const char *gname, uint8_t nb_srcs, uint8_t nb_sinks,
+ uint32_t stages, uint16_t nodes_per_stage,
+ uint8_t src_map[][nodes_per_stage],
+ uint8_t snk_map[][nb_sinks],
+ uint8_t edge_map[][nodes_per_stage][nodes_per_stage],
+ uint8_t burst_one)
+{
+ struct test_graph_perf *graph_data;
+ char nname[RTE_NODE_NAMESIZE / 2];
+ struct test_node_data *node_data;
+ char *ename[nodes_per_stage];
+ struct rte_graph_param gconf;
+ const struct rte_memzone *mz;
+ uint8_t total_percent = 0;
+ rte_node_t *src_nodes;
+ rte_node_t *snk_nodes;
+ rte_node_t **node_map;
+ char **node_patterns;
+ rte_graph_t graph_id;
+ rte_edge_t edges;
+ rte_edge_t count;
+ uint32_t i, j, k;
+
+ mz = rte_memzone_reserve(TEST_GRAPH_PERF_MZ,
+ sizeof(struct test_graph_perf), 0, 0);
+ if (mz == NULL) {
+ printf("failed to allocate graph common memory\n");
+ return -ENOMEM;
+ }
+
+ graph_data = mz->addr;
+ graph_data->nb_nodes = 0;
+ graph_data->node_data = malloc(sizeof(struct test_node_data) *
+ (nb_srcs + nb_sinks + stages *
+ nodes_per_stage));
+ if (graph_data->node_data == NULL) {
+ printf("failed to reserve memzone for graph data\n");
+ goto memzone_free;
+ }
+
+ node_patterns = malloc(sizeof(char *) * (nb_srcs + nb_sinks + stages *
+ nodes_per_stage));
+ if (node_patterns == NULL) {
+ printf("failed to reserve memory for node patterns\n");
+ goto data_free;
+ }
+
+ src_nodes = malloc(sizeof(rte_node_t) * nb_srcs);
+ if (src_nodes == NULL) {
+ printf("failed to reserve memory for src nodes\n");
+ goto pattern_free;
+ }
+
+ snk_nodes = malloc(sizeof(rte_node_t) * nb_sinks);
+ if (snk_nodes == NULL) {
+ printf("failed to reserve memory for snk nodes\n");
+ goto src_free;
+ }
+
+ node_map = malloc(sizeof(rte_node_t *) * stages + sizeof(rte_node_t) *
+ nodes_per_stage * stages);
+ if (node_map == NULL) {
+ printf("failed to reserve memory for node map\n");
+ goto snk_free;
+ }
+
+ /* Setup the Graph */
+ for (i = 0; i < stages; i++) {
+ node_map[i] = (rte_node_t *) (node_map + stages) +
+ nodes_per_stage * i;
+ for (j = 0; j < nodes_per_stage; j++) {
+ total_percent = 0;
+ for (k = 0; k < nodes_per_stage; k++)
+ total_percent += edge_map[i][j][k];
+ if (!total_percent)
+ continue;
+ node_patterns[graph_data->nb_nodes] =
+ malloc(RTE_NODE_NAMESIZE);
+ if (node_patterns[graph_data->nb_nodes] == NULL) {
+ printf("Failed to create memory for pattern\n");
+ goto pattern_name_free;
+ }
+ snprintf(nname, sizeof(nname), "%d-%d", i, j);
+ node_map[i][j] = graph_node_get(TEST_GRAPH_WRK_NAME,
+ nname);
+ if (node_map[i][j] == RTE_NODE_ID_INVALID) {
+ printf("Failed to create node[%s]\n", nname);
+ graph_data->nb_nodes++;
+ goto pattern_name_free;
+ }
+ snprintf(node_patterns[graph_data->nb_nodes],
+ RTE_NODE_NAMESIZE, "%s",
+ rte_node_id_to_name(node_map[i][j]));
+ node_data = &graph_data->node_data[graph_data->nb_nodes];
+ node_data->node_id = node_map[i][j];
+ node_data->is_sink = false;
+ graph_data->nb_nodes++;
+ }
+ }
+
+ for (i = 0; i < stages - 1; i++) {
+ for (j = 0; j < nodes_per_stage; j++) {
+ node_data = graph_get_node_data(graph_data,
+ node_map[i][j]);
+ edges = graph_node_count_edges(i, j, nodes_per_stage,
+ edge_map, ename, node_data, node_map);
+ if (edges == RTE_EDGE_ID_INVALID) {
+ printf("Invalid edge configuration\n");
+ goto pattern_name_free;
+ }
+ if (!edges)
+ continue;
+ count = rte_node_edge_update(node_map[i][j], 0,
+ (const char **)(uintptr_t)
+ ename, edges);
+ for (k = 0; k < edges; k++)
+ free(ename[k]);
+ if (count != edges) {
+ printf("Couldnt add edges %d %d\n", edges,
+ count);
+ goto pattern_name_free;
+ }
+ }
+ }
+
+ /* Setup Source nodes */
+ for (i = 0; i < nb_srcs; i++) {
+ edges = 0;
+ total_percent = 0;
+ node_patterns[graph_data->nb_nodes] = malloc(RTE_NODE_NAMESIZE);
+ if (node_patterns[graph_data->nb_nodes] == NULL) {
+ printf("Failed to create memory for pattern\n");
+ goto pattern_name_free;
+ }
+ snprintf(nname, sizeof(nname), "%d", i);
+ src_nodes[i] = graph_node_get(burst_one ?
+ TEST_GRAPH_SRC_BRST_ONE_NAME :
+ TEST_GRAPH_SRC_NAME, nname);
+ if (src_nodes[i] == RTE_NODE_ID_INVALID) {
+ printf("Failed to create node[%s]\n", nname);
+ graph_data->nb_nodes++;
+ goto pattern_name_free;
+ }
+ snprintf(node_patterns[graph_data->nb_nodes], RTE_NODE_NAMESIZE,
+ "%s", rte_node_id_to_name(src_nodes[i]));
+ node_data = &graph_data->node_data[graph_data->nb_nodes];
+ node_data->node_id = src_nodes[i];
+ node_data->is_sink = false;
+ graph_data->nb_nodes++;
+ for (j = 0; j < nodes_per_stage; j++) {
+ if (!src_map[i][j])
+ continue;
+ ename[edges] = malloc(sizeof(char) * RTE_NODE_NAMESIZE);
+ snprintf(ename[edges], RTE_NODE_NAMESIZE, "%s",
+ rte_node_id_to_name(node_map[0][j]));
+ node_data->next_nodes[edges] = node_map[0][j];
+ node_data->next_percentage[edges] = src_map[i][j];
+ edges++;
+ total_percent += src_map[i][j];
+ }
+
+ if (!edges)
+ continue;
+ if (edges >= MAX_EDGES_PER_NODE || total_percent != 100) {
+ printf("Invalid edge configuration\n");
+ for (j = 0; j < edges; j++)
+ free(ename[j]);
+ goto pattern_name_free;
+ }
+ count = rte_node_edge_update(src_nodes[i], 0,
+ (const char **)(uintptr_t)ename,
+ edges);
+ for (k = 0; k < edges; k++)
+ free(ename[k]);
+ if (count != edges) {
+ printf("Couldnt add edges %d %d\n", edges, count);
+ goto pattern_name_free;
+ }
+ }
+
+ /* Setup Sink nodes */
+ for (i = 0; i < nb_sinks; i++) {
+ node_patterns[graph_data->nb_nodes] = malloc(RTE_NODE_NAMESIZE);
+ if (node_patterns[graph_data->nb_nodes] == NULL) {
+ printf("Failed to create memory for pattern\n");
+ goto pattern_name_free;
+ }
+ snprintf(nname, sizeof(nname), "%d", i);
+ snk_nodes[i] = graph_node_get(TEST_GRAPH_SNK_NAME, nname);
+ if (snk_nodes[i] == RTE_NODE_ID_INVALID) {
+ printf("Failed to create node[%s]\n", nname);
+ graph_data->nb_nodes++;
+ goto pattern_name_free;
+ }
+ snprintf(node_patterns[graph_data->nb_nodes],
+ RTE_NODE_NAMESIZE, "%s",
+ rte_node_id_to_name(snk_nodes[i]));
+ node_data = &graph_data->node_data[graph_data->nb_nodes];
+ node_data->node_id = snk_nodes[i];
+ node_data->is_sink = true;
+ graph_data->nb_nodes++;
+ }
+
+ for (i = 0; i < nodes_per_stage; i++) {
+ edges = 0;
+ total_percent = 0;
+ node_data = graph_get_node_data(graph_data,
+ node_map[stages - 1][i]);
+ for (j = 0; j < nb_sinks; j++) {
+ if (!snk_map[i][j])
+ continue;
+ ename[edges] = malloc(sizeof(char) * RTE_NODE_NAMESIZE);
+ snprintf(ename[edges], RTE_NODE_NAMESIZE, "%s",
+ rte_node_id_to_name(snk_nodes[j]));
+ node_data->next_nodes[edges] = snk_nodes[j];
+ node_data->next_percentage[edges] = snk_map[i][j];
+ edges++;
+ total_percent += snk_map[i][j];
+ }
+ if (!edges)
+ continue;
+ if (edges >= MAX_EDGES_PER_NODE || total_percent != 100) {
+ printf("Invalid edge configuration\n");
+ for (j = 0; j < edges; j++)
+ free(ename[j]);
+ goto pattern_name_free;
+ }
+ count = rte_node_edge_update(node_map[stages - 1][i], 0,
+ (const char **)(uintptr_t) ename,
+ edges);
+ for (k = 0; k < edges; k++)
+ free(ename[k]);
+ if (count != edges) {
+ printf("Couldnt add edges %d %d\n", edges, count);
+ goto pattern_name_free;
+ }
+ }
+
+ gconf.socket_id = SOCKET_ID_ANY;
+ gconf.nb_node_patterns = graph_data->nb_nodes;
+ gconf.node_patterns = (const char **)(uintptr_t) node_patterns;
+
+ graph_id = rte_graph_create(gname, &gconf);
+ if (graph_id == RTE_GRAPH_ID_INVALID) {
+ printf("Graph creation failed with error = %d\n", rte_errno);
+ goto pattern_name_free;
+ }
+ graph_data->graph_id = graph_id;
+
+ for (i = 0; i < graph_data->nb_nodes; i++)
+ free(node_patterns[i]);
+ free(snk_nodes);
+ free(src_nodes);
+ free(node_patterns);
+ return 0;
+
+pattern_name_free:
+ for (i = 0; i < graph_data->nb_nodes; i++)
+ free(node_patterns[i]);
+snk_free:
+ free(snk_nodes);
+src_free:
+ free(src_nodes);
+pattern_free:
+ free(node_patterns);
+data_free:
+ free(graph_data->node_data);
+memzone_free:
+ rte_memzone_free(mz);
+ return -ENOMEM;
+}
+
+static int
+_graph_perf_wrapper(void *args)
+{
+ struct graph_lcore_data *data = args;
+ struct rte_graph *graph;
+
+ graph = rte_graph_lookup(rte_graph_id_to_name(data->graph_id));
+ while (!data->done)
+ rte_graph_walk(graph);
+
+ return 0;
+}
+
+static int measure_perf_get(rte_graph_t graph_id)
+{
+ const char *pattern = rte_graph_id_to_name(graph_id);
+ uint32_t lcore_id = rte_get_next_lcore(-1, 1, 0);
+ struct rte_graph_cluster_stats_param param;
+ struct rte_graph_cluster_stats *stats;
+ struct graph_lcore_data *data;
+
+ data = rte_zmalloc("Graph_perf", sizeof(struct graph_lcore_data),
+ RTE_CACHE_LINE_SIZE);
+ if (data == NULL) {
+ printf("Failed to allocate graph lcore data\n");
+ return -ENOMEM;
+ }
+ data->graph_id = graph_id;
+ data->done = 0;
+
+ rte_eal_remote_launch(_graph_perf_wrapper, data, lcore_id);
+ if (rte_graph_has_stats_feature()) {
+ memset(&param, 0, sizeof(param));
+ param.f = stdout;
+ param.socket_id = SOCKET_ID_ANY;
+ param.graph_patterns = &pattern;
+ param.nb_graph_patterns = 1;
+
+ stats = rte_graph_cluster_stats_create(&param);
+ if (stats == NULL) {
+ printf("Failed to create stats\n");
+ return -ENOMEM;
+ }
+
+ rte_delay_ms(3E2);
+ rte_graph_cluster_stats_get(stats, true);
+ rte_delay_ms(1E3);
+ rte_graph_cluster_stats_get(stats, false);
+ rte_graph_cluster_stats_destroy(stats);
+ } else
+ rte_delay_ms(1E3);
+
+ data->done = 1;
+ rte_eal_wait_lcore(lcore_id);
+ rte_free(data);
+
+ return 0;
+}
+
+static inline void
+graph_fini(void)
+{
+ const struct rte_memzone *mz = rte_memzone_lookup(TEST_GRAPH_PERF_MZ);
+ struct test_graph_perf *graph_data;
+
+ if (mz == NULL)
+ return;
+ graph_data = mz->addr;
+
+ rte_graph_destroy(rte_graph_id_to_name(graph_data->graph_id));
+ free(graph_data->node_data);
+ rte_memzone_free(mz);
+}
+
+static int
+measure_perf(void)
+{
+ const struct rte_memzone *mz;
+ struct test_graph_perf *graph_data;
+
+ mz = rte_memzone_lookup(TEST_GRAPH_PERF_MZ);
+ graph_data = mz->addr;
+
+ return measure_perf_get(graph_data->graph_id);
+}
+
+static inline int
+graph_hr_4s_1n_1src_1snk(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_hr_4s_1n_1src_1snk_brst_one(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_hr_4s_1n_2src_1snk(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_hr_4s_1n_1src_2snk(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_tree_4s_4n_1src_4snk(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_reverse_tree_3s_4n_1src_1snk(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_parallel_tree_5s_4n_4src_4snk(void)
+{
+ return measure_perf();
+}
+
+static inline int
+graph_init_hr(void)
+{
+ uint8_t edge_map[][1][1] = {
+ {{100}},
+ {{100}},
+ {{100}},
+ {{100}},
+ };
+ uint8_t src_map[][1] = {{100}};
+ uint8_t snk_map[][1] = {{100}};
+ return graph_init("graph_hr", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 0);
+}
+
+static inline int
+graph_init_hr_brst_one(void)
+{
+ uint8_t edge_map[][1][1] = {
+ {{100}},
+ {{100}},
+ {{100}},
+ {{100}},
+ };
+ uint8_t src_map[][1] = {{100}};
+ uint8_t snk_map[][1] = {{100}};
+ return graph_init("graph_hr", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 1);
+}
+
+static inline int
+graph_init_hr_multi_src(void)
+{
+ uint8_t edge_map[][1][1] = {
+ {{100}},
+ {{100}},
+ {{100}},
+ {{100}},
+ };
+ uint8_t src_map[][1] = {{100},{100}};
+ uint8_t snk_map[][1] = {{100}};
+ return graph_init("graph_hr", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 0);
+}
+
+static inline int
+graph_init_hr_multi_snk(void)
+{
+ uint8_t edge_map[][1][1] = {
+ {{100}},
+ {{100}},
+ {{100}},
+ {{100}},
+ };
+ uint8_t src_map[][1] = {{100}};
+ uint8_t snk_map[][2] = {{50, 50}};
+ return graph_init("graph_hr", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 0);
+}
+
+static inline int
+graph_init_tree(void)
+{
+ uint8_t edge_map[][4][4] = {
+ {{100, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}},
+ {{50, 0, 0, 0}, {50, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}},
+ {{33, 33, 0, 0}, {34, 34, 0, 0}, {33, 33, 0, 0}, {0,0,0,0}},
+ {{25, 25, 25, 0}, {25, 25, 25, 0}, {25, 25, 25, 0},
+ {25, 25, 25, 0}}
+ };
+ uint8_t src_map[][4] = {{100, 0, 0, 0}};
+ uint8_t snk_map[][4] = {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}};
+
+ return graph_init("graph_full_split", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 0);
+}
+
+static inline int
+graph_init_reverse_tree(void)
+{
+ uint8_t edge_map[][4][4] = {
+ {{25,25,25,25}, {25,25,25,25}, {25,25,25,25}, {25,25,25,25}},
+ {{33,33,33,33}, {33,33,33,33}, {34,34,34,34}, {0,0,0,0}},
+ {{50,50,50,0}, {50,50,50,0}, {0,0,0,0}, {0,0,0,0}},
+ };
+ uint8_t src_map[][4] = {{25, 25, 25, 25}};
+ uint8_t snk_map[][1] = {{100}, {100}, {0}, {0}};
+
+ return graph_init("graph_full_split", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 0);
+}
+
+static inline int
+graph_init_parallel_tree(void)
+{
+ uint8_t edge_map[][4][4] = {
+ {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}},
+ {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}},
+ {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}},
+ {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}},
+ {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}},
+ };
+ uint8_t src_map[][4] = {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}};
+ uint8_t snk_map[][4] = {{100,0,0,0}, {0,100,0,0}, {0,0,100,0}, {0,0,0,100}};
+
+ return graph_init("graph_parallel", SOURCES(src_map), SINKS(snk_map),
+ STAGES(edge_map), NODES_PER_STAGE(edge_map), src_map,
+ snk_map, edge_map, 0);
+}
+
+/** Graph creation cheat sheet
+ * edge_map -> dictates graph flow from worker stage 0 to worker stage n-1.
+ * src_map -> dictates source nodes enqueue percentage to worker stage 0.
+ * snk_map -> dictates stage n-1 enqueue percentage to sink.
+ *
+ * Layout:
+ * edge_map[<nb_stages>][<nodes_per_stg>][<nodes_in_nxt_stg = nodes_per_stg>]
+ * src_map[<nb_sources>][<nodes_in_stage0 = nodes_per_stage>]
+ * snk_map[<nodes_in_stage(n-1) = nodes_per_stage>][<nb_sinks>]
+ *
+ * The last array dimension dictates the percentage of received objs to
+ * enqueue to the next stage.
+ *
+ * Note: edge_map[0][][] percentages are always unused, as stage 0 receives
+ * only from the source nodes; its non-zero entries merely mark which
+ * stage-0 nodes exist.
+ *
+ * Example:
+ * Graph:
+ * http://bit.ly/2PqbqOy
+ * Each stage (n) connects to all nodes in the next stage in decreasing
+ * order.
+ * Since we can't resize the edge_map dynamically, we work around it by
+ * creating dummy nodes and assigning 0 percentages.
+ * Max nodes across all stages = 4
+ * stages = 3
+ * nb_src = 1
+ * nb_snk = 1
+ * // Stages
+ * edge_map[][4][4] = {
+ * // Nodes per stage
+ * {
+ * {25,25,25,25}, {25,25,25,25}, {25,25,25,25}, {25,25,25,25}
+ * }, // This stage's percentages are unused.
+ * {
+ * // Nodes enabled in current stage + prev stage enq %
+ * {33,33,33,33}, {33,33,33,33}, {34,34,34,34}, {0,0,0,0}
+ * },
+ * {
+ * {50,50,50,0}, {50,50,50,0}, {0,0,0,0}, {0,0,0,0}
+ * },
+ * };
+ * Above, each stage tells how much it should receive from the previous
+ * one, except for stage_0.
+ *
+ * src_map[][4] = {{25, 25, 25, 25}};
+ * Here, we tell each source the % it has to send to stage_0 nodes. In
+ * case we want 2 source nodes, we can declare it as
+ * src_map[][4] = {{25, 25, 25, 25}, {25, 25, 25, 25}};
+ *
+ * snk_map[][1] = {{100}, {100}, {0}, {0}}
+ * Here, we tell the stage n-1 nodes how much to enqueue to sink_0.
+ * If we have 2 sinks we can do as follows
+ * snk_map[][2] = {{50, 50}, {50, 50}, {0, 0}, {0, 0}}
+ *
+ * TODO: add validation logic for the above declaration style.
+ */
+
+static struct unit_test_suite graph_perf_testsuite = {
+ .suite_name = "Graph library performance test suite",
+ .setup = graph_perf_setup,
+ .teardown = graph_perf_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(graph_init_hr, graph_fini,
+ graph_hr_4s_1n_1src_1snk),
+ TEST_CASE_ST(graph_init_hr_brst_one, graph_fini,
+ graph_hr_4s_1n_1src_1snk_brst_one),
+ TEST_CASE_ST(graph_init_hr_multi_src, graph_fini,
+ graph_hr_4s_1n_2src_1snk),
+ TEST_CASE_ST(graph_init_hr_multi_snk, graph_fini,
+ graph_hr_4s_1n_1src_2snk),
+ TEST_CASE_ST(graph_init_tree, graph_fini,
+ graph_tree_4s_4n_1src_4snk),
+ TEST_CASE_ST(graph_init_reverse_tree, graph_fini,
+ graph_reverse_tree_3s_4n_1src_1snk),
+ TEST_CASE_ST(graph_init_parallel_tree, graph_fini,
+ graph_parallel_tree_5s_4n_4src_4snk),
+ TEST_CASES_END(), /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_graph_perf_func(void)
+{
+ return unit_test_suite_runner(&graph_perf_testsuite);
+}
+
+REGISTER_TEST_COMMAND(graph_perf_autotest, test_graph_perf_func);
--
2.24.1
* [dpdk-dev] [RFC PATCH 5/5] example/l3fwd_graph: l3fwd using graph architecture
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
` (3 preceding siblings ...)
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 4/5] test: add graph performance test cases jerinj
@ 2020-01-31 17:02 ` jerinj
2020-01-31 18:34 ` [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem Ray Kinsella
2020-02-25 5:22 ` Honnappa Nagarahalli
6 siblings, 0 replies; 31+ messages in thread
From: jerinj @ 2020-01-31 17:02 UTC (permalink / raw)
To: dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
This example application showcases the l3fwd implementation
using the graph architecture.
The graph topology of l3fwd-graph is represented here:
http://bit.ly/39UPPGm
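Each worker lcore builds its graph from node name patterns and then
spins on the graph walker. A minimal sketch of the per-lcore flow
(illustrative only, mirroring main.c below):

    struct rte_graph_param gconf = {
            .socket_id = rte_lcore_to_socket_id(lcore_id),
            .nb_node_patterns = nb_patterns,
            /* "ip4*", "ethdev_tx-*", "pkt_drop" + this lcore's rx nodes */
            .node_patterns = node_patterns,
    };
    struct rte_graph *graph;

    rte_graph_create(qconf->name, &gconf);
    graph = rte_graph_lookup(qconf->name);
    while (!force_quit)
            rte_graph_walk(graph);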
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
examples/Makefile | 3 +
examples/l3fwd-graph/Makefile | 58 ++
examples/l3fwd-graph/main.c | 1131 ++++++++++++++++++++++++++++++
examples/l3fwd-graph/meson.build | 13 +
examples/meson.build | 6 +-
5 files changed, 1209 insertions(+), 2 deletions(-)
create mode 100644 examples/l3fwd-graph/Makefile
create mode 100644 examples/l3fwd-graph/main.c
create mode 100644 examples/l3fwd-graph/meson.build
diff --git a/examples/Makefile b/examples/Makefile
index feff79784..96a2b9575 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -51,6 +51,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_ACL) += l3fwd-acl
ifeq ($(CONFIG_RTE_LIBRTE_LPM)$(CONFIG_RTE_LIBRTE_HASH),yy)
DIRS-$(CONFIG_RTE_LIBRTE_POWER) += l3fwd-power
endif
+ifeq ($(CONFIG_RTE_LIBRTE_GRAPH), y)
+DIRS-y += l3fwd-graph
+endif
DIRS-y += link_status_interrupt
DIRS-y += multi_process
DIRS-y += ntb
diff --git a/examples/l3fwd-graph/Makefile b/examples/l3fwd-graph/Makefile
new file mode 100644
index 000000000..7596de500
--- /dev/null
+++ b/examples/l3fwd-graph/Makefile
@@ -0,0 +1,58 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Marvell International Ltd.
+
+# binary name
+APP = l3fwd-graph
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk) -DALLOW_EXPERIMENTAL_API
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += -I$(SRCDIR)
+CFLAGS += -O3 $(USER_FLAGS)
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
new file mode 100644
index 000000000..7be41c4dc
--- /dev/null
+++ b/examples/l3fwd-graph/main.c
@@ -0,0 +1,1131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell International Ltd.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+#include <arpa/inet.h>
+
+#include <rte_common.h>
+#include <rte_vect.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+#include <rte_string_fns.h>
+#include <rte_cpuflags.h>
+#include <rte_graph_worker.h>
+#include <rte_node_eth_api.h>
+#include <rte_node_ip4_api.h>
+
+#include <unistd.h>
+#include <cmdline_parse.h>
+#include <cmdline_parse_etheraddr.h>
+
+/* Log type */
+#define RTE_LOGTYPE_L3FWD_GRAPH RTE_LOGTYPE_USER1
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+#define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
+#define MAX_RX_QUEUE_PER_PORT 128
+
+#define MAX_RX_QUEUE_PER_LCORE 16
+
+#define MAX_LCORE_PARAMS 1024
+
+#define NB_SOCKETS 8
+
+/* Static global variables used within this file. */
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* Ports are set in promiscuous mode off by default. */
+static int promiscuous_on;
+
+static int numa_on = 1; /**< NUMA is enabled by default. */
+/** Use separate buffer pools per port; disabled by default. */
+static int per_port_pool;
+
+static volatile bool force_quit;
+
+/* ethernet addresses of ports */
+static uint64_t dest_eth_addr[RTE_MAX_ETHPORTS];
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+xmm_t val_eth[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint32_t enabled_port_mask;
+
+struct lcore_rx_queue {
+ uint16_t port_id;
+ uint8_t queue_id;
+ char node_name[RTE_NODE_NAMESIZE];
+};
+
+/* lcore conf */
+struct lcore_conf {
+ uint16_t n_rx_queue;
+ struct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE];
+
+ struct rte_graph *graph;
+ char name[RTE_GRAPH_NAMESIZE];
+ rte_graph_t graph_id;
+} __rte_cache_aligned;
+
+static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
+
+struct lcore_params {
+ uint16_t port_id;
+ uint8_t queue_id;
+ uint8_t lcore_id;
+} __rte_cache_aligned;
+
+static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS];
+static struct lcore_params lcore_params_array_default[] = {
+ {0, 0, 2},
+ {0, 1, 2},
+ {0, 2, 2},
+ {1, 0, 2},
+ {1, 1, 2},
+ {1, 2, 2},
+ {2, 0, 2},
+ {3, 0, 3},
+ {3, 1, 3},
+};
+
+static struct lcore_params *lcore_params = lcore_params_array_default;
+static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
+ sizeof(lcore_params_array_default[0]);
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_key = NULL,
+ .rss_hf = ETH_RSS_IP,
+ },
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+};
+
+static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
+
+static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
+
+struct ipv4_l3fwd_lpm_route {
+ uint32_t ip;
+ uint8_t depth;
+ uint8_t if_out;
+};
+
+#define IPV4_L3FWD_LPM_NUM_ROUTES \
+ (sizeof(ipv4_l3fwd_lpm_route_array) / sizeof(ipv4_l3fwd_lpm_route_array[0]))
+/* 198.18.0.0/16 are set aside for RFC2544 benchmarking. */
+static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = {
+ {RTE_IPV4(198, 18, 0, 0), 24, 0},
+ {RTE_IPV4(198, 18, 1, 0), 24, 1},
+ {RTE_IPV4(198, 18, 2, 0), 24, 2},
+ {RTE_IPV4(198, 18, 3, 0), 24, 3},
+ {RTE_IPV4(198, 18, 4, 0), 24, 4},
+ {RTE_IPV4(198, 18, 5, 0), 24, 5},
+ {RTE_IPV4(198, 18, 6, 0), 24, 6},
+ {RTE_IPV4(198, 18, 7, 0), 24, 7},
+};
+
+static int
+check_lcore_params(void)
+{
+ uint8_t queue, lcore;
+ uint16_t i;
+ int socketid;
+
+ for (i = 0; i < nb_lcore_params; ++i) {
+ queue = lcore_params[i].queue_id;
+ if (queue >= MAX_RX_QUEUE_PER_PORT) {
+ printf("invalid queue number: %hhu\n", queue);
+ return -1;
+ }
+ lcore = lcore_params[i].lcore_id;
+ if (!rte_lcore_is_enabled(lcore)) {
+ printf("error: lcore %hhu is not enabled in lcore mask\n", lcore);
+ return -1;
+ }
+
+ if (lcore == rte_get_master_lcore()) {
+ printf("error: lcore %u is master lcore\n", lcore);
+ return -1;
+ }
+ socketid = rte_lcore_to_socket_id(lcore);
+ if (socketid != 0 && numa_on == 0) {
+ printf("warning: lcore %hhu is on socket %d with numa off\n",
+ lcore, socketid);
+ }
+ }
+ return 0;
+}
+
+static int
+check_port_config(void)
+{
+ uint16_t portid;
+ uint16_t i;
+
+ for (i = 0; i < nb_lcore_params; ++i) {
+ portid = lcore_params[i].port_id;
+ if ((enabled_port_mask & (1 << portid)) == 0) {
+ printf("port %u is not enabled in port mask\n", portid);
+ return -1;
+ }
+ if (!rte_eth_dev_is_valid_port(portid)) {
+ printf("port %u is not present on the board\n", portid);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static uint8_t
+get_port_n_rx_queues(const uint16_t port)
+{
+ int queue = -1;
+ uint16_t i;
+
+ for (i = 0; i < nb_lcore_params; ++i) {
+ if (lcore_params[i].port_id == port) {
+ if (lcore_params[i].queue_id == queue+1)
+ queue = lcore_params[i].queue_id;
+ else
+ rte_exit(EXIT_FAILURE, "queue ids of the port %d must be"
+ " in sequence and must start with 0\n",
+ lcore_params[i].port_id);
+ }
+ }
+ return (uint8_t)(++queue);
+}
+
+static int
+init_lcore_rx_queues(void)
+{
+ uint16_t i, nb_rx_queue;
+ uint8_t lcore;
+
+ for (i = 0; i < nb_lcore_params; ++i) {
+ lcore = lcore_params[i].lcore_id;
+ nb_rx_queue = lcore_conf[lcore].n_rx_queue;
+ if (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) {
+ printf("error: too many queues (%u) for lcore: %u\n",
+ (unsigned)nb_rx_queue + 1, (unsigned)lcore);
+ return -1;
+ } else {
+ lcore_conf[lcore].rx_queue_list[nb_rx_queue].port_id =
+ lcore_params[i].port_id;
+ lcore_conf[lcore].rx_queue_list[nb_rx_queue].queue_id =
+ lcore_params[i].queue_id;
+ lcore_conf[lcore].n_rx_queue++;
+ }
+ }
+ return 0;
+}
+
+/* display usage */
+static void
+print_usage(const char *prgname)
+{
+ fprintf(stderr, "%s [EAL options] --"
+ " -p PORTMASK"
+ " [-P]"
+ " [-E]"
+ " [-L]"
+ " --config (port,queue,lcore)[,(port,queue,lcore)]"
+ " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
+ " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--no-numa]"
+ " [--per-port-pool]\n\n"
+
+ " -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
+ " -P : Enable promiscuous mode\n"
+ " --config (port,queue,lcore): Rx queue configuration\n"
+ " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
+ " --enable-jumbo: Enable jumbo frames\n"
+ " --max-pkt-len: Under the premise of enabling jumbo,\n"
+ " maximum packet length in decimal (64-9600)\n"
+ " --no-numa: Disable numa awareness\n"
+ " --per-port-pool: Use separate buffer pool per port\n\n",
+ prgname);
+}
+
+static int
+parse_max_pkt_len(const char *pktlen)
+{
+ char *end = NULL;
+ unsigned long len;
+
+ /* parse decimal string */
+ len = strtoul(pktlen, &end, 10);
+ if ((pktlen[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (len == 0)
+ return -1;
+
+ return len;
+}
+
+static int
+parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static int
+parse_config(const char *q_arg)
+{
+ char s[256];
+ const char *p, *p0 = q_arg;
+ char *end;
+ enum fieldnames {
+ FLD_PORT = 0,
+ FLD_QUEUE,
+ FLD_LCORE,
+ _NUM_FLD
+ };
+ unsigned long int_fld[_NUM_FLD];
+ char *str_fld[_NUM_FLD];
+ int i;
+ unsigned size;
+
+ nb_lcore_params = 0;
+
+ while ((p = strchr(p0, '(')) != NULL) {
+ ++p;
+ if ((p0 = strchr(p, ')')) == NULL)
+ return -1;
+
+ size = p0 - p;
+ if (size >= sizeof(s))
+ return -1;
+
+ snprintf(s, sizeof(s), "%.*s", (int)size, p);
+ if (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') != _NUM_FLD)
+ return -1;
+ for (i = 0; i < _NUM_FLD; i++) {
+ errno = 0;
+ int_fld[i] = strtoul(str_fld[i], &end, 0);
+ if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
+ return -1;
+ }
+ if (nb_lcore_params >= MAX_LCORE_PARAMS) {
+ printf("exceeded max number of lcore params: %hu\n",
+ nb_lcore_params);
+ return -1;
+ }
+ lcore_params_array[nb_lcore_params].port_id =
+ (uint8_t)int_fld[FLD_PORT];
+ lcore_params_array[nb_lcore_params].queue_id =
+ (uint8_t)int_fld[FLD_QUEUE];
+ lcore_params_array[nb_lcore_params].lcore_id =
+ (uint8_t)int_fld[FLD_LCORE];
+ ++nb_lcore_params;
+ }
+ lcore_params = lcore_params_array;
+ return 0;
+}
+
+static void
+parse_eth_dest(const char *optarg)
+{
+ uint16_t portid;
+ char *port_end;
+ uint8_t c, *dest, peer_addr[6];
+
+ errno = 0;
+ portid = strtoul(optarg, &port_end, 10);
+ if (errno != 0 || port_end == optarg || *port_end++ != ',')
+ rte_exit(EXIT_FAILURE,
+ "Invalid eth-dest: %s", optarg);
+ if (portid >= RTE_MAX_ETHPORTS)
+ rte_exit(EXIT_FAILURE,
+ "eth-dest: port %d >= RTE_MAX_ETHPORTS(%d)\n",
+ portid, RTE_MAX_ETHPORTS);
+
+ if (cmdline_parse_etheraddr(NULL, port_end,
+ &peer_addr, sizeof(peer_addr)) < 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid ethernet address: %s\n",
+ port_end);
+ dest = (uint8_t *)&dest_eth_addr[portid];
+ for (c = 0; c < 6; c++)
+ dest[c] = peer_addr[c];
+ *(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
+}
+
+#define MAX_JUMBO_PKT_LEN 9600
+#define MEMPOOL_CACHE_SIZE 256
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "P" /* promiscuous */
+ "L" /* enable long prefix match */
+ "E" /* enable exact match */
+ ;
+
+#define CMD_LINE_OPT_CONFIG "config"
+#define CMD_LINE_OPT_ETH_DEST "eth-dest"
+#define CMD_LINE_OPT_NO_NUMA "no-numa"
+#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options */
+ CMD_LINE_OPT_MIN_NUM = 256,
+ CMD_LINE_OPT_CONFIG_NUM,
+ CMD_LINE_OPT_ETH_DEST_NUM,
+ CMD_LINE_OPT_NO_NUMA_NUM,
+ CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_PARSE_PER_PORT_POOL,
+};
+
+static const struct option lgopts[] = {
+ {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
+ {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
+ {CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
+ {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
+ {NULL, 0, 0, 0}
+};
+
+/*
+ * This expression is used to calculate the number of mbufs needed
+ * depending on user input, taking into account memory for the rx and
+ * tx hardware rings, the cache per lcore and one graph burst per port
+ * per lcore. RTE_MAX is used to ensure that NB_MBUF never goes below
+ * a minimum value of 8192.
+ */
+#define NB_MBUF(nports) RTE_MAX( \
+ (nports*nb_rx_queue*nb_rxd + \
+ nports*nb_lcores*RTE_GRAPH_BURST_SIZE + \
+ nports*n_tx_queue*nb_txd + \
+ nb_lcores*MEMPOOL_CACHE_SIZE), \
+ (unsigned)8192)
+
+/* Parse the argument given in the command line of the application */
+static int
+parse_args(int argc, char **argv)
+{
+ int opt, ret;
+ char **argvopt;
+ int option_index;
+ char *prgname = argv[0];
+
+ argvopt = argv;
+
+ /* Error or normal output strings. */
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ enabled_port_mask = parse_portmask(optarg);
+ if (enabled_port_mask == 0) {
+ fprintf(stderr, "Invalid portmask\n");
+ print_usage(prgname);
+ return -1;
+ }
+ break;
+
+ case 'P':
+ promiscuous_on = 1;
+ break;
+
+ /* long options */
+ case CMD_LINE_OPT_CONFIG_NUM:
+ ret = parse_config(optarg);
+ if (ret) {
+ fprintf(stderr, "Invalid config\n");
+ print_usage(prgname);
+ return -1;
+ }
+ break;
+
+ case CMD_LINE_OPT_ETH_DEST_NUM:
+ parse_eth_dest(optarg);
+ break;
+
+ case CMD_LINE_OPT_NO_NUMA_NUM:
+ numa_on = 0;
+ break;
+
+ case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
+ const struct option lenopts = {
+ "max-pkt-len", required_argument, 0, 0
+ };
+
+ port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+
+ /*
+ * if no max-pkt-len set, use the default
+ * value RTE_ETHER_MAX_LEN.
+ */
+ if (getopt_long(argc, argvopt, "",
+ &lenopts, &option_index) == 0) {
+ ret = parse_max_pkt_len(optarg);
+ if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
+ fprintf(stderr,
+ "invalid maximum packet length\n");
+ print_usage(prgname);
+ return -1;
+ }
+ port_conf.rxmode.max_rx_pkt_len = ret;
+ }
+ break;
+ }
+
+ case CMD_LINE_OPT_PARSE_PER_PORT_POOL:
+ printf("per port buffer pool is enabled\n");
+ per_port_pool = 1;
+ break;
+
+ default:
+ print_usage(prgname);
+ return -1;
+ }
+ }
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static int
+init_mem(uint16_t portid, unsigned int nb_mbuf)
+{
+ int socketid;
+ unsigned lcore_id;
+ char s[64];
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+
+ if (numa_on)
+ socketid = rte_lcore_to_socket_id(lcore_id);
+ else
+ socketid = 0;
+
+ if (socketid >= NB_SOCKETS) {
+ rte_exit(EXIT_FAILURE,
+ "Socket %d of lcore %u is out of range %d\n",
+ socketid, lcore_id, NB_SOCKETS);
+ }
+
+ if (pktmbuf_pool[portid][socketid] == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool_%d:%d",
+ portid, socketid);
+ /* Create a pool with priv size of a cacheline */
+ pktmbuf_pool[portid][socketid] =
+ rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, RTE_CACHE_LINE_SIZE,
+ RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+ if (pktmbuf_pool[portid][socketid] == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Cannot init mbuf pool on socket %d\n",
+ socketid);
+ else
+ printf("Allocated mbuf pool on socket %d\n",
+ socketid);
+
+ }
+ }
+ return 0;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (force_quit)
+ return;
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(portid, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps -%s\n",
+ portid, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", portid);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+static void
+signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ force_quit = true;
+ }
+}
+
+static void
+print_stats(void)
+{
+ const char topLeft[] = { 27, '[', '1', ';', '1', 'H','\0' };
+ const char clr[] = { 27, '[', '2', 'J', '\0' };
+ struct rte_graph_cluster_stats_param s_param;
+ struct rte_graph_cluster_stats *stats;
+ const char *pattern = "worker_*";
+
+ /* Prepare stats object */
+ memset(&s_param, 0, sizeof(s_param));
+ s_param.f = stdout;
+ s_param.socket_id = SOCKET_ID_ANY;
+ s_param.graph_patterns = &pattern;
+ s_param.nb_graph_patterns = 1;
+
+ stats = rte_graph_cluster_stats_create(&s_param);
+ if (stats == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Unable to create stats object\n");
+
+ while (!force_quit) {
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+ rte_graph_cluster_stats_get(stats, 0);
+ rte_delay_ms(1E3);
+ }
+
+ rte_graph_cluster_stats_destroy(stats);
+}
+
+/* main processing loop */
+static int
+graph_main_loop(void *conf)
+{
+ struct lcore_conf *qconf;
+ struct rte_graph *graph;
+ unsigned lcore_id;
+
+ RTE_SET_USED(conf);
+
+ lcore_id = rte_lcore_id();
+ qconf = &lcore_conf[lcore_id];
+ graph = qconf->graph;
+
+ if (!graph) {
+ RTE_LOG(INFO, L3FWD_GRAPH,
+ "lcore %u has nothing to do\n", lcore_id);
+ return 0;
+ }
+
+ RTE_LOG(INFO, L3FWD_GRAPH,
+ "entering main loop on lcore %u, graph %s(%p)\n",
+ lcore_id, qconf->name, graph);
+
+ while (likely(!force_quit)) {
+ rte_graph_walk(graph);
+ }
+
+ return 0;
+}
+
+int
+main(int argc, char **argv)
+{
+ const char *node_patterns[64] = {"ip4*", "ethdev_tx-*", "pkt_drop", };
+ uint8_t rewrite_data[2 * sizeof(struct rte_ether_addr)];
+ uint8_t nb_rx_queue, queue, socketid;
+ struct rte_ether_addr *ether_addr;
+ struct rte_graph_param graph_conf;
+ struct rte_eth_dev_info dev_info;
+ unsigned nb_ports, nb_conf = 0;
+ uint32_t n_tx_queue, nb_lcores;
+ struct rte_eth_txconf *txconf;
+ uint16_t queueid, portid, i;
+ struct lcore_conf *qconf;
+ uint16_t nb_patterns = 3;
+ uint16_t nb_graphs = 0;
+ uint8_t rewrite_len;
+ unsigned lcore_id;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
+ argc -= ret;
+ argv += ret;
+
+ force_quit = false;
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+ dest_eth_addr[portid] =
+ RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
+ *(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
+ }
+
+ /* parse application arguments (after the EAL ones) */
+ ret = parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L3FWD_GRAPH parameters\n");
+
+ if (check_lcore_params() < 0)
+ rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
+
+ ret = init_lcore_rx_queues();
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "init_lcore_rx_queues failed\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+
+ if (check_port_config() < 0)
+ rte_exit(EXIT_FAILURE, "check_port_config failed\n");
+
+ nb_lcores = rte_lcore_count();
+
+ /* initialize all ports */
+ RTE_ETH_FOREACH_DEV(portid) {
+ struct rte_eth_conf local_port_conf = port_conf;
+
+ /* skip ports that are not enabled */
+ if ((enabled_port_mask & (1 << portid)) == 0) {
+ printf("\nSkipping disabled port %d\n", portid);
+ continue;
+ }
+
+ /* init port */
+ printf("Initializing port %d ... ", portid );
+ fflush(stdout);
+
+ nb_rx_queue = get_port_n_rx_queues(portid);
+ n_tx_queue = nb_lcores;
+ if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
+ n_tx_queue = MAX_TX_QUEUE_PER_PORT;
+ printf("Creating queues: nb_rxq=%d nb_txq=%u... ",
+ nb_rx_queue, (unsigned)n_tx_queue);
+
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+ dev_info.flow_type_rss_offloads;
+ if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+ port_conf.rx_adv_conf.rss_conf.rss_hf) {
+ printf("Port %u modified RSS hash function based on hardware support,"
+ "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+ portid,
+ port_conf.rx_adv_conf.rss_conf.rss_hf,
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+ }
+
+ ret = rte_eth_dev_configure(portid, nb_rx_queue,
+ (uint16_t)n_tx_queue, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot configure device: err=%d, port=%d\n",
+ ret, portid);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, "
+ "port=%d\n", ret, portid);
+
+ rte_eth_macaddr_get(portid, &ports_eth_addr[portid]);
+ print_ethaddr(" Address:", &ports_eth_addr[portid]);
+ printf(", ");
+ print_ethaddr("Destination:",
+ (const struct rte_ether_addr *)&dest_eth_addr[portid]);
+ printf(", ");
+
+ /*
+ * prepare src MACs for each port.
+ */
+ rte_ether_addr_copy(&ports_eth_addr[portid],
+ (struct rte_ether_addr *)(val_eth + portid) + 1);
+
+ /* init memory */
+ if (!per_port_pool) {
+ /* portid = 0; this is *not* signifying the first port,
+ * rather, it signifies that portid is ignored.
+ */
+ ret = init_mem(0, NB_MBUF(nb_ports));
+ } else {
+ ret = init_mem(portid, NB_MBUF(1));
+ }
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "init_mem failed\n");
+
+ /* init one TX queue per couple (lcore,port) */
+ queueid = 0;
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+
+ qconf = &lcore_conf[lcore_id];
+
+ if (numa_on)
+ socketid =
+ (uint8_t)rte_lcore_to_socket_id(lcore_id);
+ else
+ socketid = 0;
+
+ printf("txq=%u,%d,%d ", lcore_id, queueid, socketid);
+ fflush(stdout);
+
+ txconf = &dev_info.default_txconf;
+ txconf->offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,
+ socketid, txconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_tx_queue_setup: err=%d, "
+ "port=%d\n", ret, portid);
+ queueid++;
+ }
+
+ /* Setup ethdev node config */
+ ethdev_conf[nb_conf].port_id = portid;
+ ethdev_conf[nb_conf].num_rx_queues = nb_rx_queue;
+ ethdev_conf[nb_conf].num_tx_queues = n_tx_queue;
+ if (!per_port_pool)
+ ethdev_conf[nb_conf].mp = pktmbuf_pool[0];
+ else
+ ethdev_conf[nb_conf].mp = pktmbuf_pool[portid];
+ ethdev_conf[nb_conf].mp_count = NB_SOCKETS;
+
+ nb_conf++;
+ printf("\n");
+ }
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+ qconf = &lcore_conf[lcore_id];
+ printf("\nInitializing rx queues on lcore %u ... ", lcore_id );
+ fflush(stdout);
+ /* init RX queues */
+ for (queue = 0; queue < qconf->n_rx_queue; ++queue) {
+ struct rte_eth_rxconf rxq_conf;
+
+ portid = qconf->rx_queue_list[queue].port_id;
+ queueid = qconf->rx_queue_list[queue].queue_id;
+
+ if (numa_on)
+ socketid =
+ (uint8_t)rte_lcore_to_socket_id(lcore_id);
+ else
+ socketid = 0;
+
+ printf("rxq=%d,%d,%d ", portid, queueid, socketid);
+ fflush(stdout);
+
+ rte_eth_dev_info_get(portid, &dev_info);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = port_conf.rxmode.offloads;
+ if (!per_port_pool)
+ ret = rte_eth_rx_queue_setup(portid, queueid,
+ nb_rxd, socketid,
+ &rxq_conf,
+ pktmbuf_pool[0][socketid]);
+ else
+ ret = rte_eth_rx_queue_setup(portid, queueid,
+ nb_rxd, socketid,
+ &rxq_conf,
+ pktmbuf_pool[portid][socketid]);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+
+ /* Add this queue node to its graph */
+ snprintf(qconf->rx_queue_list[queue].node_name,
+ RTE_NODE_NAMESIZE, "ethdev_rx-%u-%u",
+ portid, queueid);
+ }
+
+ /* Alloc a graph to this lcore only if source exists */
+ if (qconf->n_rx_queue) {
+ qconf->graph_id = nb_graphs;
+ nb_graphs++;
+ }
+ }
+
+ printf("\n");
+
+ /* Ethdev node config, skip rx queue mapping */
+ ret = rte_node_eth_config(ethdev_conf, nb_conf, nb_graphs);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "rte_node_eth_config: err=%d\n", ret);
+
+ /* start ports */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((enabled_port_mask & (1 << portid)) == 0) {
+ continue;
+ }
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_dev_start: err=%d, port=%d\n",
+ ret, portid);
+
+ /*
+ * If enabled, put device in promiscuous mode.
+ * This allows IO forwarding mode to forward packets
+ * to itself through 2 cross-connected ports of the
+ * target machine.
+ */
+ if (promiscuous_on)
+ rte_eth_promiscuous_enable(portid);
+ }
+
+ printf("\n");
+
+ check_all_ports_link_status(enabled_port_mask);
+
+ /* Graph Initialization */
+ memset(&graph_conf, 0, sizeof(graph_conf));
+ graph_conf.node_patterns = node_patterns;
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ rte_graph_t graph_id;
+ rte_edge_t i;
+
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+
+ qconf = &lcore_conf[lcore_id];
+
+ /* Skip graph creation if no source exists */
+ if (!qconf->n_rx_queue)
+ continue;
+
+ /* Add rx node patterns of this lcore */
+ for (i = 0; i < qconf->n_rx_queue; i++) {
+ graph_conf.node_patterns[nb_patterns + i]
+ = qconf->rx_queue_list[i].node_name;
+ }
+
+ graph_conf.nb_node_patterns = nb_patterns + i;
+ graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id);
+
+ snprintf(qconf->name, sizeof(qconf->name),
+ "worker_%u", lcore_id);
+
+ graph_id = rte_graph_create(qconf->name, &graph_conf);
+ if (graph_id != qconf->graph_id)
+ rte_exit(EXIT_FAILURE,
+ "rte_graph_create(): graph_id=%d not "
+ " as expected for lcore %u(%u\n",
+ graph_id, lcore_id, qconf->graph_id);
+
+ qconf->graph = rte_graph_lookup(qconf->name);
+ if (!qconf->graph)
+ rte_exit(EXIT_FAILURE,
+ "rte_graph_lookup(): graph %s not found\n",
+ qconf->name);
+ }
+
+ /* Update mac header rewrite data */
+ memset(&rewrite_data, 0, sizeof(rewrite_data));
+ rewrite_len = sizeof(rewrite_data);
+ ether_addr = (struct rte_ether_addr *)rewrite_data;
+ ether_addr[0].addr_bytes[0] = RTE_ETHER_LOCAL_ADMIN_ADDR;
+ ether_addr[1].addr_bytes[0] = RTE_ETHER_LOCAL_ADMIN_ADDR;
+
+ /* Add route to ip4 graph infra */
+ for (i = 0; i < IPV4_L3FWD_LPM_NUM_ROUTES; i++) {
+ char route_str[INET6_ADDRSTRLEN * 4];
+ char abuf[INET6_ADDRSTRLEN];
+ struct in_addr in;
+ uint32_t dst_port;
+ uint16_t next_hop;
+
+ /* skip unused ports */
+ if ((1 << ipv4_l3fwd_lpm_route_array[i].if_out &
+ enabled_port_mask) == 0)
+ continue;
+
+ dst_port = ipv4_l3fwd_lpm_route_array[i].if_out;
+ next_hop = i;
+
+ in.s_addr = htonl(ipv4_l3fwd_lpm_route_array[i].ip);
+ snprintf(route_str, sizeof(route_str), "%s / %d (%d)",
+ inet_ntop(AF_INET, &in, abuf, sizeof(abuf)),
+ ipv4_l3fwd_lpm_route_array[i].depth,
+ ipv4_l3fwd_lpm_route_array[i].if_out);
+
+ ret = rte_node_ip4_route_add(ipv4_l3fwd_lpm_route_array[i].ip,
+ ipv4_l3fwd_lpm_route_array[i].depth, next_hop,
+ IP4_LOOKUP_NEXT_REWRITE);
+
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Unable to add ip4 route %s to graph\n",
+ route_str);
+
+ ether_addr[0].addr_bytes[5] = dst_port + 1;
+
+ /* Add next hop for a given destination */
+ ret = rte_node_ip4_rewrite_add(next_hop, rewrite_data,
+ rewrite_len, dst_port);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Unable to add next hop %u for "
+ "route %s\n", next_hop, route_str);
+
+ RTE_LOG(INFO, L3FWD_GRAPH,
+ "Added route %s, next_hop %u\n",
+ route_str, next_hop);
+ }
+
+ /* launch per-lcore init on every slave lcore */
+ rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MASTER);
+
+ /* Accumulate and print stats on master until exit */
+ if (rte_graph_has_stats_feature())
+ print_stats();
+
+ /* Wait for slave cores to exit */
+ ret = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ ret = rte_eal_wait_lcore(lcore_id);
+ /* Destroy graph */
+ rte_graph_destroy(lcore_conf[lcore_id].name);
+ if (ret < 0) {
+ ret = -1;
+ break;
+ }
+ }
+
+ /* stop ports */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return ret;
+}
diff --git a/examples/l3fwd-graph/meson.build b/examples/l3fwd-graph/meson.build
new file mode 100644
index 000000000..a816bd890
--- /dev/null
+++ b/examples/l3fwd-graph/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Marvell International Ltd.
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+deps += ['graph', 'eal', 'lpm', 'ethdev', 'node' ]
+sources = files(
+ 'main.c'
+)
+allow_experimental_apis = true
diff --git a/examples/meson.build b/examples/meson.build
index 1f2b6f516..3b540012f 100644
--- a/examples/meson.build
+++ b/examples/meson.build
@@ -2,8 +2,10 @@
# Copyright(c) 2017-2019 Intel Corporation
driver_libs = []
+node_libs = []
if get_option('default_library') == 'static'
driver_libs = dpdk_drivers
+ node_libs = dpdk_graph_nodes
endif
execinfo = cc.find_library('execinfo', required: false)
@@ -23,7 +25,7 @@ all_examples = [
'l2fwd', 'l2fwd-cat', 'l2fwd-event',
'l2fwd-crypto', 'l2fwd-jobstats',
'l2fwd-keepalive', 'l3fwd',
- 'l3fwd-acl', 'l3fwd-power',
+ 'l3fwd-acl', 'l3fwd-power', 'l3fwd-graph',
'link_status_interrupt',
'multi_process/client_server_mp/mp_client',
'multi_process/client_server_mp/mp_server',
@@ -99,7 +101,7 @@ foreach example: examples
endif
executable('dpdk-' + name, sources,
include_directories: includes,
- link_whole: driver_libs,
+ link_whole: driver_libs + node_libs,
link_args: dpdk_extra_ldflags,
c_args: cflags,
dependencies: dep_objs)
--
2.24.1
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
` (4 preceding siblings ...)
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 5/5] example/l3fwd_graph: l3fwd using graph architecture jerinj
@ 2020-01-31 18:34 ` Ray Kinsella
2020-02-01 5:44 ` Jerin Jacob
2020-02-25 5:22 ` Honnappa Nagarahalli
6 siblings, 1 reply; 31+ messages in thread
From: Ray Kinsella @ 2020-01-31 18:34 UTC (permalink / raw)
To: jerinj, dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, techboard
Hi Jerin,
Much kudos on a huge contribution to the community.
Look forward to spending more time looking at it in the next few days.
I'll bite and ask the obvious question - why would I use rte_graph over FD.io VPP?
Ray K
On 31/01/2020 17:01, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> This RFC is targeted for v20.05 release.
>
> This RFC patch includes an implementation of graph architecture for packet
> processing using DPDK primitives.
>
> Using graph traversal for packet processing is a proven architecture
> that has been implemented in various open source libraries.
>
> Graph architecture for packet processing enables abstracting the data
> processing functions as “nodes” and “links” them together to create a complex
> “graph” to create reusable/modular data processing functions.
>
> The RFC patch further includes performance enhancements and modularity
> to the DPDK as discussed in more detail below.
>
> What this RFC patch contains:
> -----------------------------
> 1) The API definition to "create" nodes and "link" together to create a "graph"
> for packet processing. See, lib/librte_graph/rte_graph.h
>
> 2) The Fast path API definition for the graph walker and enqueue function
> used by the workers. See, lib/librte_graph/rte_graph_worker.h
>
> 3) Optimized SW implementation for (1) and (2). See, lib/librte_graph/
>
> 4) Test case to verify the graph infrastructure functionality
> See, app/test/test_graph.c
>
> 5) Performance test cases to evaluate the cost of graph walker and nodes
> enqueue fast-path function for various combinations.
>
> See app/test/test_graph_perf.c
>
> 6) Packet processing nodes(Null, Rx, Tx, Pkt drop, IPV4 rewrite, IPv4 lookup)
> using graph infrastructure. See lib/librte_node/*
>
> 7) An example application to showcase l3fwd
> (functionality same as existing examples/l3fwd) using graph infrastructure and
> use packets processing nodes (item (6)). See examples/l3fwd-graph/.
>
> Performance
> -----------
> 1) Graph walk and node enqueue overhead can be tested with performance test
> case application [1]
> # If all packets go from a node to another node (we call it as "homerun") then
> it will be just a pointer swap for a burst of packets.
> # In the worst case, a couple of handful cycles to move an object from a node
> to another node.
>
> 2) Performance comparison with existing l3fwd (The complete static code with out
> any nodes) vs modular l3fwd-graph with 5 nodes
> (ip4_lookup, ip4_rewrite, ethdev_tx, ethdev_rx, pkt_drop).
> Here is graphical representation of the l3fwd-graph as Graphviz dot file:
> http://bit.ly/39UPPGm
>
> # l3fwd-graph performance is -2.5% wrt static l3fwd.
>
> # We have simulated the similar test with existing librte_pipeline application [4].
> ip_pipline application is -48.62% wrt static l3fwd.
>
> The above results are on octeontx2. It may vary on other platforms.
> The platforms with higher L1 and L2 caches will have further better performance.
>
> Tested architectures:
> --------------------
> 1) AArch64
> 2) X86
>
>
> Graph library Features
> ----------------------
> 1) Nodes as plugins
> 2) Support for out of tree nodes
> 3) Multi-process support.
> 4) Low overhead graph walk and node enqueue
> 5) Low overhead statistics collection infrastructure
> 6) Support to export the graph as a Graphviz dot file.
> See rte_graph_export()
> Example of exported graph: http://bit.ly/2PqbqOy
> 7) Allow having another graph walk implementation
> in the future by segregating the fast path and slow path code.
>
>
> Advantages of Graph architecture:
> ---------------------------------
>
> 1) Memory latency is the enemy of high-speed packet processing;
> moving similar packet processing code into a node reduces
> instruction cache and data cache misses.
> 2) Exploits the probability that most packets will follow the same nodes in the graph.
> 3) Allows SIMD instructions for the packet processing of a node.
> 4) The modular scheme allows having reusable nodes for the consumers.
> 5) The modular scheme allows us to abstract the vendor HW-specific
> optimizations as a node.
>
>
> What is different than existing libpipeline library
> ---------------------------------------------------
> At a very high level, libpipeline was created to provide a modular plugin interface.
> Based on our analysis, the performance is better in the graph model.
> Check the details under the Performance section, item (2).
>
> This rte_graph implementation has taken care of fixing some of the
> architecture/implementations limitations with libpipeline.
>
> 1) Use cases like IP fragmentation and TCP ACK processing
> (with new TCP data sent out in the same context)
> have a problem, as rte_pipeline_run() passes just a 64-bit pkt_mask to the
> different tables, and packet pointers are stored in a single array in struct rte_pipeline_run.
>
> In the graph architecture, the node has complete control over how many packets are
> output to the next node, seamlessly.
>
> 2) Since the pkt_mask is passed to different tables, it takes multiple for loops to
> extract the pkts out of a fragmented pkts_mask. This makes it difficult to prefetch
> a set of packets ahead. This issue does not exist in the graph architecture.
>
> 3) Every table has two or three function pointers, unlike the graph architecture,
> which has a single function pointer per node.
>
> 4) The current libpipeline main fast-path function doesn't support a tree-like
> topology where 64 packets can be redirected to 64 different tables.
> It is currently limited to a table-based next table id instead of a per-packet,
> action-based next table id. So in a typical case, we need to cascade tables and
> sequentially go through all the tables to reach the last table.
>
> 5) The pkt_mask limit is 64 bits, which caps the maximum possible burst size.
> The graph library supports burst sizes up to 256.
>
> In short, both are significantly different architectures.
> Keeping both in DPDK and allowing the end user to choose the model
> would be the more appropriate decision.
>
>
> Why this RFC
> ------------
> 1) We believe the graph architecture provides the best performance for a
> reusable/modular packet processing framework.
> Since DPDK does not have it, it is good to have it in DPDK.
>
> 2) Based on our experience, NPU HW accelerators are very different from one vendor
> to another. Going forward, we believe API abstraction may not be enough to
> abstract the differences in HW. Vendor-specific nodes can abstract the HW
> differences and reuse the generic nodes as needed.
> This would help both the silicon vendors and DPDK end users.
>
> 3) The framework enables the protocol stack to use the native mbuf for
> graph processing, avoiding any conversion between formats for
> better performance.
>
> 4) DPDK becomes the "goto library" for userspace HW acceleration.
> It is good to have native Graph packet processing library in DPDK.
>
> 5) Obviously, Our customers are interested in Graph library in DPDK :-)
>
> Identified tweaking for better performance on different targets
> ---------------------------------------------------------------
> 1) Test with various burst size values (256, 128, 64, 32) using the
> CONFIG_RTE_GRAPH_BURST_SIZE config option.
> Based on our testing on x86 and arm64 servers, the sweet spot is a 256 burst size,
> while on arm64 embedded SoCs, it is either 64 or 128.
>
> 2) Disable node statistics (use CONFIG_RTE_LIBRTE_GRAPH_STATS config option)
> if not needed.
>
> 3) Use arm64 optimized memory copy for arm64 architecture by
> selecting CONFIG_RTE_ARCH_ARM64_MEMCPY.
>
> Commands to run tests
> ---------------------
>
> [1]
> perf test:
> echo "graph_perf_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
>
> [2]
> functionality test:
> echo "graph_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
>
> [3]
> l3fwd-graph:
> ./l3fwd-graph -c 0x100 -- -p 0x3 --config="(0, 0, 8)" -P
>
> [4]
> # ./ip_pipeline --c 0xff0000 -- -s route.cli
>
> Route.cli: (Copy paste to the shell to avoid dos format issues)
>
> https://pastebin.com/raw/B4Ktx7TT
>
>
> Next steps
> -----------------------------
> 1) Feedback from the community on the library.
> 2) Collect the API requirements from the community.
> 3) Sending the next version by addressing the community's initial
> feedback and fixing the following identified "pending items".
>
>
> Pending items (Will be addressed in next revision)
> -------------------------------------------------
> 1) Add documentation as a patch
> 2) Add Doxygen API documentation
> 3) Split the patches at a more logical level for a better review.
> 4) code cleanup
> 5) more optimizations in the nodes and graph infrastructure.
>
>
> Programming guide and API walk-through
> --------------------------------------
> # Anatomy of Node:
> ~~~~~~~~~~~~~~~~~
> See the https://github.com/jerinjacobk/share/blob/master/Anatomy_of_a_node.svg
>
> The above diagram depicts the anatomy of a node.
> The node is the basic building block of the graph framework.
>
> A node consists of:
> a) process():
>
> The callback function invoked by a worker thread via the
> rte_graph_walk() function when there is data to be processed by the node.
> A graph node processes the data in process() and enqueues it to the next
> downstream node using the rte_node_enqueue*() functions.
>
> b) Context memory:
>
> Memory allocated by the library to store the node-specific context
> information, which is used by the process(), init() and fini() callbacks.
>
> c) init():
>
> The callback function which will be invoked by rte_graph_create() when a node
> gets attached to a graph.
>
> d) fini():
>
> The callback function which will be invoked by rte_graph_destroy() when a node
> gets detached from a graph.
>
>
> e) Node name:
>
> The name of the node. When a node registers with the graph library, the library
> returns an ID of rte_node_t type. Either the ID or the name can be used to look up
> the node; rte_node_from_name() and rte_node_id_to_name() are the node lookup functions.
>
> f) nb_edges:
>
> Number of downstream nodes connected to this node. The next_nodes[] stores the
> downstream node objects. The rte_node_edge_update() and rte_node_edge_shrink()
> functions shall be used to update the next_node[] objects. Consumers of the node
> APIs are free to update the next_node[] objects until rte_graph_create() is invoked.
>
> g) next_node[]:
>
> The dynamic array to store the downstream nodes connected to this node.
>
>
> # Node creation and registration
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> a) The node implementer creates the node by implementing the ops and attributes of
> 'struct rte_node_register'.
> b) The library registers the node by invoking RTE_NODE_REGISTER on library load
> using the constructor scheme.
> The constructor scheme is used here to support multi-process.
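>
> For example, a minimal node registration could look like the sketch below
> (based on the APIs in this patch; the process body and node names are
> illustrative):
>
> static uint16_t
> my_node_process(struct rte_graph *graph, struct rte_node *node,
> 		void **objs, uint16_t nb_objs)
> {
> 	/* Send the whole burst to the first next node ("pkt_drop"). */
> 	rte_node_enqueue(graph, node, 0, objs, nb_objs);
> 	return nb_objs;
> }
>
> static struct rte_node_register my_node = {
> 	.name = "my_node",
> 	.process = my_node_process,
> 	.nb_edges = 1,
> 	.next_nodes = {"pkt_drop"},
> };
> RTE_NODE_REGISTER(my_node);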
>
>
> # Link the Nodes to create the graph topology
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> See the https://github.com/jerinjacobk/share/blob/master/Link_the_nodes.svg
>
> The above diagram shows a graph topology after linking the N nodes.
>
> Once nodes are available to the program, application or node public API functions
> can link them together to create a complex packet processing graph.
>
> There are multiple strategies for linking the nodes.
>
> Method a) Provide the next_nodes[] at node registration time.
> See 'struct rte_node_register::nb_edges'. This is a use case to address the static
> node scheme where one knows the next_nodes[] of the node upfront.
>
> Method b) Use rte_node_edge_get(), rte_node_edge_update(), rte_node_edge_shrink() to
> update the next_nodes[] links for the node dynamically.
>
> Method c) Use rte_node_clone() to clone an already existing node.
> When rte_node_clone() is invoked, the library clones all the attributes
> of the node and creates a new one. The name of the cloned node shall be
> "parent_node_name-user_provided_name". This method enables the use case of Rx and Tx
> nodes where multiple such nodes need to be cloned based on the number of CPUs
> available in the system. The cloned nodes will be identical except for the context
> memory, which holds the port and queue pair information in the case of Rx and Tx ethdev nodes.
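>
> As an illustration of methods b) and c), the calls could look like this
> (a sketch; the node names are assumed):
>
> static const char *next_nodes[] = {"ip4_lookup"};
>
> /* Method b) update the next_nodes[] of an existing node from index 0. */
> rte_node_edge_update(rte_node_from_name("my_node"), 0, next_nodes, 1);
>
> /* Method c) clone the base Rx node for (port 0, queue 1); the clone is
>  * named "ethdev_rx-0-1" (parent node name + "-" + provided name). */
> rte_node_clone(rte_node_from_name("ethdev_rx"), "0-1");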
>
> # Create the graph object
> ~~~~~~~~~~~~~~~~~~~~~~~~~
> Now that the nodes are linked, it's time to create a graph by including
> the required nodes. The application can provide a set of node patterns to
> form a graph object.
> The fnmatch() API is used underneath for pattern matching to include
> the required nodes.
>
> The rte_graph_create() API shall be used to create the graph.
>
> Example of a graph object creation:
>
> {"ethdev_rx_0_0", ipv4-*, ethdev_tx_0_*"}
>
> In the above example, A graph object will be created with ethdev Rx
> node of port 0 and queue 0, all ipv4* nodes in the system,
> and ethdev tx node of port 0 with all queues.
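>
> A sketch of creating such a graph object with the above patterns
> (error handling abbreviated):
>
> static const char *patterns[] = {"ethdev_rx-0-0", "ipv4-*", "ethdev_tx-0-*"};
> struct rte_graph_param gconf = {
> 	.socket_id = SOCKET_ID_ANY,
> 	.nb_node_patterns = RTE_DIM(patterns),
> 	.node_patterns = patterns,
> };
> rte_graph_t id = rte_graph_create("worker0", &gconf);
>
> if (id == RTE_GRAPH_ID_INVALID)
> 	rte_exit(EXIT_FAILURE, "rte_graph_create() failed\n");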
>
>
> # Multi core graph processing
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> In the current graph library implementation, specifically,
> the rte_graph_walk() and rte_node_enqueue* fast path API functions
> are designed to work on a single core for better performance.
> The fast path API works on a graph object, so the multi-core graph
> processing strategy is to create a graph object PER WORKER.
>
>
> # In fast path:
> ~~~~~~~~~~~~~~~
>
> Typical fast-path code looks like below, where the application
> gets the fast-path graph object through rte_graph_lookup()
> on the worker thread and runs rte_graph_walk() in a tight loop.
>
> struct rte_graph *graph = rte_graph_lookup("worker0");
>
> while (!done) {
> rte_graph_walk(graph);
> }
>
> # Context update when graph walk in action
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The fast-path object for the node is `struct rte_node`.
>
> It may be possible that in slow path, or while the graph walk is in action,
> the user needs to update the context of a node, hence needing access to the
> struct rte_node * memory.
>
> The rte_graph_foreach_node(), rte_graph_node_get(), rte_graph_node_get_by_name()
> APIs can be used to get the struct rte_node *. The rte_graph_foreach_node() iterator
> works on the struct rte_graph * fast-path graph object, while the others
> work on the graph ID or name.
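>
> For instance, fetching a node to reset its context could look like this
> (a sketch; the graph and node names are assumed):
>
> struct rte_node *node =
> 	rte_graph_node_get_by_name("worker0", "ethdev_rx-0-0");
>
> if (node != NULL)
> 	memset(node->ctx, 0, sizeof(node->ctx)); /* node-specific context */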
>
>
> # Get the node statistics using graph cluster
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The user may need to know the aggregate stats of a node across
> multiple graph objects, especially in the situation where each
> graph object is bound to a worker thread.
>
> A graph cluster object is introduced for statistics. rte_graph_cluster_stats_create()
> shall be used for creating a graph cluster with multiple graph objects and
> rte_graph_cluster_stats_get() to get the aggregate node statistics.
>
> An example statistics output from rte_graph_cluster_stats_get()
>
> +-----------+------------+-------------+---------------+------------+---------------+-----------+
> |Node |calls |objs |realloc_count |objs/call |objs/sec(10E6) |cycles/call|
> +-----------+------------+-------------+---------------+------------+---------------+-----------+
> |node0 |12977424 |3322220544 |5 |256.000 |3047.151872 |20.0000 |
> |node1 |12977653 |3322279168 |0 |256.000 |3047.210496 |17.0000 |
> |node2 |12977696 |3322290176 |0 |256.000 |3047.221504 |17.0000 |
> |node3 |12977734 |3322299904 |0 |256.000 |3047.231232 |17.0000 |
> |node4 |12977784 |3322312704 |1 |256.000 |3047.243776 |17.0000 |
> |node5 |12977825 |3322323200 |0 |256.000 |3047.254528 |17.0000 |
> +-----------+------------+-------------+---------------+------------+---------------+-----------+
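>
> A sketch of aggregating node stats across per-worker graphs (the callback
> body and the graph pattern are assumed for illustration):
>
> static int
> stats_cb(bool is_first, bool is_last, void *cookie,
> 	 const struct rte_graph_cluster_node_stats *st)
> {
> 	/* One row per node; is_first/is_last can frame a header/footer. */
> 	printf("%-16s %12"PRIu64" %12"PRIu64"\n", st->name, st->calls, st->objs);
> 	return 0;
> }
>
> static const char *graph_patterns[] = {"worker_*"};
> struct rte_graph_cluster_stats_param sparam = {
> 	.socket_id = SOCKET_ID_ANY,
> 	.fn = stats_cb,
> 	.nb_graph_patterns = RTE_DIM(graph_patterns),
> 	.graph_patterns = graph_patterns,
> };
> struct rte_graph_cluster_stats *stats = rte_graph_cluster_stats_create(&sparam);
>
> rte_graph_cluster_stats_get(stats, false); /* aggregate and invoke stats_cb */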
>
> # Node writing guide lines
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The process() function of a node is a fast-path function that needs to be written
> carefully to achieve maximum performance.
>
> Broadly speaking, there are two different types of nodes.
>
> 1) The first kind of node has a fixed next_nodes[] for the
> complete burst (like ethdev_rx, ethdev_tx) and is simple to write.
> The process() function can move the obj burst to the next node either using
> rte_node_next_stream_move() or using rte_node_next_stream_get() and
> rte_node_next_stream_put().
>
>
> 2) The second kind is the `intermediate node`, which decides the next_node[]
> to send to on a per-packet basis. In these nodes,
>
> a) Firstly, there has to be the best possible packet processing logic.
> b) Secondly, each packet needs to be queued to its next node.
>
> At least on some architectures, we get around ~10% more performance if we can avoid copying
> packet pointers from one node to the next, as it is ~= memcpy(BURST_SIZE x sizeof(void *)) x NODE_COUNT.
>
> This can be avoided only in the case where all the packets are destined to the same
> next node. We call this the home run case, and we use rte_node_next_stream_move() to
> move the burst of objects by just swapping the array pointer, i.e. moving the stream
> from one node to the next with the least number of cycles.
>
> Example of an intermediate node implementation with home run
> (a condensed code sketch follows these steps):
> a) Start with the speculation that next_node = ctx->next_node.
> This could be the next_node used in the previous invocation of this node.
> b) Get the next_node stream array and space using
> rte_node_next_stream_get(next_node, &space)
> c) while space != 0 and n_pkts_left != 0,
> prefetch the next pkt_set and process the current pkt_set to find their next node
> d) if all the next nodes of the current pkt_set match the speculated next node,
> just count them as successfully speculated (last_spec) so far and
> continue the loop without actually moving them to the next node.
> else if there is a mismatch,
> copy all the pkt_set pointers that were last_spec and
> move the current pkt_set to their respective next nodes using
> rte_node_enqueue_x1(). Also, one of the next nodes can be updated as the
> speculated next_node if it is more probable. Also set last_spec = 0.
> e) if n_pkts_left != 0 and space != 0,
> goto c) as there is space in the speculated next_node.
> f) if last_spec == n_pkts_left,
> then we successfully speculated all the packets to the right next node.
> Just call rte_node_next_stream_move(node, next_node) to move the
> stream/obj array to the next node. This is the home run where we avoided
> the memcpy of buffer pointers to the next node.
> g) if space == 0 and n_pkts_left != 0,
> goto b)
> h) Update ctx->next_node with the more probable next node.
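>
> A condensed sketch of such an intermediate node (simplified from the steps
> above; the classify_pkt() helper and the ctx layout are assumed for
> illustration):
>
> static uint16_t
> classify_node_process(struct rte_graph *graph, struct rte_node *node,
> 		      void **objs, uint16_t nb_objs)
> {
> 	rte_edge_t speculated = *(rte_edge_t *)node->ctx;
> 	uint16_t last_spec = 0, i;
>
> 	for (i = 0; i < nb_objs; i++) {
> 		rte_edge_t next = classify_pkt(objs[i]); /* assumed helper */
>
> 		if (i == last_spec && next == speculated) {
> 			last_spec++; /* still on the speculated path */
> 			continue;
> 		}
> 		if (i == last_spec && last_spec != 0) {
> 			/* First mismatch: flush the speculated prefix to
> 			 * keep packet order within the speculated edge. */
> 			rte_node_enqueue(graph, node, speculated,
> 					 objs, last_spec);
> 		}
> 		rte_node_enqueue_x1(graph, node, next, objs[i]);
> 	}
>
> 	/* Home run: every packet matched the speculation, so move the
> 	 * whole stream with a pointer swap instead of a memcpy. */
> 	if (last_spec == nb_objs)
> 		rte_node_next_stream_move(graph, node, speculated);
>
> 	return nb_objs;
> }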
>
> # In-tree node documentation
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> a) librte_node/ethdev_rx.c:
> This node does rte_eth_rx_burst() into a stream buffer acquired using
> rte_node_next_stream_get() and calls rte_node_next_stream_put(count)
> only when there are packets received. Each rte_node works on only
> one Rx port and queue that it gets from node->context.
> For each (port X, rx_queue Y), an rte_node is cloned from ethdev_rx_base_node
> as "ethdev_rx-X-Y" in rte_node_eth_config(), along with updating
> node->context. Each graph needs to be associated with a unique
> rte_node for a (port, rx_queue).
>
> b) librte_node/ethdev_tx.c:
> This node does rte_eth_tx_burst() for a burst of objs received by it.
> It sends the burst to a fixed Tx port and queue taken from
> node->context. For each port X, this rte_node is cloned from
> ethdev_tx_node_base as "ethdev_tx-X" in rte_node_eth_config(),
> along with updating node->context.
> Since each graph doesn't need more than one Txq per port,
> a Txq is assigned based on the graph id to each rte_node instance.
> Each graph needs to be associated with an rte_node for each port.
>
> c) librte_node/pkt_drop.c:
> This node frees all the objects that are passed to it.
>
> d) librte_node/ip4_lookup.c:
> This node is an intermediate node that does an LPM lookup for the received
> ipv4 packets, and the result determines each packet's next node.
> a) On a successful LPM lookup, the result contains the next_node id and
> the next-hop id with which the packet needs to be further processed.
> b) On LPM lookup failure, objects are redirected to the pkt_drop node.
> rte_node_ip4_route_add() is the control path API to add ipv4 routes.
> To achieve the home run, we use rte_node_next_stream_move() as mentioned in the
> sections above.
>
> e) librte_node/ip4_rewrite.c:
> This node gets packets from the ip4_lookup node, with the next-hop id for each
> packet embedded in rte_node_mbuf_priv1(mbuf)->nh. This id is used
> to determine the L2 header to be written to the packet before sending
> the packet out to a particular ethdev_tx node.
> rte_node_ip4_rewrite_add() is the control path API to add next-hop info.
>
> Jerin Jacob (1):
> graph: introduce graph subsystem
>
> Kiran Kumar K (1):
> test: add graph functional tests
>
> Nithin Dabilpuram (2):
> node: add packet processing nodes
> example/l3fwd_graph: l3fwd using graph architecture
>
> Pavan Nikhilesh (1):
> test: add graph performance test cases.
>
> app/test/Makefile | 5 +
> app/test/meson.build | 10 +-
> app/test/test_graph.c | 820 +++++++++++++++++
> app/test/test_graph_perf.c | 888 +++++++++++++++++++
> config/common_base | 13 +
> config/rte_config.h | 4 +
> examples/Makefile | 3 +
> examples/l3fwd-graph/Makefile | 58 ++
> examples/l3fwd-graph/main.c | 1131 ++++++++++++++++++++++++
> examples/l3fwd-graph/meson.build | 13 +
> examples/meson.build | 6 +-
> lib/Makefile | 6 +
> lib/librte_graph/Makefile | 28 +
> lib/librte_graph/graph.c | 578 ++++++++++++
> lib/librte_graph/graph_debug.c | 81 ++
> lib/librte_graph/graph_ops.c | 163 ++++
> lib/librte_graph/graph_populate.c | 224 +++++
> lib/librte_graph/graph_private.h | 113 +++
> lib/librte_graph/graph_stats.c | 396 +++++++++
> lib/librte_graph/meson.build | 11 +
> lib/librte_graph/node.c | 419 +++++++++
> lib/librte_graph/rte_graph.h | 277 ++++++
> lib/librte_graph/rte_graph_version.map | 46 +
> lib/librte_graph/rte_graph_worker.h | 280 ++++++
> lib/librte_node/Makefile | 30 +
> lib/librte_node/ethdev_ctrl.c | 106 +++
> lib/librte_node/ethdev_rx.c | 218 +++++
> lib/librte_node/ethdev_rx.h | 17 +
> lib/librte_node/ethdev_rx_priv.h | 45 +
> lib/librte_node/ethdev_tx.c | 74 ++
> lib/librte_node/ethdev_tx_priv.h | 33 +
> lib/librte_node/ip4_lookup.c | 657 ++++++++++++++
> lib/librte_node/ip4_lookup_priv.h | 17 +
> lib/librte_node/ip4_rewrite.c | 340 +++++++
> lib/librte_node/ip4_rewrite_priv.h | 44 +
> lib/librte_node/log.c | 14 +
> lib/librte_node/meson.build | 8 +
> lib/librte_node/node_private.h | 61 ++
> lib/librte_node/null.c | 23 +
> lib/librte_node/pkt_drop.c | 26 +
> lib/librte_node/rte_node_eth_api.h | 31 +
> lib/librte_node/rte_node_ip4_api.h | 33 +
> lib/librte_node/rte_node_version.map | 9 +
> lib/meson.build | 5 +-
> meson.build | 1 +
> mk/rte.app.mk | 2 +
> 46 files changed, 7362 insertions(+), 5 deletions(-)
> create mode 100644 app/test/test_graph.c
> create mode 100644 app/test/test_graph_perf.c
> create mode 100644 examples/l3fwd-graph/Makefile
> create mode 100644 examples/l3fwd-graph/main.c
> create mode 100644 examples/l3fwd-graph/meson.build
> create mode 100644 lib/librte_graph/Makefile
> create mode 100644 lib/librte_graph/graph.c
> create mode 100644 lib/librte_graph/graph_debug.c
> create mode 100644 lib/librte_graph/graph_ops.c
> create mode 100644 lib/librte_graph/graph_populate.c
> create mode 100644 lib/librte_graph/graph_private.h
> create mode 100644 lib/librte_graph/graph_stats.c
> create mode 100644 lib/librte_graph/meson.build
> create mode 100644 lib/librte_graph/node.c
> create mode 100644 lib/librte_graph/rte_graph.h
> create mode 100644 lib/librte_graph/rte_graph_version.map
> create mode 100644 lib/librte_graph/rte_graph_worker.h
> create mode 100644 lib/librte_node/Makefile
> create mode 100644 lib/librte_node/ethdev_ctrl.c
> create mode 100644 lib/librte_node/ethdev_rx.c
> create mode 100644 lib/librte_node/ethdev_rx.h
> create mode 100644 lib/librte_node/ethdev_rx_priv.h
> create mode 100644 lib/librte_node/ethdev_tx.c
> create mode 100644 lib/librte_node/ethdev_tx_priv.h
> create mode 100644 lib/librte_node/ip4_lookup.c
> create mode 100644 lib/librte_node/ip4_lookup_priv.h
> create mode 100644 lib/librte_node/ip4_rewrite.c
> create mode 100644 lib/librte_node/ip4_rewrite_priv.h
> create mode 100644 lib/librte_node/log.c
> create mode 100644 lib/librte_node/meson.build
> create mode 100644 lib/librte_node/node_private.h
> create mode 100644 lib/librte_node/null.c
> create mode 100644 lib/librte_node/pkt_drop.c
> create mode 100644 lib/librte_node/rte_node_eth_api.h
> create mode 100644 lib/librte_node/rte_node_ip4_api.h
> create mode 100644 lib/librte_node/rte_node_version.map
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-01-31 18:34 ` [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem Ray Kinsella
@ 2020-02-01 5:44 ` Jerin Jacob
2020-02-17 7:19 ` Jerin Jacob
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-01 5:44 UTC (permalink / raw)
To: Ray Kinsella
Cc: Jerin Jacob, dpdk-dev, Prasun Kapoor, Nithin Dabilpuram,
Kiran Kumar K, Pavan Nikhilesh, Narayana Prasad, nsaxena,
sshankarnara, Honnappa Nagarahalli, Thomas Monjalon,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard
On Sat, Feb 1, 2020 at 12:05 AM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> Hi Jerin,
Hi Ray,
> Much kudos on a huge contribution to the community.
All the authors of this patch set spent at least the last 3-4 months
bringing up this RFC, with performance data and an l3fwd-graph example
application.
We hope it will be useful for the DPDK community.
> Look forward to spend more time looking at it in the next few days.
That would be very helpful.
>
> I'll bite and ask the obvious question - why would I use rte_graph over FD.io VPP?
I did not get the opportunity to work day to day on FD.io projects. My
understanding of FD.io is very limited.
I do think it is NOT one vs. the other. VPP is quite a mature project, and
they are pioneers in graph architecture.
VPP is an entirely separate framework by itself and provides an
alternate data plane environment.
The objective of rte_graph is to add a graph subsystem to DPDK as a
foundational element.
This will allow the DPDK community to use the powerful graph
architecture concept in a fundamental
way with purely DPDK-based applications.
That would boil down to:
1) Provision to use a pure native mbuf-based DPDK application with the graph
architecture, i.e.
avoid the cost of packet format conversion for good.
2) Use the rte_mempool, rte_flow, rte_tm, rte_cryptodev, rte_eventdev,
rte_regexdev HW accelerated
APIs in the data plane application.
3) Based on our experience, NPU HW accelerators are very different from
one vendor to another.
Going forward, we believe API abstraction may not be enough to abstract
the differences in HW.
Vendor-specific nodes can abstract the HW differences and reuse the
generic nodes as needed.
This would help both the silicon vendors and DPDK end users avoid writing
capability-based APIs and vendor-specific fast path routines.
So such vendor plugins can be part of dpdk to help both vendors
and end users of DPDK.
4) Provision for multiprocess support in graph architecture.
5) Contribute to dpdk.org
6) Use Linux coding standards.
7) Finally, one may consider using rte_graph _if_ a specific workload
performs better
in this model due to the framework and/or the HW acceleration attached to it.
>
> Ray K
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
@ 2020-02-02 10:34 ` Stephen Hemminger
2020-02-02 10:35 ` Stephen Hemminger
2020-02-02 10:38 ` Stephen Hemminger
2 siblings, 0 replies; 31+ messages in thread
From: Stephen Hemminger @ 2020-02-02 10:34 UTC (permalink / raw)
To: jerinj
Cc: dev, pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard
On Fri, 31 Jan 2020 22:31:57 +0530
<jerinj@marvell.com> wrote:
> + /* Create graph object */
> + graph = calloc(1, sizeof(*graph));
> + if (graph == NULL)
> + set_err(ENOMEM, fail, "failed to calloc graph object");
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
2020-02-02 10:34 ` Stephen Hemminger
@ 2020-02-02 10:35 ` Stephen Hemminger
2020-02-02 11:08 ` Jerin Jacob
2020-02-02 10:38 ` Stephen Hemminger
2 siblings, 1 reply; 31+ messages in thread
From: Stephen Hemminger @ 2020-02-02 10:35 UTC (permalink / raw)
To: jerinj
Cc: dev, pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard
On Fri, 31 Jan 2020 22:31:57 +0530
<jerinj@marvell.com> wrote:
> + /* Create graph object */
> + graph = calloc(1, sizeof(*graph));
> + if (graph == NULL)
> + set_err(ENOMEM, fail, "failed to calloc graph object");
This won't be safe if used in primary/secondary process model.
You would need to use rte_calloc etc to allow this.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
2020-02-02 10:34 ` Stephen Hemminger
2020-02-02 10:35 ` Stephen Hemminger
@ 2020-02-02 10:38 ` Stephen Hemminger
2020-02-02 11:21 ` Jerin Jacob
2 siblings, 1 reply; 31+ messages in thread
From: Stephen Hemminger @ 2020-02-02 10:38 UTC (permalink / raw)
To: jerinj
Cc: dev, pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, honnappa.nagarahalli, thomas,
david.marchand, ferruh.yigit, arybchenko, ajit.khaparde,
xiaolong.ye, rasland, maxime.coquelin, akhil.goyal,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, gavin.hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard
On Fri, 31 Jan 2020 22:31:57 +0530
<jerinj@marvell.com> wrote:
> +
> +#define set_err(err, where, fmt, ...) do { \
> + graph_err(fmt, ##__VA_ARGS__); \
> + rte_errno = err; \
> + goto where; \
> +} while (0)
I dislike this macro, it makes static analysis harder and requires
the reader to know that the argument is a goto target. And since it is lower
case, it implies that it is a function. Usually macros are in upper case.
It makes the code smaller, but at the cost of being different, which impacts the
readability of the code.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-02-02 10:35 ` Stephen Hemminger
@ 2020-02-02 11:08 ` Jerin Jacob
0 siblings, 0 replies; 31+ messages in thread
From: Jerin Jacob @ 2020-02-02 11:08 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, dpdk-dev, Prasun Kapoor, Nithin Dabilpuram,
Kiran Kumar K, Pavan Nikhilesh, Narayana Prasad, nsaxena,
sshankarnara, Honnappa Nagarahalli, Thomas Monjalon,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
Ray Kinsella, techboard
On Sun, Feb 2, 2020 at 4:05 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Fri, 31 Jan 2020 22:31:57 +0530
> <jerinj@marvell.com> wrote:
>
> > + /* Create graph object */
> > + graph = calloc(1, sizeof(*graph));
> > + if (graph == NULL)
> > + set_err(ENOMEM, fail, "failed to calloc graph object");
>
> This won't be safe if used in primary/secondary process model.
> You would need to use rte_calloc etc to allow this.
That memory is used only for local housekeeping purposes. Further down in the
function, the real graph object is created from
the memzone with all the required data for the fast path; see graph_fp_mem_create().
Workers/secondary processes get the real fast-path graph object through
rte_graph_lookup(), which
returns the "struct rte_graph *", followed by invoking
rte_graph_walk(struct rte_graph *graph)
for the graph walk.
struct rte_graph *
rte_graph_lookup(const char *name)
{
const struct rte_memzone *mz;
struct rte_graph *rc = NULL;
mz = rte_memzone_lookup(name);
if (mz)
rc = mz->addr;
return graph_mem_fixup_secondray(rc);
}
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-02-02 10:38 ` Stephen Hemminger
@ 2020-02-02 11:21 ` Jerin Jacob
2020-02-03 9:14 ` Gaetan Rivet
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-02 11:21 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, dpdk-dev, Prasun Kapoor, Nithin Dabilpuram,
Kiran Kumar K, Pavan Nikhilesh, Narayana Prasad, nsaxena,
sshankarnara, Honnappa Nagarahalli, Thomas Monjalon,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
Ray Kinsella, techboard
On Sun, Feb 2, 2020 at 4:08 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Fri, 31 Jan 2020 22:31:57 +0530
> <jerinj@marvell.com> wrote:
>
> > +
> > +#define set_err(err, where, fmt, ...) do { \
> > + graph_err(fmt, ##__VA_ARGS__); \
> > + rte_errno = err; \
> > + goto where; \
> > +} while (0)
>
> I dislike this macro, it makes static analysis harder and requires
> the reader to know that the argument is a goto target. And since it is lower
> case, it implies that it is a function. Usually macros are in upper case.
>
> It makes the code smaller, but at the cost of being different, which impacts the
> readability of the code.
I don't like the macro either. That's the only case where I have used a
macro; even in the fast path,
I did multiple reworks to remove macros. Without that macro, the code bloats
and will have a lot of repeated code. I have a preference for using _goto_ to have
a unified exit, to simplify the code and therefore maintain it.
You can see the amount of verification done in rte_graph_create(). So, IMO,
in this case, it is justified to use the macro.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-02-02 11:21 ` Jerin Jacob
@ 2020-02-03 9:14 ` Gaetan Rivet
2020-02-03 9:49 ` Jerin Jacob
0 siblings, 1 reply; 31+ messages in thread
From: Gaetan Rivet @ 2020-02-03 9:14 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Stephen Hemminger, dpdk-dev
On 02/02/2020 12:21, Jerin Jacob wrote:
> On Sun, Feb 2, 2020 at 4:08 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
>>
>> On Fri, 31 Jan 2020 22:31:57 +0530
>> <jerinj@marvell.com> wrote:
>>
>>> +
>>> +#define set_err(err, where, fmt, ...) do { \
>>> + graph_err(fmt, ##__VA_ARGS__); \
>>> + rte_errno = err; \
>>> + goto where; \
>>> +} while (0)
>>
>> I dislike this macro, it makes static analysis harder and requires
>> the reader to know that the argument is a goto target. And since it is lower
>> case, it implies that it is a function. Usually macros are in upper case.
>>
>> It makes the code smaller, but at the cost of being different, which impacts the
>> readability of the code.
>
> I don't like the macro either. That's the only case where I have used a
> macro; even in the fast path,
> I did multiple reworks to remove macros. Without that macro, the code bloats
> and will have a lot of repeated code. I have a preference for using _goto_ to have
> a unified exit, to simplify the code and therefore maintain it.
> You can see the amount of verification done in rte_graph_create(). So, IMO,
> in this case, it is justified to use the macro.
>
Hi Jerin,
Renaming it in uppercase and making it clear there will be a jump could alleviate some concerns?
Something like SET_ERR_JMP() maybe or SET_ERR_GOTO().
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 1/5] graph: introduce graph subsystem
2020-02-03 9:14 ` Gaetan Rivet
@ 2020-02-03 9:49 ` Jerin Jacob
0 siblings, 0 replies; 31+ messages in thread
From: Jerin Jacob @ 2020-02-03 9:49 UTC (permalink / raw)
To: Gaetan Rivet; +Cc: Stephen Hemminger, dpdk-dev
On Mon, Feb 3, 2020 at 2:44 PM Gaetan Rivet <grive@u256.net> wrote:
>
> On 02/02/2020 12:21, Jerin Jacob wrote:
> > On Sun, Feb 2, 2020 at 4:08 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> >>
> >> On Fri, 31 Jan 2020 22:31:57 +0530
> >> <jerinj@marvell.com> wrote:
> >>
> >>> +
> >>> +#define set_err(err, where, fmt, ...) do { \
> >>> + graph_err(fmt, ##__VA_ARGS__); \
> >>> + rte_errno = err; \
> >>> + goto where; \
> >>> +} while (0)
> >>
> >> I dislike this macro, it makes static analysis harder and requires
> >> the reader to know that the argument is a goto target. And since it is lower
> >> case, it implies that it is a function. Usually macros are in upper case.
> >>
> >> It makes the code smaller, but at the cost of being different, which impacts the
> >> readability of the code.
> >
> > I don't like the macro either. That's the only case where I have used a
> > macro; even in the fast path,
> > I did multiple reworks to remove macros. Without that macro, the code bloats
> > and will have a lot of repeated code. I have a preference for using _goto_ to have
> > a unified exit, to simplify the code and therefore maintain it.
> > You can see the amount of verification done in rte_graph_create(). So, IMO,
> > in this case, it is justified to use the macro.
> >
>
> Hi Jerin,
Hi Gaetan,
>
> Renaming it in uppercase and making it clear there will be a jump could alleviate some concerns?
> Something like SET_ERR_JMP() maybe or SET_ERR_GOTO().
OK. I will change to SET_ERR_JMP() in the next version.
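
For reference, the rename is mechanical; a sketch of the result:

#define SET_ERR_JMP(err, where, fmt, ...) do { \
	graph_err(fmt, ##__VA_ARGS__); \
	rte_errno = err; \
	goto where; \
} while (0)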
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-01 5:44 ` Jerin Jacob
@ 2020-02-17 7:19 ` Jerin Jacob
2020-02-17 8:38 ` Thomas Monjalon
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-17 7:19 UTC (permalink / raw)
To: Ray Kinsella
Cc: Jerin Jacob, dpdk-dev, Prasun Kapoor, Nithin Dabilpuram,
Kiran Kumar K, Pavan Nikhilesh, Narayana Prasad, nsaxena,
sshankarnara, Honnappa Nagarahalli, Thomas Monjalon,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
the comments.
Is anyone else planning to have an architecture level or API usage
level review or any review of other top-level aspects?
I believe low-level aspects of the code can be taken care of from the
v1 series onwards.
I am just wondering what would be an appropriate time for sending v1.
If someone is planning to review at the top level,
I can wait until that review is complete. Let us know if anyone is planning to review.
If there are no other comments, then I would like to request tech board approval
for the library at the 26 Feb meeting.
[1]
http://mails.dpdk.org/archives/dev/2020-January/156765.html
On Sat, Feb 1, 2020 at 11:14 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Sat, Feb 1, 2020 at 12:05 AM Ray Kinsella <mdr@ashroe.eu> wrote:
> >
> > Hi Jerin,
>
> Hi Ray,
>
> > Much kudos on a huge contribution to the community.
>
> All the authors of this patch set spent at least the last 3-4 months
> bringing up this RFC, with performance data and an l3fwd-graph example
> application.
> We hope it will be useful for the DPDK community.
>
> > Look forward to spending more time looking at it in the next few days.
>
> That would be very helpful.
>
> >
> > I'll bite and ask the obvious question - why would I use rte_graph over FD.io VPP?
>
> I did not get the opportunity to work day to day on FD.io projects. My
> understanding of FD.io is very limited.
> I do think it is NOT one vs. the other. VPP is quite a mature project, and
> they are pioneers in graph architecture.
>
> VPP is an entirely separate framework by itself and provides an
> alternate data plane environment.
> The objective of rte_graph is to add a graph subsystem to DPDK as a
> foundational element.
> This will allow the DPDK community to use the powerful graph
> architecture concept in a fundamental
> way with purely DPDK-based applications.
>
> That would boil down to:
> 1) Provision to use a pure native mbuf-based DPDK application with the graph
> architecture, i.e.
> avoid the cost of packet format conversion for good.
> 2) Use the rte_mempool, rte_flow, rte_tm, rte_cryptodev, rte_eventdev,
> rte_regexdev HW accelerated
> APIs in the data plane application.
> 3) Based on our experience, NPU HW accelerators are very different from
> one vendor to another.
> Going forward, we believe API abstraction may not be enough to abstract
> the differences in HW.
> Vendor-specific nodes can abstract the HW differences and reuse the
> generic nodes as needed.
> This would help both the silicon vendors and DPDK end users avoid writing
> capability-based APIs and vendor-specific fast path routines.
> So such vendor plugins can be part of dpdk to help both vendors
> and end users of DPDK.
> 4) Provision for multiprocess support in graph architecture.
> 5) Contribute to dpdk.org
> 6) Use Linux coding standards.
> 7) Finally, one may consider using rte_graph _if_ a specific workload
> performs better
> in this model due to the framework and/or the HW acceleration attached to it.
>
>
> >
> > Ray K
> >
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-17 7:19 ` Jerin Jacob
@ 2020-02-17 8:38 ` Thomas Monjalon
2020-02-17 10:58 ` Jerin Jacob
0 siblings, 1 reply; 31+ messages in thread
From: Thomas Monjalon @ 2020-02-17 8:38 UTC (permalink / raw)
To: Jerin Jacob, Jerin Jacob
Cc: Ray Kinsella, dpdk-dev, Prasun Kapoor, Nithin Dabilpuram,
Kiran Kumar K, Pavan Nikhilesh, Narayana Prasad, nsaxena,
sshankarnara, Honnappa Nagarahalli, David Marchand, Ferruh Yigit,
Andrew Rybchenko, Ajit Khaparde, Ye, Xiaolong, Raslan Darawsheh,
Maxime Coquelin, Akhil Goyal, Cristian Dumitrescu, John McNamara,
Richardson, Bruce, Anatoly Burakov, Gavin Hu, David Christensen,
Ananyev, Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao,
Nikhil, Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
Hi Jerin,
17/02/2020 08:19, Jerin Jacob:
> I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> the comments.
>
> Is anyone else planning to have an architecture level or API usage
> level review or any review of other top-level aspects?
If we add rte_graph to DPDK, we will have 2 similar libraries.
I already proposed several times to move rte_pipeline in a separate
repository for two reasons:
1/ it is acting at a higher API layer level
2/ there can be different solutions in this layer
I think 1/ was commonly agreed in the community.
Now we see one more proof of the reason 2/.
I believe it is time to move rte_pipeline (Packet Framework)
in a separate repository, and welcome rte_graph as well in another
separate repository.
I think the original DPDK repository should focus on low-level features
which offer hardware offloads and optimizations.
Consuming the low-level API in different abstractions,
and building applications, should be done on top of dpdk.git.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-17 8:38 ` Thomas Monjalon
@ 2020-02-17 10:58 ` Jerin Jacob
2020-02-21 10:30 ` Jerin Jacob
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-17 10:58 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> Hi Jerin,
Hi Thomas,
Thanks for starting this discussion now. It is an interesting
discussion. Some thoughts below.
We can decide based on community consensus and follow a single rule
across the components.
>
> 17/02/2020 08:19, Jerin Jacob:
> > I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> > the comments.
> >
> > Is anyone else planning to have an architecture level or API usage
> > level review or any review of other top-level aspects?
>
> If we add rte_graph to DPDK, we will have 2 similar libraries.
>
> I already proposed several times to move rte_pipeline in a separate
> repository for two reasons:
> 1/ it is acting at a higher API layer level
We need to define what the higher layer API is. Is it processing beyond L2?
In the context of the graph library, it is a framework not using any of
the subsystem APIs
other than EAL, and it is under lib/librte_graph.
The nodes library uses the graph and other subsystem components such as ethdev, and
it is under lib/librte_node/.
Another interesting question would be: what would be the issue with DPDK
supporting beyond L2, or higher-level protocols?
> 2/ there can be different solutions in this layer
Is there any issue with that?
There is overlap with the distributor library and eventdev as well;
the ethdev and SW traffic manager libraries too. That list goes on.
>
> I think 1/ was commonly agreed in the community.
> Now we see one more proof of the reason 2/.
>
> I believe it is time to move rte_pipeline (Packet Framework)
> in a separate repository, and welcome rte_graph as well in another
> separate repository.
What would be gained out of this?
My concerns are:
# Like packet-gen, the new code will be filled with unnecessary DPDK
version checks
and unnecessary compatibility issues.
# Anything that is not in the main dpdk repo is a second-class citizen.
# Customers have the pain of using two repos and two releases. Internally,
it can be two different
repos, but the release needs to go through one repo.
If we are focusing ONLY on the driver API, then how can DPDK grow
further? If the Linux kernel
had been only the core kernel, with networking/storage in
different repos, it would
not have grown as it has.
What is the real concern? Maintenance?
> I think the original DPDK repository should focus on low-level features
> which offer hardware offloads and optimizations.
The nodes can be vendor-specific to optimize specific use cases.
As I mentioned in the cover letter,
"
2) Based on our experience, NPU HW accelerators are very different from one vendor
to another. Going forward, we believe API abstraction may not be enough to
abstract the differences in HW. Vendor-specific nodes can abstract the HW
differences and reuse the generic nodes as needed.
This would help both the silicon vendors and DPDK end users.
"
Thoughts from other folks?
> Consuming the low-level API in different abstractions,
> and building applications, should be done on top of dpdk.git.
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-17 10:58 ` Jerin Jacob
@ 2020-02-21 10:30 ` Jerin Jacob
2020-02-21 11:10 ` Thomas Monjalon
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-21 10:30 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > Hi Jerin,
>
> Hi Thomas,
>
> Thanks for starting this discussion now. It is an interesting
> discussion. Some thoughts below.
> We can decide based on community consensus and follow a single rule
> across the components.
Thomas,
No feedback yet on the below questions.
If there is no consensus over email, I would like to propose this topic
for the 26th Feb TB meeting.
>
> >
> > 17/02/2020 08:19, Jerin Jacob:
> > > I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> > > the comments.
> > >
> > > Is anyone else planning to have an architecture level or API usage
> > > level review or any review of other top-level aspects?
> >
> > If we add rte_graph to DPDK, we will have 2 similar libraries.
> >
> > I already proposed several times to move rte_pipeline in a separate
> > repository for two reasons:
> > 1/ it is acting at a higher API layer level
>
> We need to define what the higher layer API is. Is it processing beyond L2?
>
> In the context of the graph library, it is a framework not using any of
> the subsystem APIs
> other than EAL, and it is under lib/librte_graph.
> The nodes library uses the graph and other subsystem components such as ethdev, and
> it is under lib/librte_node/.
>
>
> Another interesting question would be: what would be the issue with DPDK
> supporting beyond L2, or higher-level protocols?
>
>
> > 2/ there can be different solutions in this layer
>
> Is there any issue with that?
> There is overlap with the distributor library and eventdev as well;
> the ethdev and SW traffic manager libraries too. That list goes on.
>
> >
> > I think 1/ was commonly agreed in the community.
> > Now we see one more proof of the reason 2/.
> >
> > I believe it is time to move rte_pipeline (Packet Framework)
> > in a separate repository, and welcome rte_graph as well in another
> > separate repository.
>
> What would be gained out of this?
>
> My concerns are:
> # Like packet-gen, the new code will be filled with unnecessary DPDK
> version checks
> and unnecessary compatibility issues.
> # Anything that is not in the main dpdk repo is a second-class citizen.
> # Customers have the pain of using two repos and two releases. Internally,
> it can be two different
> repos, but the release needs to go through one repo.
>
> If we are focusing ONLY on the driver API, then how can DPDK grow
> further? If the Linux kernel
> had been only the core kernel, with networking/storage in
> different repos, it would
> not have grown as it has.
>
> What is the real concern? Maintenance?
>
> > I think the original DPDK repository should focus on low-level features
> > which offer hardware offloads and optimizations.
>
> The nodes can be vendor-specific to optimize the specific use cases.
> As I mentioned in the cover letter,
>
> "
> 2) Based on our experience, NPU HW accelerators are so different from one vendor
> to another. Going forward, we believe API abstraction may not be enough to
> abstract the differences in HW. Vendor-specific nodes can abstract the HW
> differences and reuse the generic nodes as needed.
> This would help both the silicon vendors and DPDK end users.
> "
>
> Thoughts from other folks?
>
>
> > Consuming the low-level API in different abstractions,
> > and building applications, should be done on top of dpdk.git.
> >
> >
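To make the framework/nodes split described above concrete, here is a
minimal sketch of registering a node and walking a graph. The names below
(rte_node_register, RTE_NODE_REGISTER, rte_graph_create, rte_graph_lookup,
rte_graph_walk, rte_node_enqueue) follow the RFC's rte_graph.h and
rte_graph_worker.h naming, but the exact signatures are assumptions for
illustration, not the final API:

    #include <rte_graph.h>
    #include <rte_graph_worker.h>

    /* Process function: forward the whole burst to edge 0 ("pkt_drop").
     * When every object moves to a single next node, the walker can take
     * the "homerun" pointer-swap path mentioned in the cover letter. */
    static uint16_t
    my_node_process(struct rte_graph *graph, struct rte_node *node,
                    void **objs, uint16_t nb_objs)
    {
            rte_node_enqueue(graph, node, 0 /* edge id */, objs, nb_objs);
            return nb_objs;
    }

    /* Node as a plugin: registered from the library, no core changes. */
    static struct rte_node_register my_node = {
            .name = "my_node",
            .process = my_node_process,
            .nb_edges = 1,
            .next_nodes = { "pkt_drop" },
    };
    RTE_NODE_REGISTER(my_node);

    /* Worker: build a per-core graph from node name patterns and walk it. */
    static void
    worker(void)
    {
            const char *patterns[] = { "my_node", "pkt_drop" };
            struct rte_graph_param prm = {
                    .nb_node_patterns = 2,
                    .node_patterns = patterns,
            };

            if (rte_graph_create("worker0", &prm) != RTE_GRAPH_ID_INVALID) {
                    struct rte_graph *graph = rte_graph_lookup("worker0");
                    while (1)
                            rte_graph_walk(graph);
            }
    }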
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 10:30 ` Jerin Jacob
@ 2020-02-21 11:10 ` Thomas Monjalon
2020-02-21 15:38 ` Mattias Rönnblom
2020-02-21 15:56 ` Jerin Jacob
0 siblings, 2 replies; 31+ messages in thread
From: Thomas Monjalon @ 2020-02-21 11:10 UTC (permalink / raw)
To: Jerin Jacob
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
21/02/2020 11:30, Jerin Jacob:
> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > Thanks for starting this discussion now. It is an interesting
> > discussion. Some thoughts below.
> > We can decide based on community consensus and follow a single rule
> > across the components.
>
> Thomas,
>
> No feedback yet on the below questions.
Indeed. I was waiting for opinions from others.
> If there is no consensus in the email thread, I would like to propose this topic
> for the 26th Feb TB meeting.
I gave my opinion below.
If a consensus cannot be reached, I agree with the request to the techboard.
> > > 17/02/2020 08:19, Jerin Jacob:
> > > > I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> > > > the comments.
> > > >
> > > > Is anyone else planning to have an architecture level or API usage
> > > > level review or any review of other top-level aspects?
> > >
> > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > >
> > > I already proposed several times to move rte_pipeline in a separate
> > > repository for two reasons:
> > > 1/ it is acting at a higher API layer level
> >
> > We need to define what the higher-layer API is. Is it processing beyond L2?
My opinion is that any API which is implemented differently
for different hardware should be in DPDK.
Hardware devices can offload protocol processing higher than L2,
so L2 does not look to be a good limit from my point of view.
> > In the context of the graph library, it is a framework, not using any of
> > the subsystem APIs
> > other than EAL, and it is under lib/librte_graph.
> > The nodes library uses the graph library and other subsystem components such as
> > ethdev, and it is under lib/librte_node/
> >
> >
> > Another interesting question would be: what would be the issue with DPDK
> > supporting beyond L2, or higher-level protocols?
Definitely higher than L2 is OK in DPDK as long as it is related to hardware
capabilities, not software stack (which can be a DPDK application).
> > > 2/ there can be different solutions in this layer
> >
> > Is there any issue with that?
> > There is overlap between the distributor library and eventdev as well,
> > and between ethdev and the SW traffic manager libraries too. That list goes on.
I don't know how much it is an issue.
But I think it shows that at least one implementation is not generic enough.
> > > I think 1/ was commonly agreed in the community.
> > > Now we see one more proof of the reason 2/.
> > >
> > > I believe it is time to move rte_pipeline (Packet Framework)
> > > in a separate repository, and welcome rte_graph as well in another
> > > separate repository.
> >
> > What would be gain out of this?
The gain is to be clear about what should be the focus for contributors
working on the main DPDK repository.
What is expected to be maintained, tested, etc.
> > My concerns are:
> > # Like packet-gen, the new code will be filled with unnecessary DPDK
> > version checks
> > and unnecessary compatibility issues.
> > # Anything that is not in the main dpdk repo is a second-class citizen.
> > # Customers have the pain of using two repos and two releases. Internally,
> > it can be two different
> > repos, but the release needs to go through one repo.
> >
> > If we are focusing ONLY on the driver API, then how can DPDK grow
> > further? If the Linux kernel
> > had been thought of as just the core kernel, with networking/storage as
> > different repos, it would
> > not have grown as it has.
Linux kernel is selecting what can enter in the focus or not.
And I wonder what is the desire of extending/growing the scope of a library?
> > What is the real concern? Maintenance?
> >
> > > I think the original DPDK repository should focus on low-level features
> > > which offer hardware offloads and optimizations.
> >
> > The nodes can be vendor-specific to optimize the specific use cases.
> > As I mentioned in the cover letter,
> >
> > "
> > 2) Based on our experience, NPU HW accelerators are so different from one vendor
> > to another. Going forward, we believe API abstraction may not be enough to
> > abstract the differences in HW. Vendor-specific nodes can abstract the HW
> > differences and reuse the generic nodes as needed.
> > This would help both the silicon vendors and DPDK end users.
> > "
> >
> > Thoughts from other folks?
> >
> >
> > > Consuming the low-level API in different abstractions,
> > > and building applications, should be done on top of dpdk.git.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 11:10 ` Thomas Monjalon
@ 2020-02-21 15:38 ` Mattias Rönnblom
2020-02-21 15:53 ` dave
2020-02-21 15:56 ` Jerin Jacob
1 sibling, 1 reply; 31+ messages in thread
From: Mattias Rönnblom @ 2020-02-21 15:38 UTC (permalink / raw)
To: Thomas Monjalon, Jerin Jacob
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith, Jasvinder Singh,
Vladimir Medvedkin, techboard, Stephen Hemminger, dave
On 2020-02-21 12:10, Thomas Monjalon wrote:
> 21/02/2020 11:30, Jerin Jacob:
>> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>> Thanks for starting this discussion now. It is an interesting
>>> discussion. Some thoughts below.
>>> We can decide based on community consensus and follow a single rule
>>> across the components.
>> Thomas,
>>
>> No feedback yet on the below questions.
> Indeed. I was waiting for opinions from others.
>
>> If there is no consensus in the email thread, I would like to propose this topic
>> for the 26th Feb TB meeting.
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.
>
>
>>>> 17/02/2020 08:19, Jerin Jacob:
>>>>> I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
>>>>> the comments.
>>>>>
>>>>> Is anyone else planning to have an architecture level or API usage
>>>>> level review or any review of other top-level aspects?
>>>> If we add rte_graph to DPDK, we will have 2 similar libraries.
>>>>
>>>> I already proposed several times to move rte_pipeline in a separate
>>>> repository for two reasons:
>>>> 1/ it is acting at a higher API layer level
>>> We need to define what the higher-layer API is. Is it processing beyond L2?
> My opinion is that any API which is implemented differently
> for different hardware should be in DPDK.
> Hardware devices can offload protocol processing higher than L2,
> so L2 does not look to be a good limit from my point of view.
>
If you assume the capability of networking hardware will grow, and you
want to unify different networking hardware with varying capabilities
(and also include software-only implementations) under one API, then you
might well end up growing DPDK into the software stack you mention
below. Soft implementations of complex protocols will require operating
system-like support services like timers, RCU, various lock-less data
structures, deferred work mechanism, counter handling frameworks,
control plane interfaces, etc. Coupling should always be avoided of
course, but DPDK would inevitably no longer be a pick-and-choose
smörgåsbord library - at least as long as the consumer wants to utilize
this higher-layer functionality.
This would make DPDK more of a packet processing run-time or a
special-purpose networking operating system than the "bunch of
Ethernet drivers in user space" it started out as.
I'm not saying that's a bad thing. In fact, I think it sounds like an
interesting option, although also a very challenging one. From what I
can see, DPDK has already set out along this route. Whether this is a
conscious decision or not, I don't know. Add to this, if Linux expands
further with AF_XDP-like features, beyond simply packet I/O, it might
not only try to take over DPDK's original concerns, but also more of the
current ones.
>>> In the context of the graph library, it is a framework, not using any of
>>> the subsystem APIs
>>> other than EAL, and it is under lib/librte_graph.
>>> The nodes library uses the graph library and other subsystem components such as
>>> ethdev, and it is under lib/librte_node/
>>>
>>>
>>> Another interesting question would be: what would be the issue with DPDK
>>> supporting beyond L2, or higher-level protocols?
> Definitely higher than L2 is OK in DPDK as long as it is related to hardware
> capabilities, not software stack (which can be a DPDK application).
>
>
>>>> 2/ there can be different solutions in this layer
>>> Is there any issue with that?
>>> There is overlap between the distributor library and eventdev as well,
>>> and between ethdev and the SW traffic manager libraries too. That list goes on.
> I don't know how much it is an issue.
> But I think it shows that at least one implementation is not generic enough.
>
>
>>>> I think 1/ was commonly agreed in the community.
>>>> Now we see one more proof of the reason 2/.
>>>>
>>>> I believe it is time to move rte_pipeline (Packet Framework)
>>>> in a separate repository, and welcome rte_graph as well in another
>>>> separate repository.
>>> What would be gain out of this?
> The gain is to be clear about what should be the focus for contributors
> working on the main DPDK repository.
> What is expected to be maintained, tested, etc.
>
>
>>> My concerns are:
>>> # Like packet-gen, the new code will be filled with unnecessary DPDK
>>> version checks
>>> and unnecessary compatibility issues.
>>> # Anything that is not in the main dpdk repo is a second-class citizen.
>>> # Customers have the pain of using two repos and two releases. Internally,
>>> it can be two different
>>> repos, but the release needs to go through one repo.
>>>
>>> If we are focusing ONLY on the driver API, then how can DPDK grow
>>> further? If the Linux kernel
>>> had been thought of as just the core kernel, with networking/storage as
>>> different repos, it would
>>> not have grown as it has.
> Linux kernel is selecting what can enter in the focus or not.
> And I wonder what is the desire of extending/growing the scope of a library?
>
>
>>> What is the real concern? Maintenance?
>>>
>>>> I think the original DPDK repository should focus on low-level features
>>>> which offer hardware offloads and optimizations.
>>> The nodes can be vendor-specific to optimize the specific use cases.
>>> As I mentioned in the cover letter,
>>>
>>> "
>>> 2) Based on our experience, NPU HW accelerators are so different from one vendor
>>> to another. Going forward, we believe API abstraction may not be enough to
>>> abstract the differences in HW. Vendor-specific nodes can abstract the HW
>>> differences and reuse the generic nodes as needed.
>>> This would help both the silicon vendors and DPDK end users.
>>> "
>>>
>>> Thoughts from other folks?
>>>
>>>
>>>> Consuming the low-level API in different abstractions,
>>>> and building applications, should be done on top of dpdk.git.
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 15:38 ` Mattias Rönnblom
@ 2020-02-21 15:53 ` dave
2020-02-21 16:04 ` Thomas Monjalon
0 siblings, 1 reply; 31+ messages in thread
From: dave @ 2020-02-21 15:53 UTC (permalink / raw)
To: 'Mattias Rönnblom', 'Thomas Monjalon',
'Jerin Jacob'
Cc: 'Jerin Jacob', 'Ray Kinsella', 'dpdk-dev',
'Prasun Kapoor', 'Nithin Dabilpuram',
'Kiran Kumar K', 'Pavan Nikhilesh',
'Narayana Prasad',
nsaxena, sshankarnara, 'Honnappa Nagarahalli',
'David Marchand', 'Ferruh Yigit',
'Andrew Rybchenko', 'Ajit Khaparde',
'Ye, Xiaolong', 'Raslan Darawsheh',
'Maxime Coquelin', 'Akhil Goyal',
'Cristian Dumitrescu', 'John McNamara',
'Richardson, Bruce', 'Anatoly Burakov',
'Gavin Hu', 'David Christensen',
'Ananyev, Konstantin', 'Pallavi Kadam',
'Olivier Matz', 'Gage Eads',
'Rao, Nikhil', 'Erik Gabriel Carrillo',
'Hemant Agrawal', 'Artem V. Andreev',
'Stephen Hemminger', 'Shahaf Shuler',
'Wiles, Keith', 'Jasvinder Singh',
'Vladimir Medvedkin',
techboard, 'Stephen Hemminger'
I can share a data point with respect to constructing a reasonably functional network stack. Original work on the project which eventually became fd.io vpp started in 2002. I've worked on the vpp code base full-time for 18 years.
In terms of lines of code: the vpp graph subsystem is a minuscule fraction of the project as a whole. We've rewritten performance-critical bits of the vpp netstack multiple times.
FWIW... Dave
-----Original Message-----
From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Sent: Friday, February 21, 2020 10:39 AM
To: Thomas Monjalon <thomas@monjalon.net>; Jerin Jacob <jerinjacobk@gmail.com>
Cc: Jerin Jacob <jerinj@marvell.com>; Ray Kinsella <mdr@ashroe.eu>; dpdk-dev <dev@dpdk.org>; Prasun Kapoor <pkapoor@marvell.com>; Nithin Dabilpuram <ndabilpuram@marvell.com>; Kiran Kumar K <kirankumark@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>; Narayana Prasad <pathreya@marvell.com>; nsaxena@marvell.com; sshankarnara@marvell.com; Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>; David Marchand <david.marchand@redhat.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Andrew Rybchenko <arybchenko@solarflare.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Ye, Xiaolong <xiaolong.ye@intel.com>; Raslan Darawsheh <rasland@mellanox.com>; Maxime Coquelin <maxime.coquelin@redhat.com>; Akhil Goyal <akhil.goyal@nxp.com>; Cristian Dumitrescu <cristian.dumitrescu@intel.com>; John McNamara <john.mcnamara@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Anatoly Burakov <anatoly.burakov@intel.com>; Gavin Hu <gavin.hu@arm.com>; David Christensen <drc@linux.vnet.ibm.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Pallavi Kadam <pallavi.kadam@intel.com>; Olivier Matz <olivier.matz@6wind.com>; Gage Eads <gage.eads@intel.com>; Rao, Nikhil <nikhil.rao@intel.com>; Erik Gabriel Carrillo <erik.g.carrillo@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>; Artem V. Andreev <artem.andreev@oktetlabs.ru>; Stephen Hemminger <sthemmin@microsoft.com>; Shahaf Shuler <shahafs@mellanox.com>; Wiles, Keith <keith.wiles@intel.com>; Jasvinder Singh <jasvinder.singh@intel.com>; Vladimir Medvedkin <vladimir.medvedkin@intel.com>; techboard@dpdk.org; Stephen Hemminger <stephen@networkplumber.org>; dave@barachs.net
Subject: Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
On 2020-02-21 12:10, Thomas Monjalon wrote:
> 21/02/2020 11:30, Jerin Jacob:
>> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>> Thanks for starting this discussion now. It is an interesting
>>> discussion. Some thoughts below.
>>> We can decide based on community consensus and follow a single rule
>>> across the components.
>> Thomas,
>>
>> No feedback yet on the below questions.
> Indeed. I was waiting for opinions from others.
>
>> If there is no consensus in the email thread, I would like to propose this
>> topic for the 26th Feb TB meeting.
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.
>
>
>>>> 17/02/2020 08:19, Jerin Jacob:
>>>>> I got initial comments from Ray and Stephen on this RFC[1]. Thanks
>>>>> for the comments.
>>>>>
>>>>> Is anyone else planning to have an architecture level or API usage
>>>>> level review or any review of other top-level aspects?
>>>> If we add rte_graph to DPDK, we will have 2 similar libraries.
>>>>
>>>> I already proposed several times to move rte_pipeline in a separate
>>>> repository for two reasons:
>>>> 1/ it is acting at a higher API layer level
>>> We need to define what the higher-layer API is. Is it processing beyond L2?
> My opinion is that any API which is implemented differently for
> different hardware should be in DPDK.
> Hardware devices can offload protocol processing higher than L2, so L2
> does not look to be a good limit from my point of view.
>
If you assume the capability of networking hardware will grow, and you want to unify different networking hardware with varying capabilities (and also include software-only implementations) under one API, then you might well end up growing DPDK into the software stack you mention below. Soft implementations of complex protocols will require operating system-like support services like timers, RCU, various lock-less data structures, deferred work mechanism, counter handling frameworks, control plane interfaces, etc. Coupling should always be avoided of course, but DPDK would inevitably no longer be a pick-and-choose smörgåsbord library - at least as long as the consumer wants to utilize this higher-layer functionality.
This would make DPDK more of a packet processing run-time or a special-purpose networking operating system than the "bunch of Ethernet drivers in user space" it started out as.
I'm not saying that's a bad thing. In fact, I think it sounds like an interesting option, although also a very challenging one. From what I can see, DPDK has already set out along this route. Whether this is a conscious decision or not, I don't know. Add to this, if Linux expands further with AF_XDP-like features, beyond simply packet I/O, it might not only try to take over DPDK's original concerns, but also more of the current ones.
>>> In the context of the graph library, it is a framework, not using any of
>>> the subsystem APIs other than EAL, and it is under lib/librte_graph.
>>> The nodes library uses the graph library and other subsystem components such as
>>> ethdev, and it is under lib/librte_node/
>>>
>>>
>>> Another interesting question would be: what would be the issue with DPDK
>>> supporting beyond L2, or higher-level protocols?
> Definitely higher than L2 is OK in DPDK as long as it is related to
> hardware capabilities, not software stack (which can be a DPDK application).
>
>
>>>> 2/ there can be different solutions in this layer
>>> Is there any issue with that?
>>> There is overlap between the distributor library and eventdev as well,
>>> and between ethdev and the SW traffic manager libraries too. That list goes on.
> I don't know how much it is an issue.
> But I think it shows that at least one implementation is not generic enough.
>
>
>>>> I think 1/ was commonly agreed in the community.
>>>> Now we see one more proof of the reason 2/.
>>>>
>>>> I believe it is time to move rte_pipeline (Packet Framework) in a
>>>> separate repository, and welcome rte_graph as well in another
>>>> separate repository.
>>> What would be gain out of this?
> The gain is to be clear about what should be the focus for
> contributors working on the main DPDK repository.
> What is expected to be maintained, tested, etc.
>
>
>>> My concerns are:
>>> # Like packet-gen, the new code will be filled with unnecessary DPDK
>>> version checks and unnecessary compatibility issues.
>>> # Anything that is not in the main dpdk repo is a second-class citizen.
>>> # Customers have the pain of using two repos and two releases.
>>> Internally, it can be two different repos, but the release needs to go
>>> through one repo.
>>>
>>> If we are focusing ONLY on the driver API, then how can DPDK grow
>>> further? If the Linux kernel had been thought of as just the core kernel,
>>> with networking/storage as different repos, it would not have grown as it has.
> Linux kernel is selecting what can enter in the focus or not.
> And I wonder what is the desire of extending/growing the scope of a library?
>
>
>>> What is the real concern? Maintenance?
>>>
>>>> I think the original DPDK repository should focus on low-level
>>>> features which offer hardware offloads and optimizations.
>>> The nodes can be vendor-specific to optimize the specific use cases.
>>> As I mentioned in the cover letter,
>>>
>>> "
>>> 2) Based on our experience, NPU HW accelerators are so different from
>>> one vendor to another. Going forward, we believe API
>>> abstraction may not be enough to abstract the differences in HW.
>>> Vendor-specific nodes can abstract the HW differences and reuse the generic nodes as needed.
>>> This would help both the silicon vendors and DPDK end users.
>>> "
>>>
>>> Thoughts from other folks?
>>>
>>>
>>>> Consuming the low-level API in different abstractions, and building
>>>> applications, should be done on top of dpdk.git.
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 11:10 ` Thomas Monjalon
2020-02-21 15:38 ` Mattias Rönnblom
@ 2020-02-21 15:56 ` Jerin Jacob
2020-02-21 16:14 ` Thomas Monjalon
1 sibling, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-21 15:56 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 21/02/2020 11:30, Jerin Jacob:
> > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > Thanks for starting this discussion now. It is an interesting
> > > discussion. Some thoughts below.
> > > We can decide based on community consensus and follow a single rule
> > > across the components.
> >
> > Thomas,
> >
> > No feedback yet on the below questions.
>
> Indeed. I was waiting for opinions from others.
Me too.
>
> > If there is no consensus in the email thread, I would like to propose this topic
> > for the 26th Feb TB meeting.
>
> I gave my opinion below.
> If a consensus cannot be reached, I agree with the request to the techboard.
OK.
>
>
> > > > 17/02/2020 08:19, Jerin Jacob:
> > > > > I got initial comments from Ray and Stephen on this RFC[1]. Thanks for
> > > > > the comments.
> > > > >
> > > > > Is anyone else planning to have an architecture level or API usage
> > > > > level review or any review of other top-level aspects?
> > > >
> > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > >
> > > > I already proposed several times to move rte_pipeline in a separate
> > > > repository for two reasons:
> > > > 1/ it is acting at a higher API layer level
> > >
> > > We need to define what the higher-layer API is. Is it processing beyond L2?
>
> My opinion is that any API which is implemented differently
> for different hardware should be in DPDK.
We need to define the treatment of SIMD optimization (not HW-specific, but
architecture-specific)
as well, since the graph and node libraries will have SIMD
optimizations too.
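As an illustration of what that architecture-specific treatment could look
like inside a node, a process function may select a SIMD path at compile
time; the helper names here (ip4_lookup_neon/sse/scalar) are hypothetical,
not code from this RFC:

    static uint16_t
    ip4_lookup_process(struct rte_graph *graph, struct rte_node *node,
                       void **objs, uint16_t nb_objs)
    {
    #if defined(__ARM_NEON)
            /* AArch64 NEON path */
            return ip4_lookup_neon(graph, node, objs, nb_objs);
    #elif defined(__SSE4_2__)
            /* x86 SSE path */
            return ip4_lookup_sse(graph, node, objs, nb_objs);
    #else
            /* portable scalar fallback */
            return ip4_lookup_scalar(graph, node, objs, nb_objs);
    #endif
    }

The point is that such code is architecture-specific rather than
HW-specific, so any repository-split policy has to classify it too.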
In general, if the above policy is enforced, we need to split DPDK as below:
dpdk.git
----------
librte_compressdev
librte_bbdev
librte_eventdev
librte_pci
librte_rawdev
librte_eal
librte_security
librte_mempool
librte_mbuf
librte_cryptodev
librte_ethdev
other repo(s).
----------------
librte_cmdline
librte_cfgfile
librte_bitratestats
librte_efd
librte_latencystats
librte_kvargs
librte_jobstats
librte_gso
librte_gro
librte_flow_classify
librte_pipeline
librte_net
librte_metrics
librte_meter
librte_member
librte_table
librte_stack
librte_sched
librte_rib
librte_reorder
librte_rcu
librte_power
librte_distributor
librte_bpf
librte_ip_frag
librte_hash
librte_fib
librte_timer
librte_telemetry
librte_port
librte_pdump
librte_kni
librte_acl
librte_vhost
librte_ring
librte_lpm
librte_ipsec
> Hardware devices can offload protocol processing higher than L2,
> so L2 does not look to be a good limit from my point of view.
The nodes may use HW-specific optimizations if needed.
>
>
> > > In the context of the graph library, it is a framework, not using any of
> > > the subsystem APIs
> > > other than EAL, and it is under lib/librte_graph.
> > > The nodes library uses the graph library and other subsystem components such as
> > > ethdev, and it is under lib/librte_node/
> > >
> > >
> > > Another interesting question would be: what would be the issue with DPDK
> > > supporting beyond L2, or higher-level protocols?
>
> Definitely higher than L2 is OK in DPDK as long as it is related to hardware
> capabilities, not software stack (which can be a DPDK application).
The software stack is a vague term. librte_ipsec could be a software stack.
>
>
> > > > 2/ there can be different solutions in this layer
> > >
> > > Is there any issue with that?
> > > There is overlap between the distributor library and eventdev as well,
> > > and between ethdev and the SW traffic manager libraries too. That list goes on.
>
> I don't know how much it is an issue.
> But I think it shows that at least one implementation is not generic enough.
I don't think the distributor library is there because eventdev is not generic.
In fact, the SW traffic manager is hooked to ethdev as well. It can work as both.
>
>
> > > > I think 1/ was commonly agreed in the community.
> > > > Now we see one more proof of the reason 2/.
> > > >
> > > > I believe it is time to move rte_pipeline (Packet Framework)
> > > > in a separate repository, and welcome rte_graph as well in another
> > > > separate repository.
> > >
> > > What would be gain out of this?
>
> The gain is to be clear about what should be the focus for contributors
> working on the main DPDK repository.
Not sure how it causes a loss of focus if there is other code in the repo.
In that case, the Linux kernel is not focused at all.
> What is expected to be maintained, tested, etc.
We need to maintain and test the code in the OTHER dpdk repo(s) as well.
>
>
> > > My concerns are:
> > > # Like packet-gen, the new code will be filled with unnecessary DPDK
> > > version checks
> > > and unnecessary compatibility issues.
> > > # Anything that is not in the main dpdk repo is a second-class citizen.
> > > # Customers have the pain of using two repos and two releases. Internally,
> > > it can be two different
> > > repos, but the release needs to go through one repo.
> > >
> > > If we are focusing ONLY on the driver API, then how can DPDK grow
> > > further? If the Linux kernel
> > > had been thought of as just the core kernel, with networking/storage as
> > > different repos, it would
> > > not have grown as it has.
>
> Linux kernel is selecting what can enter in the focus or not.
Sorry. This sentence is not very clear to me.
> And I wonder what is the desire of extending/growing the scope of a library?
If HW/arch-accelerated packet processing is in the scope of DPDK, this
library shall
come under that.
IMO, as long as there is a maintainer who can send pull requests in time
and contribute to
the technical decisions of the specific library, I think that should be enough
to add it to dpdk.git.
IMO, we cannot get away from more contributions to dpdk. Assume some set of
libraries gets pulled out of the main dpdk.git for some reason. One can still make
new releases, say "dpdk-next", including dpdk.git and the various libraries.
Is that something we are looking to enable as an end solution for
distros and/or
end-users?
>
>
> > > What is the real concern? Maintenance?
> > >
> > > > I think the original DPDK repository should focus on low-level features
> > > > which offer hardware offloads and optimizations.
> > >
> > > The nodes can be vendor-specific to optimize the specific use cases.
> > > As I mentioned in the cover letter,
> > >
> > > "
> > > 2) Based on our experience, NPU HW accelerators are so different from one vendor
> > > to another. Going forward, we believe API abstraction may not be enough to
> > > abstract the differences in HW. Vendor-specific nodes can abstract the HW
> > > differences and reuse the generic nodes as needed.
> > > This would help both the silicon vendors and DPDK end users.
> > > "
> > >
> > > Thoughts from other folks?
> > >
> > >
> > > > Consuming the low-level API in different abstractions,
> > > > and building applications, should be done on top of dpdk.git.
>
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 15:53 ` dave
@ 2020-02-21 16:04 ` Thomas Monjalon
0 siblings, 0 replies; 31+ messages in thread
From: Thomas Monjalon @ 2020-02-21 16:04 UTC (permalink / raw)
To: dave
Cc: 'Mattias Rönnblom', 'Jerin Jacob',
'Jerin Jacob', 'Ray Kinsella', 'dpdk-dev',
'Prasun Kapoor', 'Nithin Dabilpuram',
'Kiran Kumar K', 'Pavan Nikhilesh',
'Narayana Prasad',
nsaxena, sshankarnara, 'Honnappa Nagarahalli',
'David Marchand', 'Ferruh Yigit',
'Andrew Rybchenko', 'Ajit Khaparde',
'Ye, Xiaolong', 'Raslan Darawsheh',
'Maxime Coquelin', 'Akhil Goyal',
'Cristian Dumitrescu', 'John McNamara',
'Richardson, Bruce', 'Anatoly Burakov',
'Gavin Hu', 'David Christensen',
'Ananyev, Konstantin', 'Pallavi Kadam',
'Olivier Matz', 'Gage Eads',
'Rao, Nikhil', 'Erik Gabriel Carrillo',
'Hemant Agrawal', 'Artem V. Andreev',
'Stephen Hemminger', 'Shahaf Shuler',
'Wiles, Keith', 'Jasvinder Singh',
'Vladimir Medvedkin',
techboard, 'Stephen Hemminger'
21/02/2020 16:53, dave@barachs.net:
> I can share a data point with respect to constructing a reasonably functional network stack. Original work on the project which eventually became fd.io vpp started in 2002. I've worked on the vpp code base full-time for 18 years.
>
> In terms of lines of code: the vpp graph subsystem is a minuscule fraction of the project as a whole. We've rewritten performance-critical bits of the vpp netstack multiple times.
Please could you elaborate?
It would be nice to read more about your thoughts and experience.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 15:56 ` Jerin Jacob
@ 2020-02-21 16:14 ` Thomas Monjalon
2020-02-22 9:05 ` Jerin Jacob
0 siblings, 1 reply; 31+ messages in thread
From: Thomas Monjalon @ 2020-02-21 16:14 UTC (permalink / raw)
To: Jerin Jacob
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
21/02/2020 16:56, Jerin Jacob:
> On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > 21/02/2020 11:30, Jerin Jacob:
> > > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > > >
> > > > > I already proposed several times to move rte_pipeline in a separate
> > > > > repository for two reasons:
> > > > > 1/ it is acting at a higher API layer level
> > > >
> > > > > We need to define what the higher-layer API is. Is it processing beyond L2?
> >
> > My opinion is that any API which is implemented differently
> > for different hardware should be in DPDK.
>
> We need to define the treatment of SIMD optimization (not HW-specific, but
> architecture-specific)
> as well, since the graph and node libraries will have SIMD
> optimizations too.
I think SIMD optimization is generic to any performance-related project,
not specific to DPDK.
> In general, if the above policy is enforced, we need to split DPDK as below:
> dpdk.git
> ----------
> librte_compressdev
> librte_bbdev
> librte_eventdev
> librte_pci
> librte_rawdev
> librte_eal
> librte_security
> librte_mempool
> librte_mbuf
> librte_cryptodev
> librte_ethdev
>
> other repo(s).
> ----------------
> librte_cmdline
> librte_cfgfile
> librte_bitratestats
> librte_efd
> librte_latencystats
> librte_kvargs
> librte_jobstats
> librte_gso
> librte_gro
> librte_flow_classify
> librte_pipeline
> librte_net
> librte_metrics
> librte_meter
> librte_member
> librte_table
> librte_stack
> librte_sched
> librte_rib
> librte_reorder
> librte_rcu
> librte_power
> librte_distributor
> librte_bpf
> librte_ip_frag
> librte_hash
> librte_fib
> librte_timer
> librte_telemetry
> librte_port
> librte_pdump
> librte_kni
> librte_acl
> librte_vhost
> librte_ring
> librte_lpm
> librte_ipsec
I think it is a fair conclusion of the scope I am arguing, yes.
> > Hardware devices can offload protocol processing higher than L2,
> > so L2 does not look to be a good limit from my point of view.
>
> The nodes may use HW-specific optimizations if needed.
That's an interesting argument.
> > > > In the context of the graph library, it is a framework, not using any of
> > > > the subsystem APIs
> > > > other than EAL, and it is under lib/librte_graph.
> > > > The nodes library uses the graph library and other subsystem components such as
> > > > ethdev, and it is under lib/librte_node/
> > > >
> > > >
> > > > Another interesting question would be: what would be the issue with DPDK
> > > > supporting beyond L2, or higher-level protocols?
> >
> > Definitely higher than L2 is OK in DPDK as long as it is related to hardware
> > capabilities, not software stack (which can be a DPDK application).
>
> The software stack is a vague term. librte_ipsec could be a software stack.
I agree.
> > > > > 2/ there can be different solutions in this layer
> > > >
> > > > Is there any issue with that?
> > > > There is overlap between the distributor library and eventdev as well,
> > > > and between ethdev and the SW traffic manager libraries too. That list goes on.
> >
> > I don't know how much it is an issue.
> > But I think it shows that at least one implementation is not generic enough.
>
> I don't think the distributor library is there because eventdev is not generic.
> In fact, the SW traffic manager is hooked to ethdev as well. It can work as both.
> >
> >
> > > > > I think 1/ was commonly agreed in the community.
> > > > > Now we see one more proof of the reason 2/.
> > > > >
> > > > > I believe it is time to move rte_pipeline (Packet Framework)
> > > > > in a separate repository, and welcome rte_graph as well in another
> > > > > separate repository.
> > > >
> > > > What would be gain out of this?
> >
> > The gain is to be clear about what should be the focus for contributors
> > working on the main DPDK repository.
>
> Not sure how it causes a loss of focus if there is other code in the repo.
> In that case, the Linux kernel is not focused at all.
I see your point.
> > What is expected to be maintained, tested, etc.
>
> We need to maintain and test the code in the OTHER dpdk repo(s) as well.
Yes but the ones responsible are not the same.
> > > > My concerns are:
> > > > # Like packet-gen, the new code will be filled with unnecessary DPDK
> > > > version checks
> > > > and unnecessary compatibility issues.
> > > > # Anything that is not in the main dpdk repo is a second-class citizen.
> > > > # Customers have the pain of using two repos and two releases. Internally,
> > > > it can be two different
> > > > repos, but the release needs to go through one repo.
> > > >
> > > > If we are focusing ONLY on the driver API, then how can DPDK grow
> > > > further? If the Linux kernel
> > > > had been thought of as just the core kernel, with networking/storage as
> > > > different repos, it would
> > > > not have grown as it has.
> >
> > Linux kernel is selecting what can enter in the focus or not.
>
> Sorry. This sentence is not very clear to me.
I mean that not everything proposed to the Linux community is merged.
> > And I wonder what is the desire of extending/growing the scope of a library?
>
> If HW/arch-accelerated packet processing is in the scope of DPDK, this
> library shall
> come under that.
>
> IMO, as long as there is a maintainer who can send pull requests in time
> and contribute to
> the technical decisions of the specific library, I think that should be enough
> to add it to dpdk.git.
Yes, that's fair.
> IMO, we cannot get away from more contributions to dpdk. Assume some set of
> libraries gets pulled out of the main dpdk.git for some reason. One can still make
> new releases, say "dpdk-next", including dpdk.git and the various libraries.
> Is that something we are looking to enable as an end solution for
> distros and/or
> end-users?
>
>
> > > > What is the real concern? Maintenance?
> > > >
> > > > > I think the original DPDK repository should focus on low-level features
> > > > > which offer hardware offloads and optimizations.
> > > >
> > > > The nodes can be vendor-specific to optimize the specific use cases.
> > > > As I mentioned in the cover letter,
> > > >
> > > > "
> > > > 2) Based on our experience, NPU HW accelerators are so different from one vendor
> > > > to another. Going forward, we believe API abstraction may not be enough to
> > > > abstract the differences in HW. Vendor-specific nodes can abstract the HW
> > > > differences and reuse the generic nodes as needed.
> > > > This would help both the silicon vendors and DPDK end users.
> > > > "
> > > >
> > > > Thoughts from other folks?
> > > >
> > > >
> > > > > Consuming the low-level API in different abstractions,
> > > > > and building applications, should be done on top of dpdk.git.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-21 16:14 ` Thomas Monjalon
@ 2020-02-22 9:05 ` Jerin Jacob
2020-02-22 9:52 ` Thomas Monjalon
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-22 9:05 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
On Fri, Feb 21, 2020 at 9:44 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 21/02/2020 16:56, Jerin Jacob:
> > On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > 21/02/2020 11:30, Jerin Jacob:
> > > > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > > > >
> > > > > > I already proposed several times to move rte_pipeline in a separate
> > > > > > repository for two reasons:
> > > > > > 1/ it is acting at a higher API layer level
> > > > >
> > > > > We need to define what the higher-layer API is. Is it processing beyond L2?
> > >
> > > My opinion is that any API which is implemented differently
> > > for different hardware should be in DPDK.
> >
> > We need to define the treatment of SIMD optimization (not HW-specific, but
> > architecture-specific)
> > as well, since the graph and node libraries will have SIMD
> > optimizations too.
>
> I think SIMD optimization is generic to any performance-related project,
> not specific to DPDK.
>
>
> > In general, if the above policy is enforced, we need to split DPDK as below:
> > dpdk.git
> > ----------
> > librte_compressdev
> > librte_bbdev
> > librte_eventdev
> > librte_pci
> > librte_rawdev
> > librte_eal
> > librte_security
> > librte_mempool
> > librte_mbuf
> > librte_cryptodev
> > librte_ethdev
> >
> > other repo(s).
> > ----------------
> > librte_cmdline
> > librte_cfgfile
> > librte_bitratestats
> > librte_efd
> > librte_latencystats
> > librte_kvargs
> > librte_jobstats
> > librte_gso
> > librte_gro
> > librte_flow_classify
> > librte_pipeline
> > librte_net
> > librte_metrics
> > librte_meter
> > librte_member
> > librte_table
> > librte_stack
> > librte_sched
> > librte_rib
> > librte_reorder
> > librte_rcu
> > librte_power
> > librte_distributor
> > librte_bpf
> > librte_ip_frag
> > librte_hash
> > librte_fib
> > librte_timer
> > librte_telemetry
> > librte_port
> > librte_pdump
> > librte_kni
> > librte_acl
> > librte_vhost
> > librte_ring
> > librte_lpm
> > librte_ipsec
>
> I think it is a fair conclusion of the scope I am arguing, yes.
OK. See below.
> > > What is expected to be maintained, tested, etc.
> >
> > We need to maintain and test the code in the OTHER dpdk repo(s) as well.
>
> Yes but the ones responsible are not the same.
I see your point. Can I interpret it as you would like to NOT take
responsibility
for the SW libraries (items enumerated in the second list)?
I think the main question would be how it will be delivered to distros
and/or end-users,
and what will be part of the dpdk release.
I can think of two options. Maybe distro folks have a better view on this.
option 1:
- Split dpdk into dpdk-core.git, dpdk-algo.git, etc. based on the
functionalities and maintainers' availability.
- Follow the existing release cadence and deliver a single release tarball
with content from the above repos.
option 2:
- Introduce more subtrees (dpdk-next-algo.git etc.) based on the
functionalities and maintainers' availability.
- Follow the existing release cadence and have pull requests to the main
dpdk.git, just like the Linux kernel or the existing scheme of things.
I am for option 2.
NOTE: For this new graph and node library, I would like to make a new
subtree in the existing scheme of
things so that it will NOT be a burden for you to manage.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-22 9:05 ` Jerin Jacob
@ 2020-02-22 9:52 ` Thomas Monjalon
2020-02-22 10:24 ` Jerin Jacob
0 siblings, 1 reply; 31+ messages in thread
From: Thomas Monjalon @ 2020-02-22 9:52 UTC (permalink / raw)
To: Jerin Jacob
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
22/02/2020 10:05, Jerin Jacob:
> On Fri, Feb 21, 2020 at 9:44 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > 21/02/2020 16:56, Jerin Jacob:
> > > On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > 21/02/2020 11:30, Jerin Jacob:
> > > > > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > > > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > > > > >
> > > > > > > I already proposed several times to move rte_pipeline in a separate
> > > > > > > repository for two reasons:
> > > > > > > 1/ it is acting at a higher API layer level
> > > > > >
> > > > > > We need to define what the higher-layer API is. Is it processing beyond L2?
> > > >
> > > > My opinion is that any API which is implemented differently
> > > > for different hardware should be in DPDK.
> > >
> > > We need to define the treatment of SIMD optimization (not HW-specific, but
> > > architecture-specific)
> > > as well, since the graph and node libraries will have SIMD
> > > optimizations too.
> >
> > I think SIMD optimization is generic to any performance-related project,
> > not specific to DPDK.
> >
> >
> > > In general, if the above policy is enforced, we need to split DPDK as below:
> > > dpdk.git
> > > ----------
> > > librte_compressdev
> > > librte_bbdev
> > > librte_eventdev
> > > librte_pci
> > > librte_rawdev
> > > librte_eal
> > > librte_security
> > > librte_mempool
> > > librte_mbuf
> > > librte_cryptodev
> > > librte_ethdev
> > >
> > > other repo(s).
> > > ----------------
> > > librte_cmdline
> > > librte_cfgfile
> > > librte_bitratestats
> > > librte_efd
> > > librte_latencystats
> > > librte_kvargs
> > > librte_jobstats
> > > librte_gso
> > > librte_gro
> > > librte_flow_classify
> > > librte_pipeline
> > > librte_net
> > > librte_metrics
> > > librte_meter
> > > librte_member
> > > librte_table
> > > librte_stack
> > > librte_sched
> > > librte_rib
> > > librte_reorder
> > > librte_rcu
> > > librte_power
> > > librte_distributor
> > > librte_bpf
> > > librte_ip_frag
> > > librte_hash
> > > librte_fib
> > > librte_timer
> > > librte_telemetry
> > > librte_port
> > > librte_pdump
> > > librte_kni
> > > librte_acl
> > > librte_vhost
> > > librte_ring
> > > librte_lpm
> > > librte_ipsec
> >
> > I think it is a fair conclusion of the scope I am arguing, yes.
>
> OK. See below.
>
> > > > What is expected to be maintained, tested, etc.
> > >
> > > We need to maintain and test the code in the OTHER dpdk repo(s) as well.
> >
> > Yes but the ones responsible are not the same.
>
> I see your point. Can I interpret it as you would like to NOT take
> responsibility
> for the SW libraries (items enumerated in the second list)?
It's not only about me. This is a community decision.
> I think the main question would be how it will be delivered to distros
> and/or end-users,
> and what will be part of the dpdk release.
>
> I can think of two options. Maybe distro folks have a better view on this.
>
> option 1:
> - Split dpdk into dpdk-core.git, dpdk-algo.git, etc. based on the
> functionalities and maintainers' availability.
> - Follow the existing release cadence and deliver a single release tarball
> with content from the above repos.
>
> option 2:
> - Introduce more subtrees (dpdk-next-algo.git etc.) based on the
> functionalities and maintainers' availability.
> - Follow the existing release cadence and have pull requests to the main
> dpdk.git, just like the Linux kernel or the existing scheme of things.
>
> I am for option 2.
>
> NOTE: For this new graph and node library, I would like to make a new
> subtree in the existing scheme of
> things so that it will NOT be a burden for you to manage.
Option 2 makes maintainers' lives easier.
Keeping all libraries in the same repository allows having
a unique release and a central place for the apps and docs.
Option 1 may make contributors' lives easier, if we consider that
adding new libraries can make contributions harder in case of dependencies.
Option 1 also makes repositories smaller, so maybe easier to approach.
It makes it easier to fully validate the testing and quality of a repository.
Having separate packages makes it easier to select what is distributed and supported.
After years of thinking about the scope of the DPDK repository,
I am still not sure which solution is best.
I really would like to see more opinions, thanks.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-22 9:52 ` Thomas Monjalon
@ 2020-02-22 10:24 ` Jerin Jacob
2020-02-24 10:59 ` Ray Kinsella
0 siblings, 1 reply; 31+ messages in thread
From: Jerin Jacob @ 2020-02-22 10:24 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Prasun Kapoor,
Nithin Dabilpuram, Kiran Kumar K, Pavan Nikhilesh,
Narayana Prasad, nsaxena, sshankarnara, Honnappa Nagarahalli,
David Marchand, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
Ye, Xiaolong, Raslan Darawsheh, Maxime Coquelin, Akhil Goyal,
Cristian Dumitrescu, John McNamara, Richardson, Bruce,
Anatoly Burakov, Gavin Hu, David Christensen, Ananyev,
Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao, Nikhil,
Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
On Sat, Feb 22, 2020 at 3:23 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/02/2020 10:05, Jerin Jacob:
> > On Fri, Feb 21, 2020 at 9:44 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > 21/02/2020 16:56, Jerin Jacob:
> > > > On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > 21/02/2020 11:30, Jerin Jacob:
> > > > > > On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > > > > > On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > > > > If we add rte_graph to DPDK, we will have 2 similar libraries.
> > > > > > > >
> > > > > > > > I already proposed several times to move rte_pipeline in a separate
> > > > > > > > repository for two reasons:
> > > > > > > > 1/ it is acting at a higher API layer level
> > > > > > >
> > > > > > > We need to define what the higher-layer API is. Is it processing beyond L2?
> > > > >
> > > > > My opinion is that any API which is implemented differently
> > > > > for different hardware should be in DPDK.
> > > >
> > > > We need to define the treatment of SIMD optimization (not HW-specific, but
> > > > architecture-specific)
> > > > as well, since the graph and node libraries will have SIMD
> > > > optimizations too.
> > >
> > > I think SIMD optimization is generic to any performance-related project,
> > > not specific to DPDK.
> > >
> > >
> > > > In general, if the above policy is enforced, we need to split DPDK as below:
> > > > dpdk.git
> > > > ----------
> > > > librte_compressdev
> > > > librte_bbdev
> > > > librte_eventdev
> > > > librte_pci
> > > > librte_rawdev
> > > > librte_eal
> > > > librte_security
> > > > librte_mempool
> > > > librte_mbuf
> > > > librte_cryptodev
> > > > librte_ethdev
> > > >
> > > > other repo(s).
> > > > ----------------
> > > > librte_cmdline
> > > > librte_cfgfile
> > > > librte_bitratestats
> > > > librte_efd
> > > > librte_latencystats
> > > > librte_kvargs
> > > > librte_jobstats
> > > > librte_gso
> > > > librte_gro
> > > > librte_flow_classify
> > > > librte_pipeline
> > > > librte_net
> > > > librte_metrics
> > > > librte_meter
> > > > librte_member
> > > > librte_table
> > > > librte_stack
> > > > librte_sched
> > > > librte_rib
> > > > librte_reorder
> > > > librte_rcu
> > > > librte_power
> > > > librte_distributor
> > > > librte_bpf
> > > > librte_ip_frag
> > > > librte_hash
> > > > librte_fib
> > > > librte_timer
> > > > librte_telemetry
> > > > librte_port
> > > > librte_pdump
> > > > librte_kni
> > > > librte_acl
> > > > librte_vhost
> > > > librte_ring
> > > > librte_lpm
> > > > librte_ipsec
> > >
> > > I think it is a fair conclusion of the scope I am arguing, yes.
> >
> > OK. See below.
> >
> > > > > What is expected to be maintained, tested, etc.
> > > >
> > > > We need to maintain and test the code in the OTHER dpdk repo(s) as well.
> > >
> > > Yes but the ones responsible are not the same.
> >
> > I see your point. Can I interpret it as you would like to NOT take
> > responsibility
> > for the SW libraries (items enumerated in the second list)?
>
> It's not only about me. This is a community decision.
OK. Let's wait for community feedback.
Probably we will discuss more in the public TB meeting on 26th Feb.
>
>
> > I think the main question would be how it will be delivered to distros
> > and/or end-users,
> > and what will be part of the dpdk release.
> >
> > I can think of two options. Maybe distro folks have a better view on this.
> >
> > option 1:
> > - Split dpdk into dpdk-core.git, dpdk-algo.git, etc. based on the
> > functionalities and maintainers' availability.
> > - Follow the existing release cadence and deliver a single release tarball
> > with content from the above repos.
> >
> > option 2:
> > - Introduce more subtrees (dpdk-next-algo.git etc.) based on the
> > functionalities and maintainers' availability.
> > - Follow the existing release cadence and have pull requests to the main
> > dpdk.git, just like the Linux kernel or the existing scheme of things.
> >
> > I am for option 2.
> >
> > NOTE: For this new graph and node library, I would like to make a new
> > subtree in the existing scheme of
> > things so that it will NOT be a burden for you to manage.
>
> Option 2 makes maintainers' lives easier.
> Keeping all libraries in the same repository allows having
> a unique release and a central place for the apps and docs.
>
> Option 1 may make contributors' lives easier, if we consider that
> adding new libraries can make contributions harder in case of dependencies.
> Option 1 also makes repositories smaller, so maybe easier to approach.
> It makes it easier to fully validate the testing and quality of a repository.
> Having separate packages makes it easier to select what is distributed and supported.
If the final dpdk release tarball looks the same for option 1 and option 2,
then I think
option 1 adds overhead to manage intra-repo dependencies.
I agree with Thomas: it is better to decide as a community what
direction we need
to take and align existing and new libraries with that scheme.
>
> After years of thinking about the scope of the DPDK repository,
> I am still not sure which solution is best.
> I really would like to see more opinions, thanks.
Yes.
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-22 10:24 ` Jerin Jacob
@ 2020-02-24 10:59 ` Ray Kinsella
0 siblings, 0 replies; 31+ messages in thread
From: Ray Kinsella @ 2020-02-24 10:59 UTC (permalink / raw)
To: Jerin Jacob, Thomas Monjalon
Cc: Jerin Jacob, dpdk-dev, Prasun Kapoor, Nithin Dabilpuram,
Kiran Kumar K, Pavan Nikhilesh, Narayana Prasad, nsaxena,
sshankarnara, Honnappa Nagarahalli, David Marchand, Ferruh Yigit,
Andrew Rybchenko, Ajit Khaparde, Ye, Xiaolong, Raslan Darawsheh,
Maxime Coquelin, Akhil Goyal, Cristian Dumitrescu, John McNamara,
Richardson, Bruce, Anatoly Burakov, Gavin Hu, David Christensen,
Ananyev, Konstantin, Pallavi Kadam, Olivier Matz, Gage Eads, Rao,
Nikhil, Erik Gabriel Carrillo, Hemant Agrawal, Artem V. Andreev,
Stephen Hemminger, Shahaf Shuler, Wiles, Keith,
Mattias Rönnblom, Jasvinder Singh, Vladimir Medvedkin,
techboard, Stephen Hemminger, dave
On 22/02/2020 10:24, Jerin Jacob wrote:
> On Sat, Feb 22, 2020 at 3:23 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>
>> 22/02/2020 10:05, Jerin Jacob:
>>> On Fri, Feb 21, 2020 at 9:44 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>>> 21/02/2020 16:56, Jerin Jacob:
>>>>> On Fri, Feb 21, 2020 at 4:40 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>>>>> 21/02/2020 11:30, Jerin Jacob:
>>>>>>> On Mon, Feb 17, 2020 at 4:28 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>>>>>>> On Mon, Feb 17, 2020 at 2:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>>>>>>>> If we add rte_graph to DPDK, we will have 2 similar libraries.
>>>>>>>>>
>>>>>>>>> I already proposed several times to move rte_pipeline in a separate
>>>>>>>>> repository for two reasons:
>>>>>>>>> 1/ it is acting at a higher API layer level
>>>>>>>>
>>>>>>>> We need to define what the higher layer API is. Is it processing beyond L2?
>>>>>>
>>>>>> My opinion is that any API which is implemented differently
>>>>>> for different hardware should be in DPDK.
>>>>>
>>>>> We need to define the treatment of SIMD optimization (not HW-specific
>>>>> but architecture-specific) as well, as the graph and node library
>>>>> will have SIMD optimizations too.
>>>>
>>>> I think SIMD optimization is generic to any performance-related project,
>>>> not specific to DPDK.
>>>>
>>>>
>>>>> In general, if the above policy is enforced, we would need to split DPDK like below:
>>>>> dpdk.git
>>>>> ----------
>>>>> librte_compressdev
>>>>> librte_bbdev
>>>>> librte_eventdev
>>>>> librte_pci
>>>>> librte_rawdev
>>>>> librte_eal
>>>>> librte_security
>>>>> librte_mempool
>>>>> librte_mbuf
>>>>> librte_cryptodev
>>>>> librte_ethdev
>>>>>
>>>>> other repo(s).
>>>>> ----------------
>>>>> librte_cmdline
>>>>> librte_cfgfile
>>>>> librte_bitratestats
>>>>> librte_efd
>>>>> librte_latencystats
>>>>> librte_kvargs
>>>>> librte_jobstats
>>>>> librte_gso
>>>>> librte_gro
>>>>> librte_flow_classify
>>>>> librte_pipeline
>>>>> librte_net
>>>>> librte_metrics
>>>>> librte_meter
>>>>> librte_member
>>>>> librte_table
>>>>> librte_stack
>>>>> librte_sched
>>>>> librte_rib
>>>>> librte_reorder
>>>>> librte_rcu
>>>>> librte_power
>>>>> librte_distributor
>>>>> librte_bpf
>>>>> librte_ip_frag
>>>>> librte_hash
>>>>> librte_fib
>>>>> librte_timer
>>>>> librte_telemetry
>>>>> librte_port
>>>>> librte_pdump
>>>>> librte_kni
>>>>> librte_acl
>>>>> librte_vhost
>>>>> librte_ring
>>>>> librte_lpm
>>>>> librte_ipsec
>>>>
>>>> I think it is a fair conclusion of the scope I am arguing, yes.
>>>
>>> OK. See below.
>>>
>>>>>> What is expected to be maintained, tested, etc.
>>>>>
>>>>> We need to maintain and test other code in the OTHER dpdk repo as well.
>>>>
>>>> Yes but the ones responsible are not the same.
>>>
>>> I see your point. Can I interpret it as you would like to NOT take
>>> responsibility for the SW libraries (items enumerated in the second list)?
>>
>> It's not only about me. This is a community decision.
>
> OK. Let's wait for community feedback.
> We will probably discuss this more in the public TB meeting on 26th Feb.
>
>>
>>
>>> I think the main question is how it will be delivered to distros
>>> and/or end users, and what will be part of the dpdk release?
>>>
>>> I can think of two options. Maybe the distro folks have a better view on this.
>>>
>>> Option 1:
>>> - Split dpdk to dpdk-core.git, dpdk-algo.git etc based on the
>>> functionalities and maintainer's availability.
>>> - Follow existing release cadence and deliver single release tarball
>>> with content from the above repos.
>>>
>>> Option 2:
>>> - Introduce more subtrees(dpdk-next-algo.git etc) based on the
>>> functionalities and maintainer's availability.
>>> - Follow existing release cadence and have a pull request to main
>>> dpdk.git just like Linux kernel or existing scheme of things.
>>>
>>> I am for option 2.
>>>
>>> NOTE: For this new graph and node library, I would like to make a new
>>> subtree in the existing scheme of things so that it will NOT be a
>>> burden for you to manage.
>>
>> Option 2 is to make maintainers' lives easier.
>> Keeping all libraries in the same repository allows having
>> a unique release and a central place for the apps and docs.
>>
>> Option 1 may make contributors' lives easier, if we consider that
>> adding new libraries can make contributions harder in case of dependencies.
>> Option 1 also makes repositories smaller, so maybe easier to approach.
>> It makes it easier to fully validate the testing and quality of a repository.
>> Having separate packages makes it easier to select what is distributed and supported.
>
> If the final dpdk release tarball looks the same for option 1 and option 2,
> then I think option 1 is just overhead to manage cross-repo dependencies.
>
> I agree with Thomas; it is better to decide as a community what
> direction we need to take, and align existing and new libraries with that scheme.
>
+1 to Option 2.
As Jerin points out, it has allowed other larger communities to scale effectively.
>
>>
>> After years thinking about the scope of DPDK repository,
>> I am still not sure which solution is best.
>> I really would like to see more opinions, thanks.
>
> Yes.
>
>>
>>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
` (5 preceding siblings ...)
2020-01-31 18:34 ` [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem Ray Kinsella
@ 2020-02-25 5:22 ` Honnappa Nagarahalli
2020-02-25 6:14 ` Jerin Jacob
6 siblings, 1 reply; 31+ messages in thread
From: Honnappa Nagarahalli @ 2020-02-25 5:22 UTC (permalink / raw)
To: jerinj, dev
Cc: pkapoor, ndabilpuram, kirankumark, pbhagavatula, pathreya,
nsaxena, sshankarnara, thomas, david.marchand, ferruh.yigit,
arybchenko, Ajit Khaparde (ajit.khaparde@broadcom.com),
xiaolong.ye, rasland, maxime.coquelin, Akhil.goyal@nxp.com,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, Gavin Hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard, jerinj, Honnappa Nagarahalli,
nd, nd
<snip>
>
> From: Jerin Jacob <jerinj@marvell.com>
>
> This RFC is targeted for v20.05 release.
>
> This RFC patch includes an implementation of graph architecture for packet
> processing using DPDK primitives.
>
> Using graph traversal for packet processing is a proven architecture that has
> been implemented in various open source libraries.
>
> Graph architecture for packet processing enables abstracting the data
> processing functions as “nodes” and “links” them together to create a complex
> “graph” to create reusable/modular data processing functions.
>
> The RFC patch further includes performance enhancements and modularity to
> the DPDK as discussed in more detail below.
>
> What this RFC patch contains:
> -----------------------------
> 1) The API definition to "create" nodes and "link" together to create a "graph"
> for packet processing. See, lib/librte_graph/rte_graph.h
>
> 2) The Fast path API definition for the graph walker and enqueue function
> used by the workers. See, lib/librte_graph/rte_graph_worker.h
>
> 3) Optimized SW implementation for (1) and (2). See, lib/librte_graph/
>
> 4) Test case to verify the graph infrastructure functionality See,
> app/test/test_graph.c
>
> 5) Performance test cases to evaluate the cost of graph walker and nodes
> enqueue fast-path function for various combinations.
>
> See app/test/test_graph_perf.c
>
> 6) Packet processing nodes(Null, Rx, Tx, Pkt drop, IPV4 rewrite, IPv4 lookup)
> using graph infrastructure. See lib/librte_node/*
>
> 7) An example application to showcase l3fwd (functionality same as existing
> examples/l3fwd) using graph infrastructure and use packets processing nodes
> (item (6)). See examples/l3fwd-graph/.
>
> Performance
> -----------
> 1) Graph walk and node enqueue overhead can be tested with performance
> test case application [1] # If all packets go from a node to another node (we
> call it as "homerun") then it will be just a pointer swap for a burst of packets.
> # In the worst case, a couple of handful cycles to move an object from a node
> to another node.
>
> 2) Performance comparison with existing l3fwd (The complete static code with
> out any nodes) vs modular l3fwd-graph with 5 nodes (ip4_lookup, ip4_rewrite,
> ethdev_tx, ethdev_rx, pkt_drop).
> Here is graphical representation of the l3fwd-graph as Graphviz dot file:
> http://bit.ly/39UPPGm
>
> # l3fwd-graph performance is -2.5% wrt static l3fwd.
>
> # We have simulated the similar test with existing librte_pipeline application
> [4].
> ip_pipline application is -48.62% wrt static l3fwd.
>
> The above results are on octeontx2. It may vary on other platforms.
> The platforms with higher L1 and L2 caches will have further better
> performance.
>
> Tested architectures:
> --------------------
> 1) AArch64
> 2) X86
>
>
> Graph library Features
> ----------------------
> 1) Nodes as plugins
> 2) Support for out of tree nodes
> 3) Multi-process support.
> 4) Low overhead graph walk and node enqueue
> 5) Low overhead statistics collection infrastructure
> 6) Support for exporting the graph as a Graphviz dot file.
> See rte_graph_export()
> Example of exported graph: http://bit.ly/2PqbqOy
> 7) Allow having another graph walk implementation in the future by
> segregating the fast path and slow path code.
>
>
> Advantages of Graph architecture:
> ---------------------------------
>
> 1) Memory latency is the enemy of high-speed packet processing; moving
> similar packet processing code into a node reduces I-cache and D-cache
> misses.
> 2) Exploits the probability that most packets will follow the same nodes in the
> graph.
> 3) Allows SIMD instructions for a node's packet processing.
> 4) The modular scheme allows having reusable nodes for the consumers.
> 5) The modular scheme allows us to abstract the vendor/HW-specific
> optimizations as a node.
>
>
> What is different than existing libpipeline library
> ---------------------------------------------------
> At a very high level, libpipeline was created to provide a modular plugin interface.
> Based on our analysis, performance is better with the graph model.
> Check the details under the Performance section, item (2).
>
> This rte_graph implementation addresses some of the
> architecture/implementation limitations of libpipeline.
>
> 1) Use cases like IP fragmentation and TCP ACK processing (with new TCP data
> sent out in the same context) have a problem, as rte_pipeline_run() passes just
> a 64-bit pkt_mask to the different tables, and packet pointers are stored in a
> single array in struct rte_pipeline_run.
>
> In the graph architecture, the node has complete control over how many packets
> are output to the next node.
>
> 2) Since pkts_mask is passed to the different tables, it takes multiple for loops to
> extract packets out of a fragmented pkts_mask. This makes it difficult to prefetch
> a set of packets ahead. This issue does not exist in the graph architecture.
>
> 3) Every table has two/three function pointers, unlike the graph architecture,
> which has a single function pointer per node.
>
> 4) The current libpipeline main fast-path function doesn't support a tree-like
> topology where 64 packets can be redirected to 64 different tables.
> It is currently limited to a table-based next table id instead of a per-packet,
> action-based next table id. So in a typical case, we need to cascade tables and
> sequentially go through all the tables to reach the last table.
>
> 5) The pkt_mask limit is 64 bits, which caps the maximum possible burst size.
> The graph library supports burst sizes up to 256.
>
> In short, the two are significantly different architectures.
> Keeping both in DPDK and allowing the end user to choose the model
> would be the more appropriate decision.
>
>
> Why this RFC
> ------------
> 1) We believe the graph architecture provides the best performance for a
> reusable/modular packet processing framework.
> Since DPDK does not have one, it is good to have it in DPDK.
>
> 2) Based on our experience, NPU HW accelerators are very different from one
> vendor to another. Going forward, we believe API abstraction may
> not be enough to abstract the differences in HW. Vendor-specific nodes can
> abstract the HW differences and reuse the generic nodes as needed.
> This would help both the silicon vendors and DPDK end users.
If you are proposing this as a new way to provide HW abstractions, then we will be restricting the application programming model to following the graph subsystem. IMO, the HW abstractions should be available irrespective of the programming model.
The graph model of packet processing might not be applicable to all use cases.
>
> 3) The framework enables the protocol stack to use the native mbuf for graph
> processing, avoiding any conversion between formats, for better
> performance.
>
> 4) DPDK becomes the "go-to library" for userspace HW acceleration.
> It is good to have a native graph packet processing library in DPDK.
>
> 5) Obviously, Our customers are interested in Graph library in DPDK :-)
>
> Identified tweaking for better performance on different targets
> ---------------------------------------------------------------
> 1) Test with various burst size values (256, 128, 64, 32) using the
> CONFIG_RTE_GRAPH_BURST_SIZE config option.
> Based on our testing, on x86 and arm64 servers the sweet spot is a burst
> size of 256, while on arm64 embedded SoCs it is either 64 or 128.
>
> 2) Disable node statistics (use CONFIG_RTE_LIBRTE_GRAPH_STATS config
> option) if not needed.
>
> 3) Use arm64 optimized memory copy for arm64 architecture by selecting
> CONFIG_RTE_ARCH_ARM64_MEMCPY.
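>
> As an illustrative configuration only (the option names are from items
> 1-3 above; the values shown are indicative, e.g. for an arm64 server
> build with stats disabled):
>
> CONFIG_RTE_GRAPH_BURST_SIZE=256
> CONFIG_RTE_LIBRTE_GRAPH_STATS=n
> CONFIG_RTE_ARCH_ARM64_MEMCPY=y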
>
> Commands to run tests
> ---------------------
>
> [1]
> perf test:
> echo "graph_perf_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
>
> [2]
> functionality test:
> echo "graph_autotest" | sudo ./build/app/test/dpdk-test -c 0x30
>
> [3]
> l3fwd-graph:
> ./l3fwd-graph -c 0x100 -- -p 0x3 --config="(0, 0, 8)" -P
>
> [4]
> # ./ip_pipeline -c 0xff0000 -- -s route.cli
>
> route.cli (copy-paste to the shell to avoid DOS format issues):
>
> https://pastebin.com/raw/B4Ktx7TT
>
>
> Next steps
> -----------------------------
> 1) Feedback from the community on the library.
> 2) Collect the API requirements from the community.
> 3) Sending the next version by addressing the community's initial
> feedback and fixing the following identified "pending items".
>
>
> Pending items (Will be addressed in next revision)
> -------------------------------------------------
> 1) Add documentation as a patch
> 2) Add Doxygen API documentation
> 3) Split the patches at a more logical level for a better review.
> 4) Code cleanup
> 5) More optimizations in the nodes and graph infrastructure.
>
>
> Programming guide and API walk-through
> --------------------------------------
> # Anatomy of Node:
> ~~~~~~~~~~~~~~~~~
> See the
> https://github.com/jerinjacobk/share/blob/master/Anatomy_of_a_node.svg
>
> The above diagram depicts the anatomy of a node.
> The node is the basic building block of the graph framework.
>
> A node consists of:
> a) process():
>
> The callback function invoked by the worker thread via the
> rte_graph_walk() function when there is data to be processed by the node.
> A graph node processes objects in its process() function and enqueues them
> to the next downstream node using the rte_node_enqueue*() functions.
>
> b) Context memory:
>
> Memory allocated by the library to store the node-specific context
> information, which is used by the process(), init() and fini() callbacks.
>
> c) init():
>
> The callback function invoked by rte_graph_create() when a
> node gets attached to a graph.
>
> d) fini():
>
> The callback function invoked by rte_graph_destroy() when a
> node gets detached from a graph.
>
>
> e) Node name:
>
> The name of the node. When a node registers with the graph library, the
> library returns an ID of rte_node_t type. Either the ID or the name can
> be used to look up the node;
> rte_node_from_name() and rte_node_id_to_name() are the node lookup
> functions.
>
> f) nb_edges:
>
> The number of downstream nodes connected to this node. The next_nodes[]
> array stores the
> downstream node objects. The rte_node_edge_update() and
> rte_node_edge_shrink()
> functions shall be used to update the next_nodes[] objects. Consumers of
> the node
> APIs are free to update the next_nodes[] objects until rte_graph_create() is invoked.
>
> g) next_nodes[]:
>
> The dynamic array that stores the downstream nodes connected to this node.
>
>
> # Node creation and registration
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> a) The node implementer creates the node by implementing the ops and
> attributes of 'struct rte_node_register'.
> b) The library registers the node by invoking RTE_NODE_REGISTER at
> library load, using the constructor scheme.
> The constructor scheme is used here to support multi-process. A minimal
> registration sketch follows below.
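>
> A minimal registration sketch (the callback body and the exact field
> names of 'struct rte_node_register' are indicative; see
> lib/librte_graph/rte_graph.h in this patch for the real definition):
>
> #include <rte_graph.h>
> #include <rte_graph_worker.h>
>
> static uint16_t
> my_node_process(struct rte_graph *graph, struct rte_node *node,
>                 void **objs, uint16_t nb_objs)
> {
>         /* Process the burst, then enqueue everything to edge 0. */
>         rte_node_enqueue(graph, node, 0, objs, nb_objs);
>         return nb_objs;
> }
>
> static struct rte_node_register my_node = {
>         .name = "my_node",
>         .process = my_node_process,
>         .nb_edges = 1,
>         .next_nodes = { "pkt_drop" },
> };
>
> /* Registered via a constructor at load time, to support multi-process. */
> RTE_NODE_REGISTER(my_node);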
>
>
> # Link the Nodes to create the graph topology
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> See the
> https://github.com/jerinjacobk/share/blob/master/Link_the_nodes.svg
>
> The above diagram shows a graph topology after linking N nodes.
>
> Once nodes are available to the program, application or node public API
> functions
> can link them together to create a complex packet processing graph.
>
> There are multiple strategies for linking the nodes.
>
> Method a) Provide the next_nodes[] at node registration time.
> See 'struct rte_node_register::nb_edges'. This addresses the
> static
> node scheme, where one knows the next_nodes[] of the node upfront.
>
> Method b) Use rte_node_edge_get(), rte_node_edge_update() and
> rte_node_edge_shrink() to
> update the next_nodes[] links for the node dynamically.
>
> Method c) Use rte_node_clone() to clone an already existing node.
> When rte_node_clone() is invoked, the library clones all the attributes
> of the node and creates a new one. The name of the cloned node shall be
> "parent_node_name-user_provided_name". This method enables the use
> case of Rx and Tx
> nodes, where multiple such nodes need to be cloned based on the number
> of CPUs
> available in the system. The cloned nodes will be identical, except for the "context
> memory".
> The context memory holds the port and queue pair information in the case of Rx and Tx
> ethdev nodes. A sketch of methods b) and c) follows below.
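>
> A minimal sketch of methods b) and c) (the node and edge names are
> illustrative):
>
> const char *next_nodes[] = {"ip4_lookup", "pkt_drop"};
> rte_node_t base, clone;
>
> /* b) Point edges 0..1 of the base Rx node at the nodes above. */
> base = rte_node_from_name("ethdev_rx");
> rte_node_edge_update(base, 0, next_nodes, 2);
>
> /* c) Clone it for (port 0, queue 0); per the naming scheme above,
>  * the clone becomes "ethdev_rx-0-0". */
> clone = rte_node_clone(base, "0-0");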
>
> # Create the graph object
> ~~~~~~~~~~~~~~~~~~~~~~~~~
> Now that the nodes are linked, it is time to create a graph by including
> the required nodes. The application can provide a set of node patterns to
> form a graph object.
> The fnmatch() API is used underneath for the pattern matching to include
> the required nodes.
>
> The rte_graph_create() API shall be used to create the graph.
>
> Example of a graph object creation (a creation sketch follows below):
>
> {"ethdev_rx_0_0", "ipv4-*", "ethdev_tx_0_*"}
>
> In the above example, a graph object will be created with the ethdev Rx
> node of port 0 and queue 0, all ipv4* nodes in the system,
> and the ethdev Tx node of port 0 with all queues.
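>
> A minimal creation sketch (the layout of 'struct rte_graph_param' is
> indicative; see rte_graph.h in this patch for the real definition):
>
> static const char *patterns[] = {
>         "ethdev_rx_0_0", "ipv4-*", "ethdev_tx_0_*",
> };
> struct rte_graph_param prm = {
>         .socket_id = SOCKET_ID_ANY,
>         .nb_node_patterns = 3,
>         .node_patterns = patterns,
> };
> rte_graph_t id = rte_graph_create("worker0", &prm);
> if (id == RTE_GRAPH_ID_INVALID)
>         rte_exit(EXIT_FAILURE, "graph creation failed\n");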
>
>
> # Multi core graph processing
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> In the current graph library implementation, specifically,
> the rte_graph_walk() and rte_node_enqueue* fast-path API functions
> are designed to work on a single core for better performance.
> The fast-path API works on a graph object, so the multi-core graph
> processing strategy would be to create a graph object PER WORKER.
>
>
> # In fast path:
> ~~~~~~~~~~~~~~~
>
> Typical fast-path code looks like below, where the application
> gets the fast-path graph object through rte_graph_lookup()
> on the worker thread and runs rte_graph_walk() in a tight loop.
>
> struct rte_graph *graph = rte_graph_lookup("worker0");
>
> while (!done) {
> rte_graph_walk(graph);
> }
>
> # Context update when graph walk in action
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The fast-path object for the node is `struct rte_node`.
>
> It may be possible that in the slow path, or while the graph walk is in
> action, the user needs to update the context of the node, and hence needs
> access to the struct rte_node * memory.
>
> The rte_graph_foreach_node(), rte_graph_node_get() and
> rte_graph_node_get_by_name()
> APIs can be used to get the struct rte_node *. The rte_graph_foreach_node()
> iterator
> function works on the struct rte_graph * fast-path graph object, while the
> others work on the graph ID or name. A small sketch follows below.
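>
> A minimal slow-path context update sketch (struct my_rx_ctx and its
> fields are hypothetical; node->ctx is the node's context memory):
>
> struct my_rx_ctx { uint16_t port_id; uint16_t queue_id; };
>
> struct rte_node *n =
>         rte_graph_node_get_by_name("worker0", "ethdev_rx-0-0");
> struct my_rx_ctx *ctx = (struct my_rx_ctx *)n->ctx;
> ctx->queue_id = 1; /* e.g. retarget this node to another Rx queue */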
>
>
> # Get the node statistics using graph cluster
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The user may need to know the aggregate stats of a node across
> multiple graph objects, especially in the situation where each
> graph object is bound to a worker thread.
>
> A graph cluster object is introduced for statistics.
> rte_graph_cluster_stats_create()
> shall be used for creating a graph cluster with multiple graph objects, and
> rte_graph_cluster_stats_get() to get the aggregate node statistics.
> A creation sketch follows after the example output below.
>
> An example statistics output from rte_graph_cluster_stats_get()
>
> +-------+----------+------------+--------------+----------+---------------+------------+
> |Node   |calls     |objs        |realloc_count |objs/call |objs/sec(10E6) |cycles/call |
> +-------+----------+------------+--------------+----------+---------------+------------+
> |node0  |12977424  |3322220544  |5             |256.000   |3047.151872    |20.0000     |
> |node1  |12977653  |3322279168  |0             |256.000   |3047.210496    |17.0000     |
> |node2  |12977696  |3322290176  |0             |256.000   |3047.221504    |17.0000     |
> |node3  |12977734  |3322299904  |0             |256.000   |3047.231232    |17.0000     |
> |node4  |12977784  |3322312704  |1             |256.000   |3047.243776    |17.0000     |
> |node5  |12977825  |3322323200  |0             |256.000   |3047.254528    |17.0000     |
> +-------+----------+------------+--------------+----------+---------------+------------+
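>
> A minimal aggregation sketch (the layout of
> 'struct rte_graph_cluster_stats_param' is indicative, including the
> assumption that fn == NULL selects a default printer writing to f):
>
> const char *patterns[] = {"worker*"};
> struct rte_graph_cluster_stats_param prm = {
>         .socket_id = SOCKET_ID_ANY,
>         .fn = NULL,
>         .f = stdout,
>         .nb_graph_patterns = 1,
>         .graph_patterns = patterns,
> };
> struct rte_graph_cluster_stats *stats =
>         rte_graph_cluster_stats_create(&prm);
>
> rte_graph_cluster_stats_get(stats, 0); /* dump aggregate node stats */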
>
> # Node writing guidelines
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The process() function of a node is a fast-path function and needs to be
> written
> carefully to achieve maximum performance.
>
> Broadly speaking, there are two different types of nodes.
>
> 1) The first kind of node has a fixed next_nodes[] for the
> complete burst (like ethdev_rx, ethdev_tx) and is simple to write.
> The process() function can move the object burst to the next node either using
> rte_node_next_stream_move(), or using rte_node_next_stream_get() and
> rte_node_next_stream_put().
>
>
> 2) The second kind is the "intermediate node", which decides the
> next_nodes[]
> to send to on a per-packet basis. In these nodes,
>
> a) firstly, there has to be the best possible packet processing logic;
> b) secondly, each packet needs to be queued to its next node.
>
> At least on some architectures, we get around ~10% more performance if we
> can avoid copying
> packet pointers from one node to the next, as it is ~= memcpy(BURST_SIZE x
> sizeof(void *)) x NODE_COUNT.
>
> This can be avoided only in the case where all the packets are destined to
> the same
> next node. We call this the "home run" case, and we use
> rte_node_next_stream_move() to
> just move the burst object array by swapping pointers, i.e. moving the stream
> from one node to the next node
> with the least number of cycles.
>
> Example of an intermediate node implementation with home run
> (a C sketch follows after this list):
> a) Start with the speculation that next_node = ctx->next_node.
> This could be the next_node used in the previous invocation of
> this node.
> b) Get the next_node stream array and space using
> rte_node_next_stream_get(next_node, &space).
> c) While space != 0 and n_pkts_left != 0,
> prefetch the next pkt_set and process the current pkt_set to find their next node.
> d) If all the next nodes of the current pkt_set match the speculated next node,
> just count them as successfully speculated (last_spec) so far and
> continue the loop without actually moving them to the next node.
> Else, if there is a mismatch,
> copy all the pkt_set pointers that were last_spec and
> move the current pkt_set to their respective next nodes using
> rte_node_enqueue_x1(). Also, one of the next nodes can be updated as the
> speculated next_node if it is more probable. Also set last_spec = 0.
> e) If n_pkts_left != 0 and space != 0,
> goto c) as there is space in the speculated next_node.
> f) If last_spec == n_pkts_left,
> then we successfully speculated all the packets to the right next node.
> Just call rte_node_next_stream_move(node, next_node) to move the
> stream/obj array to the next node. This is the home run, where we avoided
> the memcpy of buffer pointers to the next node.
> g) If space == 0 and n_pkts_left != 0,
> goto b).
> h) Update ctx->next_node with the more probable next node.
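>
> A condensed C sketch of the loop above (classification is abstracted
> behind a hypothetical get_next_node(); the e)/g) refill handling and
> the ctx update of step h) are omitted for brevity):
>
> /* Needs rte_graph_worker.h, rte_branch_prediction.h and string.h. */
> static uint16_t
> my_mid_node_process(struct rte_graph *graph, struct rte_node *node,
>                     void **objs, uint16_t nb_objs)
> {
>         rte_edge_t next = *(rte_edge_t *)node->ctx; /* a) speculation */
>         uint16_t held = 0, last_spec = 0, i;
>         void **from = objs, **to_next;
>
>         /* b) Reserve space in the speculated next node's stream. */
>         to_next = rte_node_next_stream_get(graph, node, next, nb_objs);
>
>         for (i = 0; i < nb_objs; i++) {             /* c) */
>                 rte_edge_t e = get_next_node(objs[i]);
>
>                 if (likely(e == next)) {
>                         last_spec++;                /* d) hit: just count */
>                         continue;
>                 }
>                 /* d) miss: flush the speculated run, then redirect. */
>                 memcpy(to_next + held, from, last_spec * sizeof(void *));
>                 held += last_spec;
>                 from += last_spec + 1;
>                 last_spec = 0;
>                 rte_node_enqueue_x1(graph, node, e, objs[i]);
>         }
>
>         if (likely(last_spec == nb_objs)) {         /* f) home run */
>                 rte_node_next_stream_move(graph, node, next);
>                 return nb_objs;
>         }
>         /* Flush the tail run and publish the final count. */
>         memcpy(to_next + held, from, last_spec * sizeof(void *));
>         rte_node_next_stream_put(graph, node, next, held + last_spec);
>         return nb_objs;
> }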
>
> # In-tree node documentation
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> a) librte_node/ethdev_rx.c:
> This node does rte_eth_rx_burst() into a stream buffer acquired using
> rte_node_next_stream_get() and does rte_node_next_stream_put(count)
> only when there are packets received. Each rte_node works on only
> one Rx port and queue that it gets from node->context.
> For each (port X, rx_queue Y), an rte_node is cloned from
> ethdev_rx_base_node
> as "ethdev_rx-X-Y" in rte_node_eth_config() along with updating
> node->context. Each graph needs to be associated with a unique
> rte_node for a (port, rx_queue).
>
> b) librte_node/ethdev_tx.c:
> This node does rte_eth_tx_burst() for a burst of objs received by it.
> It sends the burst to a fixed Tx port and queue taken from
> node->context. For each (port X), this rte_node is cloned from
> ethdev_tx_node_base as "ethdev_tx-X" in rte_node_eth_config()
> along with updating node->context.
> Since each graph doesn't need more than one Tx queue per port,
> a Tx queue is assigned to each rte_node instance based on the graph id.
> Each graph needs to be associated with an rte_node for each (port).
>
> c) librte_node/pkt_drop.c:
> This node frees all the objects that are passed to it.
>
> d) librte_node/ip4_lookup.c:
> This node is an intermediate node that does an LPM lookup for the received
> ipv4 packets, and the result determines each packet's next node.
> a) On a successful LPM lookup, the result contains the next_node id and
> the next-hop id with which the packet needs to be further processed.
> b) On LPM lookup failure, objects are redirected to the pkt_drop node.
> rte_node_ip4_route_add() is the control path API to add ipv4 routes.
> To achieve the home run, we use rte_node_next_stream_move() as mentioned
> in the
> sections above.
>
> e) librte_node/ip4_rewrite.c:
> This node gets packets from the ip4_lookup node, with the next-hop id for each
> packet embedded in rte_node_mbuf_priv1(mbuf)->nh. This id is used
> to determine the L2 header to be written to the pkt before sending
> the pkt out to a particular ethdev_tx node.
> rte_node_ip4_rewrite_add() is the control path API to add next-hop info.
> A control-path sketch for both nodes follows below.
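>
> A minimal control-path sketch for d) and e) (the argument lists are
> indicative; the route and next-hop values are illustrative):
>
> /* 10.0.2.0/24 -> next-hop 0; matching packets go to ip4_rewrite. */
> rte_node_ip4_route_add(RTE_IPV4(10, 0, 2, 0), 24, 0,
>                        RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
>
> /* Next-hop 0: prepend this L2 header, then send via Tx port 0. */
> uint8_t l2[14] = { 0 /* dst MAC, src MAC, ethertype */ };
> rte_node_ip4_rewrite_add(0, l2, sizeof(l2), 0);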
>
> Jerin Jacob (1):
> graph: introduce graph subsystem
>
> Kiran Kumar K (1):
> test: add graph functional tests
>
> Nithin Dabilpuram (2):
> node: add packet processing nodes
> example/l3fwd_graph: l3fwd using graph architecture
>
> Pavan Nikhilesh (1):
> test: add graph performance test cases.
>
> app/test/Makefile | 5 +
> app/test/meson.build | 10 +-
> app/test/test_graph.c | 820 +++++++++++++++++
> app/test/test_graph_perf.c | 888 +++++++++++++++++++
> config/common_base | 13 +
> config/rte_config.h | 4 +
> examples/Makefile | 3 +
> examples/l3fwd-graph/Makefile | 58 ++
> examples/l3fwd-graph/main.c | 1131 ++++++++++++++++++++++++
> examples/l3fwd-graph/meson.build | 13 +
> examples/meson.build | 6 +-
> lib/Makefile | 6 +
> lib/librte_graph/Makefile | 28 +
> lib/librte_graph/graph.c | 578 ++++++++++++
> lib/librte_graph/graph_debug.c | 81 ++
> lib/librte_graph/graph_ops.c | 163 ++++
> lib/librte_graph/graph_populate.c | 224 +++++
> lib/librte_graph/graph_private.h | 113 +++
> lib/librte_graph/graph_stats.c | 396 +++++++++
> lib/librte_graph/meson.build | 11 +
> lib/librte_graph/node.c | 419 +++++++++
> lib/librte_graph/rte_graph.h | 277 ++++++
> lib/librte_graph/rte_graph_version.map | 46 +
> lib/librte_graph/rte_graph_worker.h | 280 ++++++
> lib/librte_node/Makefile | 30 +
> lib/librte_node/ethdev_ctrl.c | 106 +++
> lib/librte_node/ethdev_rx.c | 218 +++++
> lib/librte_node/ethdev_rx.h | 17 +
> lib/librte_node/ethdev_rx_priv.h | 45 +
> lib/librte_node/ethdev_tx.c | 74 ++
> lib/librte_node/ethdev_tx_priv.h | 33 +
> lib/librte_node/ip4_lookup.c | 657 ++++++++++++++
> lib/librte_node/ip4_lookup_priv.h | 17 +
> lib/librte_node/ip4_rewrite.c | 340 +++++++
> lib/librte_node/ip4_rewrite_priv.h | 44 +
> lib/librte_node/log.c | 14 +
> lib/librte_node/meson.build | 8 +
> lib/librte_node/node_private.h | 61 ++
> lib/librte_node/null.c | 23 +
> lib/librte_node/pkt_drop.c | 26 +
> lib/librte_node/rte_node_eth_api.h | 31 +
> lib/librte_node/rte_node_ip4_api.h | 33 +
> lib/librte_node/rte_node_version.map | 9 +
> lib/meson.build | 5 +-
> meson.build | 1 +
> mk/rte.app.mk | 2 +
> 46 files changed, 7362 insertions(+), 5 deletions(-)
> create mode 100644 app/test/test_graph.c
> create mode 100644 app/test/test_graph_perf.c
> create mode 100644 examples/l3fwd-graph/Makefile
> create mode 100644 examples/l3fwd-graph/main.c
> create mode 100644 examples/l3fwd-graph/meson.build
> create mode 100644 lib/librte_graph/Makefile
> create mode 100644 lib/librte_graph/graph.c
> create mode 100644 lib/librte_graph/graph_debug.c
> create mode 100644 lib/librte_graph/graph_ops.c
> create mode 100644 lib/librte_graph/graph_populate.c
> create mode 100644 lib/librte_graph/graph_private.h
> create mode 100644 lib/librte_graph/graph_stats.c
> create mode 100644 lib/librte_graph/meson.build
> create mode 100644 lib/librte_graph/node.c
> create mode 100644 lib/librte_graph/rte_graph.h
> create mode 100644 lib/librte_graph/rte_graph_version.map
> create mode 100644 lib/librte_graph/rte_graph_worker.h
> create mode 100644 lib/librte_node/Makefile
> create mode 100644 lib/librte_node/ethdev_ctrl.c
> create mode 100644 lib/librte_node/ethdev_rx.c
> create mode 100644 lib/librte_node/ethdev_rx.h
> create mode 100644 lib/librte_node/ethdev_rx_priv.h
> create mode 100644 lib/librte_node/ethdev_tx.c
> create mode 100644 lib/librte_node/ethdev_tx_priv.h
> create mode 100644 lib/librte_node/ip4_lookup.c
> create mode 100644 lib/librte_node/ip4_lookup_priv.h
> create mode 100644 lib/librte_node/ip4_rewrite.c
> create mode 100644 lib/librte_node/ip4_rewrite_priv.h
> create mode 100644 lib/librte_node/log.c
> create mode 100644 lib/librte_node/meson.build
> create mode 100644 lib/librte_node/node_private.h
> create mode 100644 lib/librte_node/null.c
> create mode 100644 lib/librte_node/pkt_drop.c
> create mode 100644 lib/librte_node/rte_node_eth_api.h
> create mode 100644 lib/librte_node/rte_node_ip4_api.h
> create mode 100644 lib/librte_node/rte_node_version.map
>
> --
> 2.24.1
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem
2020-02-25 5:22 ` Honnappa Nagarahalli
@ 2020-02-25 6:14 ` Jerin Jacob
0 siblings, 0 replies; 31+ messages in thread
From: Jerin Jacob @ 2020-02-25 6:14 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: jerinj, dev, pkapoor, ndabilpuram, kirankumark, pbhagavatula,
pathreya, nsaxena, sshankarnara, thomas, david.marchand,
ferruh.yigit, arybchenko,
Ajit Khaparde (ajit.khaparde@broadcom.com),
xiaolong.ye, rasland, maxime.coquelin, Akhil.goyal@nxp.com,
cristian.dumitrescu, john.mcnamara, bruce.richardson,
anatoly.burakov, Gavin Hu, drc, konstantin.ananyev,
pallavi.kadam, olivier.matz, gage.eads, nikhil.rao,
erik.g.carrillo, hemant.agrawal, artem.andreev, sthemmin,
shahafs, keith.wiles, mattias.ronnblom, jasvinder.singh,
vladimir.medvedkin, mdr, techboard, nd
On Tue, Feb 25, 2020 at 10:53 AM Honnappa Nagarahalli
<Honnappa.Nagarahalli@arm.com> wrote:
> > 2) Based on our experience, NPU HW accelerators are very different from one
> > vendor to another. Going forward, we believe API abstraction may
> > not be enough to abstract the differences in HW. Vendor-specific nodes can
> > abstract the HW differences and reuse the generic nodes as needed.
> > This would help both the silicon vendors and DPDK end users.
> If you are proposing this as a new way to provide HW abstractions, then we will be restricting the application programming model to following the graph subsystem. IMO, the HW abstractions should be available irrespective of the programming model.
> The graph model of packet processing might not be applicable to all use cases.
No, I am not proposing this as the new way to provide HW abstraction
in DPDK. API-based HW abstraction will continue as it was done
earlier.
^ permalink raw reply [flat|nested] 31+ messages in thread
end of thread, other threads:[~2020-02-25 6:14 UTC | newest]
Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-31 17:01 [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 1/5] " jerinj
2020-02-02 10:34 ` Stephen Hemminger
2020-02-02 10:35 ` Stephen Hemminger
2020-02-02 11:08 ` Jerin Jacob
2020-02-02 10:38 ` Stephen Hemminger
2020-02-02 11:21 ` Jerin Jacob
2020-02-03 9:14 ` Gaetan Rivet
2020-02-03 9:49 ` Jerin Jacob
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 2/5] node: add packet processing nodes jerinj
2020-01-31 17:01 ` [dpdk-dev] [RFC PATCH 3/5] test: add graph functional tests jerinj
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 4/5] test: add graph performance test cases jerinj
2020-01-31 17:02 ` [dpdk-dev] [RFC PATCH 5/5] example/l3fwd_graph: l3fwd using graph architecture jerinj
2020-01-31 18:34 ` [dpdk-dev] [RFC PATCH 0/5] graph: introduce graph subsystem Ray Kinsella
2020-02-01 5:44 ` Jerin Jacob
2020-02-17 7:19 ` Jerin Jacob
2020-02-17 8:38 ` Thomas Monjalon
2020-02-17 10:58 ` Jerin Jacob
2020-02-21 10:30 ` Jerin Jacob
2020-02-21 11:10 ` Thomas Monjalon
2020-02-21 15:38 ` Mattias Rönnblom
2020-02-21 15:53 ` dave
2020-02-21 16:04 ` Thomas Monjalon
2020-02-21 15:56 ` Jerin Jacob
2020-02-21 16:14 ` Thomas Monjalon
2020-02-22 9:05 ` Jerin Jacob
2020-02-22 9:52 ` Thomas Monjalon
2020-02-22 10:24 ` Jerin Jacob
2020-02-24 10:59 ` Ray Kinsella
2020-02-25 5:22 ` Honnappa Nagarahalli
2020-02-25 6:14 ` Jerin Jacob