From: Nitin Saxena <nsaxena@marvell.com>
To: Jerin Jacob <jerinj@marvell.com>,
Kiran Kumar K <kirankumark@marvell.com>,
Nithin Dabilpuram <ndabilpuram@marvell.com>,
Zhirun Yan <yanzhirun_163@163.com>,
Robin Jarry <rjarry@redhat.com>,
Christophe Fontaine <cfontain@redhat.com>
Cc: <dev@dpdk.org>, Nitin Saxena <nsaxena16@gmail.com>
Subject: [PATCH v6 2/4] graph: add feature arc abstraction
Date: Fri, 3 Jan 2025 11:36:05 +0530 [thread overview]
Message-ID: <20250103060612.2671836-3-nsaxena@marvell.com> (raw)
In-Reply-To: <20250103060612.2671836-1-nsaxena@marvell.com>
Feature arc abstraction allows rte_graph based applications to
- Hook feature nodes between the start_node and end_node of an arc
- Feature arcs are created via RTE_GRAPH_FEATURE_ARC_REGISTER()
- Feature nodes are added to an arc via RTE_GRAPH_FEATURE_REGISTER()
- If the application explicitly calls rte_graph_feature_arc_init()
  before rte_graph_create(), all feature arcs and associated feature
  nodes are automatically connected
- If rte_graph_feature_arc_init() is not called, the feature arc module
  has no effect
- The packet path towards feature node(s) is enabled/disabled at
  runtime on a per-interface basis
- More than one feature node can be added/enabled in an arc
- If any feature node is enabled on any interface, feature arc fast path
  APIs provide the next edge for each mbuf
Once DPDK in-built nodes adopt the feature arc abstraction, out-of-tree
nodes can be hooked in a generic manner
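As a sketch of how the registration macros above fit together (the node
objects and process callbacks below, e.g. ip4_rewrite_node and
my_firewall_process, are placeholders assumed to exist elsewhere in the
application; field names follow the headers added by this patch, so
details may differ):

```c
/* Hypothetical registration sketch for one arc with one app feature.
 * All node symbols and process functions are assumed, not provided
 * by this patch.
 */
#include <rte_graph_feature_arc.h>

/* Terminal feature: packets exit the arc here when nothing is enabled */
static struct rte_graph_feature_register end_feature = {
	.feature_name = "ip4-output-end",
	.arc_name = "ip4_output",
	.feature_node = &pkt_drop_node,          /* placeholder end node */
	.feature_process_fn = pkt_drop_process,
};

static struct rte_graph_feature_arc_register sample_arc = {
	.arc_name = "ip4_output",
	.max_indexes = RTE_MAX_ETHPORTS,         /* one index per interface */
	.start_node = &ip4_rewrite_node,         /* assumed start node */
	.start_node_feature_process_fn = ip4_rewrite_feature_process,
	.end_feature = &end_feature,
};
RTE_GRAPH_FEATURE_ARC_REGISTER(sample_arc);

/* An out-of-tree feature hooked between start and end of the arc */
static struct rte_graph_feature_register my_feature = {
	.feature_name = "my-firewall",
	.arc_name = "ip4_output",
	.feature_node = &my_firewall_node,
	.feature_process_fn = my_firewall_process,
};
RTE_GRAPH_FEATURE_REGISTER(my_feature);
```

The application would then call rte_graph_feature_arc_init() before
rte_graph_create(); without that call the registrations above have no
effect, per the commit message.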
Signed-off-by: Nitin Saxena <nsaxena@marvell.com>
---
doc/api/doxy-api-index.md | 2 +
doc/guides/rel_notes/release_25_03.rst | 10 +
lib/graph/graph_feature_arc.c | 1780 ++++++++++++++++++++++
lib/graph/graph_private.h | 4 +
lib/graph/meson.build | 4 +-
lib/graph/rte_graph_feature_arc.h | 552 +++++++
lib/graph/rte_graph_feature_arc_worker.h | 608 ++++++++
lib/graph/version.map | 20 +
8 files changed, 2979 insertions(+), 1 deletion(-)
create mode 100644 lib/graph/graph_feature_arc.c
create mode 100644 lib/graph/rte_graph_feature_arc.h
create mode 100644 lib/graph/rte_graph_feature_arc_worker.h
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f0193502bc..b6a5dedee5 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -213,6 +213,8 @@ The public API headers are grouped by topics:
[table_wm](@ref rte_swx_table_wm.h)
* [graph](@ref rte_graph.h):
[graph_worker](@ref rte_graph_worker.h)
+ [graph_feature_arc](@ref rte_graph_feature_arc.h)
+ [graph_feature_arc_worker](@ref rte_graph_feature_arc_worker.h)
* graph_nodes:
[eth_node](@ref rte_node_eth_api.h),
[ip4_node](@ref rte_node_ip4_api.h),
diff --git a/doc/guides/rel_notes/release_25_03.rst b/doc/guides/rel_notes/release_25_03.rst
index 426dfcd982..205215b5de 100644
--- a/doc/guides/rel_notes/release_25_03.rst
+++ b/doc/guides/rel_notes/release_25_03.rst
@@ -55,6 +55,16 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added feature arc abstraction in graph library.**
+
+ Feature arc abstraction helps ``rte_graph`` based applications to steer
+ packets across different node path(s) based on the features (or protocols)
+ enabled on interfaces. Different feature node paths can be enabled/disabled
+ at runtime on some or all interfaces. This abstraction also helps
+ applications hook ``out-of-tree nodes`` into in-built DPDK node paths
+ in a generic manner.
+
+ * Added ``ip4_output`` feature arc processing in ``ip4_rewrite`` node.
Removed Items
-------------
diff --git a/lib/graph/graph_feature_arc.c b/lib/graph/graph_feature_arc.c
new file mode 100644
index 0000000000..895ec68f86
--- /dev/null
+++ b/lib/graph/graph_feature_arc.c
@@ -0,0 +1,1780 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell International Ltd.
+ */
+
+#include "graph_private.h"
+#include <rte_graph_feature_arc_worker.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+
+#define GRAPH_FEATURE_ARC_INITIALIZER UINT64_MAX
+#define GRAPH_FEATURE_MAX_NUM_PER_ARC (64)
+
+#define connect_graph_nodes(node1, node2, edge, arc_name) \
+ __connect_graph_nodes(node1, node2, edge, arc_name, __LINE__)
+
+#define FEATURE_ARC_MEMZONE_NAME "__rte_feature_arc_main_mz"
+
+#define graph_uint_cast(f) ((unsigned int)f)
+
+#define fdata_from_feat(arc, feat, index) \
+ RTE_GRAPH_FEATURE_TO_FEATURE_DATA(arc, feat, index)
+
+#define feat_dbg graph_dbg
+
+#define FEAT_COND_ERR(cond, ...) \
+ do { \
+ if (cond) \
+ graph_err(__VA_ARGS__); \
+ } while (0)
+
+#define FEAT_ERR(fn, ln, ...) \
+ GRAPH_LOG2(ERR, fn, ln, __VA_ARGS__)
+
+#define FEAT_ERR_JMP(_err, fn, ln, ...) \
+ do { \
+ FEAT_ERR(fn, ln, __VA_ARGS__); \
+ rte_errno = _err; \
+ } while (0)
+
+static struct rte_mbuf_dynfield rte_graph_feature_arc_mbuf_desc = {
+ .name = RTE_GRAPH_FEATURE_ARC_DYNFIELD_NAME,
+ .size = sizeof(struct rte_graph_feature_arc_mbuf_dynfields),
+ .align = alignof(struct rte_graph_feature_arc_mbuf_dynfields),
+};
+
+rte_graph_feature_arc_main_t *__rte_graph_feature_arc_main;
+int __rte_graph_feature_arc_mbuf_dyn_offset = -1;
+
+/* global feature arc list */
+static STAILQ_HEAD(, rte_graph_feature_arc_register) feature_arc_list =
+ STAILQ_HEAD_INITIALIZER(feature_arc_list);
+
+/* global feature list */
+static STAILQ_HEAD(, rte_graph_feature_register) feature_list =
+ STAILQ_HEAD_INITIALIZER(feature_list);
+
+/* validate feature registration */
+static int
+feature_registration_validate(struct rte_graph_feature_register *feat_entry,
+ const char *caller_name, int lineno,
+ int check_node_reg_id, /* check feature_node->id */
+ int check_feat_reg_id /* check feature->feature_node_id */)
+{
+ if (!feat_entry) {
+ FEAT_ERR(caller_name, lineno, "NULL feature reg");
+ return -1;
+ }
+
+ if (!feat_entry->feature_name) {
+ FEAT_ERR(caller_name, lineno,
+ "NULL feature name %p", feat_entry);
+ return -1;
+ }
+
+ if (!feat_entry->arc_name) {
+ FEAT_ERR(caller_name, lineno,
+ "No associated arc provided for feature: %s",
+ feat_entry->feature_name);
+ return -1;
+ }
+
+ if (!feat_entry->feature_process_fn) {
+ FEAT_ERR(caller_name, lineno,
+ "No process function provided for feature: %s",
+ feat_entry->feature_name);
+ return -1;
+ }
+
+ if (!feat_entry->feature_node) {
+ FEAT_ERR(caller_name, lineno,
+ "No feature node provided for feature: %s",
+ feat_entry->feature_name);
+ return -1;
+ }
+
+ if (check_node_reg_id && (feat_entry->feature_node->id == RTE_NODE_ID_INVALID)) {
+ FEAT_ERR(caller_name, lineno,
+ "feature_node with invalid node id found for feature: %s",
+ feat_entry->feature_name);
+ return -1;
+ }
+
+ if (check_feat_reg_id && (feat_entry->feature_node_id == RTE_NODE_ID_INVALID)) {
+ FEAT_ERR(caller_name, lineno,
+ "feature_node_id found invalid for feature: %s",
+ feat_entry->feature_name);
+ return -1;
+ }
+
+ return 0;
+
+}
+
+/* validate arc registration */
+static int
+arc_registration_validate(struct rte_graph_feature_arc_register *reg,
+ const char *caller_name, int lineno)
+{
+ if (!reg->arc_name) {
+ FEAT_ERR_JMP(EINVAL, caller_name, lineno,
+ "feature_arc name cannot be NULL");
+ return -1;
+ }
+
+ if (reg->max_features > GRAPH_FEATURE_MAX_NUM_PER_ARC) {
+ FEAT_ERR_JMP(EAGAIN, caller_name, lineno,
+ "arc: %s, number of features exceeds the maximum",
+ reg->arc_name);
+ return -1;
+ }
+
+ if (!reg->max_indexes) {
+ FEAT_ERR_JMP(EINVAL, caller_name, lineno,
+ "Zero max_indexes found for arc: %s",
+ reg->arc_name);
+ return -1;
+ }
+
+ if (!reg->start_node) {
+ FEAT_ERR_JMP(EINVAL, caller_name, lineno,
+ "start node cannot be NULL for arc: %s",
+ reg->arc_name);
+ return -1;
+ }
+
+ if (!reg->start_node_feature_process_fn) {
+ FEAT_ERR_JMP(EINVAL, caller_name, lineno,
+ "start node feature_process_fn() cannot be NULL for arc: %s",
+ reg->arc_name);
+ return -1;
+ }
+
+ return (feature_registration_validate(reg->end_feature, caller_name, lineno, 0, 0));
+}
+
+/* number of registered feature arcs */
+static int arc_registration_num(void)
+{
+ struct rte_graph_feature_arc_register *entry = NULL;
+ int num = 0;
+
+ STAILQ_FOREACH(entry, &feature_arc_list, next_arc)
+ num++;
+
+ return num;
+}
+
+
+/* lookup arc registration by name */
+static int arc_registration_lookup(const char *arc_name,
+ struct rte_graph_feature_arc_register **arc_entry)
+{
+ struct rte_graph_feature_arc_register *entry = NULL;
+
+ STAILQ_FOREACH(entry, &feature_arc_list, next_arc) {
+ if (arc_registration_validate(entry, __func__, __LINE__) < 0)
+ continue;
+
+ if (!strncmp(entry->arc_name, arc_name, RTE_GRAPH_FEATURE_ARC_NAMELEN)) {
+ if (arc_entry)
+ *arc_entry = entry;
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+
+/* Number of features registered for an arc
+ *
+ * i.e. the number of RTE_GRAPH_FEATURE_REGISTER() calls for an arc
+ */
+static int
+arc_registered_features_num(const char *arc_name, uint32_t *num_features)
+{
+ struct rte_graph_feature_arc_register *arc_reg = NULL;
+ struct rte_graph_feature_register *feat_entry = NULL;
+ uint32_t num = 0;
+
+ /* Check if arc is registered with end_feature */
+ if (!arc_registration_lookup(arc_name, &arc_reg))
+ return -1;
+
+ if (arc_reg->end_feature)
+ num++;
+
+ /* Calculate features other than end_feature added in arc */
+ STAILQ_FOREACH(feat_entry, &feature_list, next_feature) {
+ if (feature_registration_validate(feat_entry, __func__, __LINE__, 1, 0) < 0)
+ continue;
+
+ if (!strncmp(feat_entry->arc_name, arc_name, strlen(feat_entry->arc_name)))
+ num++;
+ }
+
+ if (num_features)
+ *num_features = num;
+
+ return 0;
+}
+
+/* calculate arc size to be allocated */
+static int
+feature_arc_reg_calc_size(struct rte_graph_feature_arc_register *reg, size_t *sz,
+ uint16_t *feat_off, uint16_t *fdata_off, uint32_t *fsz)
+{
+ size_t ff_size = 0, fdata_size = 0;
+
+ /* first feature array per index */
+ ff_size = RTE_ALIGN_CEIL(sizeof(rte_graph_feature_t) * reg->max_indexes,
+ RTE_CACHE_LINE_SIZE);
+
+ /* fdata size per feature */
+ *fsz = (uint32_t)RTE_ALIGN_CEIL(sizeof(struct rte_graph_feature_data) * reg->max_indexes,
+ RTE_CACHE_LINE_SIZE);
+
+ /* Allocate one extra feature_data slot, used during feature disable */
+ fdata_size = (*fsz) * (reg->max_features + 1);
+
+ if (sz)
+ *sz = fdata_size + ff_size + sizeof(struct rte_graph_feature_arc);
+ if (feat_off)
+ *feat_off = sizeof(struct rte_graph_feature_arc);
+ if (fdata_off)
+ *fdata_off = ff_size + sizeof(struct rte_graph_feature_arc);
+
+ return 0;
+}
+
+static rte_graph_feature_t *
+graph_first_feature_ptr_get(struct rte_graph_feature_arc *arc,
+ uint32_t index)
+{
+ return (rte_graph_feature_t *)((uint8_t *)arc + arc->fp_first_feature_offset +
+ (sizeof(rte_graph_feature_t) * index));
+}
+
+static int
+feature_arc_data_reset(struct rte_graph_feature_arc *arc)
+{
+ rte_graph_feature_data_t first_fdata;
+ struct rte_graph_feature_data *fdata;
+ rte_graph_feature_t iter, *f = NULL;
+ uint16_t index;
+
+ arc->runtime_enabled_features = 0;
+
+ for (index = 0; index < arc->max_indexes; index++) {
+ f = graph_first_feature_ptr_get(arc, index);
+ *f = RTE_GRAPH_FEATURE_INVALID;
+ }
+
+ for (iter = 0; iter < arc->max_features; iter++) {
+ first_fdata = fdata_from_feat(arc, iter, 0);
+ for (index = 0; index < arc->max_indexes; index++) {
+ fdata = rte_graph_feature_data_get(arc, first_fdata + index);
+ fdata->next_feature_data = RTE_GRAPH_FEATURE_INVALID;
+ fdata->app_cookie = UINT32_MAX;
+ fdata->next_edge = RTE_EDGE_ID_INVALID;
+ }
+ }
+ return 0;
+}
+
+/*
+ * lookup feature name and get control path node_list as well as feature index
+ * at which it is inserted
+ */
+static int
+nodeinfo_lkup_by_name(struct rte_graph_feature_arc *arc, const char *feat_name,
+ struct rte_graph_feature_node_list **ffinfo, uint32_t *slot)
+{
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t fi = 0;
+
+ if (!feat_name)
+ return -1;
+
+ if (slot)
+ *slot = UINT32_MAX;
+
+ STAILQ_FOREACH(finfo, &arc->all_features, next_feature) {
+ RTE_VERIFY(finfo->feature_arc == arc);
+ if (!strncmp(finfo->feature_name, feat_name, strlen(finfo->feature_name))) {
+ if (ffinfo)
+ *ffinfo = finfo;
+ if (slot)
+ *slot = fi;
+ return 0;
+ }
+ fi++;
+ }
+ return -1;
+}
+
+/* Lookup used only during rte_graph_feature_add() */
+static int
+nodeinfo_add_lookup(struct rte_graph_feature_arc *arc, const char *feat_node_name,
+ struct rte_graph_feature_node_list **ffinfo, uint32_t *slot)
+{
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t fi = 0;
+
+ if (!feat_node_name)
+ return -1;
+
+ if (slot)
+ *slot = 0;
+
+ STAILQ_FOREACH(finfo, &arc->all_features, next_feature) {
+ RTE_VERIFY(finfo->feature_arc == arc);
+ if (!strncmp(finfo->feature_name, feat_node_name, strlen(finfo->feature_name))) {
+ if (ffinfo)
+ *ffinfo = finfo;
+ if (slot)
+ *slot = fi;
+ return 0;
+ }
+ /* Update slot where new feature can be added */
+ if (slot)
+ *slot = fi;
+ fi++;
+ }
+
+ return -1;
+}
+
+/* Get control path node info from provided input feature_index */
+static int
+nodeinfo_lkup_by_index(struct rte_graph_feature_arc *arc, uint32_t feature_index,
+ struct rte_graph_feature_node_list **ppfinfo,
+ const int do_sanity_check)
+{
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t index = 0;
+
+ if (!ppfinfo)
+ return -1;
+
+ *ppfinfo = NULL;
+ STAILQ_FOREACH(finfo, &arc->all_features, next_feature) {
+ /* Check sanity */
+ if (do_sanity_check)
+ if (finfo->finfo_index != index)
+ RTE_VERIFY(0);
+ if (index == feature_index) {
+ *ppfinfo = finfo;
+ return 0;
+ }
+ index++;
+ }
+ return -1;
+}
+
+/* get existing edge from parent_node -> child_node */
+static int
+get_existing_edge(const char *arc_name, rte_node_t parent_node,
+ rte_node_t child_node, rte_edge_t *_edge)
+{
+ char **next_edges = NULL;
+ uint32_t i, count = 0;
+
+ RTE_SET_USED(arc_name);
+
+ count = rte_node_edge_get(parent_node, NULL);
+
+ if (!count)
+ return -1;
+
+ next_edges = malloc(count);
+
+ if (!next_edges)
+ return -1;
+
+ count = rte_node_edge_get(parent_node, next_edges);
+ for (i = 0; i < count; i++) {
+ if (strstr(rte_node_id_to_name(child_node), next_edges[i])) {
+ if (_edge)
+ *_edge = (rte_edge_t)i;
+
+ free(next_edges);
+ return 0;
+ }
+ }
+ free(next_edges);
+
+ return -1;
+}
+
+
+/* prepare feature arc after addition of all features */
+static int
+prepare_feature_arc_before_first_enable(struct rte_graph_feature_arc *arc)
+{
+ struct rte_graph_feature_node_list *lfinfo = NULL;
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t index = 0, iter;
+ rte_edge_t edge;
+
+ STAILQ_FOREACH(lfinfo, &arc->all_features, next_feature) {
+ lfinfo->finfo_index = index;
+ index++;
+ }
+ if (!index) {
+ graph_err("No feature added to arc: %s", arc->feature_arc_name);
+ return -1;
+ }
+
+ nodeinfo_lkup_by_index(arc, index - 1, &lfinfo, 0);
+
+ /* lfinfo should be the info corresponding to end_feature.
+ * Add an edge from every feature node to the end feature node, giving the
+ * fast path an exception route to the end feature node during enable/disable
+ */
+ if (lfinfo->feature_node_id != arc->end_feature.feature_node_id) {
+ graph_err("end_feature node mismatch [found-%s: exp-%s]",
+ rte_node_id_to_name(lfinfo->feature_node_id),
+ rte_node_id_to_name(arc->end_feature.feature_node_id));
+ return -1;
+ }
+
+ STAILQ_FOREACH(finfo, &arc->all_features, next_feature) {
+ if (get_existing_edge(arc->feature_arc_name, arc->start_node->id,
+ finfo->feature_node_id, &edge)) {
+ graph_err("No edge found from %s to %s",
+ rte_node_id_to_name(arc->start_node->id),
+ rte_node_id_to_name(finfo->feature_node_id));
+ return -1;
+ }
+ finfo->edge_to_this_feature = edge;
+
+ if (finfo == lfinfo)
+ continue;
+
+ if (get_existing_edge(arc->feature_arc_name, finfo->feature_node_id,
+ lfinfo->feature_node_id, &edge)) {
+ graph_err("No edge found from %s to %s",
+ rte_node_id_to_name(finfo->feature_node_id),
+ rte_node_id_to_name(lfinfo->feature_node_id));
+ return -1;
+ }
+ finfo->edge_to_last_feature = edge;
+ }
+ /**
+ * Enable end_feature in the control path bitmask
+ * (arc->feature_bit_mask_by_index) but not in the fast path bitmask
+ * arc->fp_feature_enable_bitmask. This is because:
+ * 1. The application may not explicitly enable the end_feature node
+ * 2. However it must be enabled internally so that when a feature is
+ * disabled (say on an interface), next_edge of the data is
+ * updated to the end_feature node and packets can exit the arc.
+ * 3. We do not set the bit for end_feature in the fast path bitmask,
+ * as that would defeat the purpose of the fast path APIs
+ * rte_graph_feature_arc_is_any_feature_enabled() and
+ * rte_graph_feature_arc_is_feature_enabled(), since enabling
+ * end_feature would make these APIs always return "true"
+ */
+ for (iter = 0; iter < arc->max_indexes; iter++)
+ arc->feature_bit_mask_by_index[iter] |= (UINT64_C(1) << lfinfo->finfo_index);
+
+ return 0;
+}
+
+/* feature arc sanity */
+static int
+feature_arc_sanity(rte_graph_feature_arc_t _arc)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main;
+ uint16_t iter;
+
+ if (!__rte_graph_feature_arc_main)
+ return -1;
+
+ if (!arc)
+ return -1;
+
+ for (iter = 0; iter < dm->max_feature_arcs; iter++) {
+ if (arc == rte_graph_feature_arc_get(iter)) {
+ if (arc->feature_arc_index != iter)
+ return -1;
+ if (arc->feature_arc_main != dm)
+ return -1;
+
+ return 0;
+ }
+ }
+ return -1;
+}
+
+/* create or retrieve already existing edge from parent_node -> child_node */
+static int
+__connect_graph_nodes(rte_node_t parent_node, rte_node_t child_node,
+ rte_edge_t *_edge, char *arc_name, int lineno)
+{
+ const char *next_node = NULL;
+ rte_edge_t edge;
+
+ if (!get_existing_edge(arc_name, parent_node, child_node, &edge)) {
+ feat_dbg("\t%s/%d: %s[%u]: \"%s\", edge reused", arc_name, lineno,
+ rte_node_id_to_name(parent_node), edge, rte_node_id_to_name(child_node));
+
+ if (_edge)
+ *_edge = edge;
+
+ return 0;
+ }
+
+ /* Node to be added */
+ next_node = rte_node_id_to_name(child_node);
+
+ edge = rte_node_edge_update(parent_node, RTE_EDGE_ID_INVALID, &next_node, 1);
+
+ if (edge == RTE_EDGE_ID_INVALID) {
+ graph_err("edge invalid");
+ return -1;
+ }
+ edge = rte_node_edge_count(parent_node) - 1;
+
+ feat_dbg("\t%s/%d: %s[%u]: \"%s\", new edge added", arc_name, lineno,
+ rte_node_id_to_name(parent_node), edge, rte_node_id_to_name(child_node));
+
+ if (_edge)
+ *_edge = edge;
+
+ return 0;
+}
+
+/* feature arc initialization */
+static int
+feature_arc_main_init(rte_graph_feature_arc_main_t **pfl, uint32_t max_feature_arcs)
+{
+ rte_graph_feature_arc_main_t *pm = NULL;
+ const struct rte_memzone *mz = NULL;
+ uint32_t i;
+ size_t sz;
+
+ if (!pfl) {
+ graph_err("Invalid input");
+ return -1;
+ }
+
+ __rte_graph_feature_arc_mbuf_dyn_offset =
+ rte_mbuf_dynfield_register(&rte_graph_feature_arc_mbuf_desc);
+
+ if (__rte_graph_feature_arc_mbuf_dyn_offset < 0) {
+ graph_err("rte_graph_feature_arc_dynfield_register failed");
+ return -1;
+ }
+
+ sz = sizeof(rte_graph_feature_arc_main_t) +
+ (sizeof(pm->feature_arcs[0]) * max_feature_arcs);
+
+ mz = rte_memzone_reserve(FEATURE_ARC_MEMZONE_NAME, sz, SOCKET_ID_ANY, 0);
+ if (!mz) {
+ graph_err("memzone reserve failed for feature arc main");
+ return -1;
+ }
+
+ pm = mz->addr;
+ memset(pm, 0, sz);
+
+ for (i = 0; i < max_feature_arcs; i++)
+ pm->feature_arcs[i] = GRAPH_FEATURE_ARC_INITIALIZER;
+
+ pm->max_feature_arcs = max_feature_arcs;
+
+ *pfl = pm;
+
+ return 0;
+}
+
+/* feature arc initialization, public API */
+int
+rte_graph_feature_arc_init(void)
+{
+ struct rte_graph_feature_arc_register *arc_reg = NULL;
+ struct rte_graph_feature_register *feat_reg = NULL;
+ const struct rte_memzone *mz = NULL;
+ int max_feature_arcs;
+ int rc = -1;
+
+ max_feature_arcs = arc_registration_num();
+
+ if (!max_feature_arcs) {
+ graph_err("No feature arcs registered");
+ return -1;
+ }
+
+ if (!__rte_graph_feature_arc_main) {
+ mz = rte_memzone_lookup(FEATURE_ARC_MEMZONE_NAME);
+ if (mz) {
+ __rte_graph_feature_arc_main = mz->addr;
+ __rte_graph_feature_arc_mbuf_dyn_offset =
+ rte_mbuf_dynfield_lookup(RTE_GRAPH_FEATURE_ARC_DYNFIELD_NAME,
+ &rte_graph_feature_arc_mbuf_desc);
+ } else {
+ rc = feature_arc_main_init(&__rte_graph_feature_arc_main, max_feature_arcs);
+ if (rc < 0)
+ return rc;
+ }
+ }
+
+ STAILQ_FOREACH(arc_reg, &feature_arc_list, next_arc) {
+ /* validate end feature */
+ if (feature_registration_validate(arc_reg->end_feature,
+ __func__, __LINE__, 1, 0) < 0)
+ continue;
+
+ if (strncmp(arc_reg->arc_name, arc_reg->end_feature->arc_name,
+ RTE_GRAPH_FEATURE_ARC_NAMELEN)) {
+ feat_dbg("arc-%s: mismatch in arc_name for end_feature: %s",
+ arc_reg->arc_name, arc_reg->end_feature->arc_name);
+ continue;
+ }
+
+ if (!arc_registration_lookup(arc_reg->arc_name, NULL))
+ continue;
+
+ /* If feature name not set, use node name as feature */
+ if (!arc_reg->end_feature->feature_name)
+ arc_reg->end_feature->feature_name =
+ rte_node_id_to_name(arc_reg->end_feature->feature_node_id);
+
+ /* If max_features not set, calculate number of static feature registrations */
+ if (!arc_reg->max_features)
+ arc_registered_features_num(arc_reg->arc_name, &arc_reg->max_features);
+
+ arc_reg->end_feature->feature_node_id = arc_reg->end_feature->feature_node->id;
+
+ rc = rte_graph_feature_arc_create(arc_reg, NULL);
+
+ if (rc < 0)
+ goto arc_cleanup;
+
+ rc = rte_graph_feature_add(arc_reg->end_feature);
+
+ if (rc < 0)
+ goto arc_cleanup;
+ }
+
+ /* First add those features which have no runs_after or runs_before constraints */
+ STAILQ_FOREACH(feat_reg, &feature_list, next_feature) {
+ if (feat_reg->runs_after || feat_reg->runs_before)
+ continue;
+
+ if (feature_registration_validate(feat_reg, __func__, __LINE__, 1, 0) < 0)
+ continue;
+
+ feat_reg->feature_node_id = feat_reg->feature_node->id;
+
+ rc = rte_graph_feature_add(feat_reg);
+
+ if (rc < 0)
+ goto arc_cleanup;
+ }
+ /* Add those features which have either a runs_after or a runs_before constraint */
+ STAILQ_FOREACH(feat_reg, &feature_list, next_feature) {
+ if (!feat_reg->runs_after && !feat_reg->runs_before)
+ continue;
+
+ if (feat_reg->runs_after && feat_reg->runs_before)
+ continue;
+
+ if (feature_registration_validate(feat_reg, __func__, __LINE__, 1, 0) < 0)
+ continue;
+
+ feat_reg->feature_node_id = feat_reg->feature_node->id;
+
+ rc = rte_graph_feature_add(feat_reg);
+
+ if (rc < 0)
+ goto arc_cleanup;
+ }
+ /* Add those features with both runs_after and runs_before constraints */
+ STAILQ_FOREACH(feat_reg, &feature_list, next_feature) {
+ if (!feat_reg->runs_after && !feat_reg->runs_before)
+ continue;
+
+ if ((feat_reg->runs_after && !feat_reg->runs_before) ||
+ (!feat_reg->runs_after && feat_reg->runs_before))
+ continue;
+
+ if (feature_registration_validate(feat_reg, __func__, __LINE__, 1, 0) < 0)
+ continue;
+
+ feat_reg->feature_node_id = feat_reg->feature_node->id;
+
+ rc = rte_graph_feature_add(feat_reg);
+
+ if (rc < 0)
+ goto arc_cleanup;
+ }
+
+ return 0;
+
+arc_cleanup:
+ rte_graph_feature_arc_cleanup();
+
+ return rc;
+}
+
+int
+rte_graph_feature_arc_create(struct rte_graph_feature_arc_register *reg,
+ rte_graph_feature_arc_t *_arc)
+{
+ rte_graph_feature_arc_main_t *dfm = NULL;
+ struct rte_graph_feature_arc *arc = NULL;
+ uint16_t first_feat_off, fdata_off;
+ const struct rte_memzone *mz = NULL;
+ uint16_t iter, arc_index;
+ uint32_t feat_sz = 0;
+ size_t sz;
+
+ if (arc_registration_validate(reg, __func__, __LINE__) < 0)
+ return -1;
+
+ if (!reg->max_features)
+ graph_err("Zero features found for arc \"%s\" create",
+ reg->arc_name);
+
+ if (!__rte_graph_feature_arc_main) {
+ mz = rte_memzone_lookup(FEATURE_ARC_MEMZONE_NAME);
+ if (mz) {
+ __rte_graph_feature_arc_main = mz->addr;
+ } else {
+ graph_err("Call to rte_graph_feature_arc_init() API missing");
+ return -1;
+ }
+ }
+
+ /* See if arc memory is already created */
+ mz = rte_memzone_lookup(reg->arc_name);
+ if (mz) {
+ graph_warn("Feature arc %s already created", reg->arc_name);
+ arc = mz->addr;
+ if (_arc)
+ *_arc = arc->feature_arc_index;
+
+ arc->process_ref_count++;
+
+ return 0;
+ }
+
+ dfm = __rte_graph_feature_arc_main;
+
+ /* threshold check */
+ if (dfm->num_feature_arcs > (dfm->max_feature_arcs - 1))
+ SET_ERR_JMP(EAGAIN, arc_create_err,
+ "%s: max number (%u) of feature arcs reached",
+ reg->arc_name, dfm->max_feature_arcs);
+
+ /* Find the free slot for feature arc */
+ for (iter = 0; iter < dfm->max_feature_arcs; iter++) {
+ if (dfm->feature_arcs[iter] == GRAPH_FEATURE_ARC_INITIALIZER)
+ break;
+ }
+ arc_index = iter;
+
+ if (arc_index >= dfm->max_feature_arcs) {
+ graph_err("No free slot found for num_feature_arc");
+ return -1;
+ }
+
+ /* This should not happen */
+ if (dfm->feature_arcs[arc_index] != GRAPH_FEATURE_ARC_INITIALIZER) {
+ graph_err("Free arc_index: %u is not found free: %p",
+ arc_index, (void *)dfm->feature_arcs[arc_index]);
+ return -1;
+ }
+
+ /* Calculate size of feature arc */
+ feature_arc_reg_calc_size(reg, &sz, &first_feat_off, &fdata_off, &feat_sz);
+
+ mz = rte_memzone_reserve(reg->arc_name, sz, SOCKET_ID_ANY, 0);
+
+ if (!mz) {
+ graph_err("memzone reserve failed for arc: %s of size: %zu",
+ reg->arc_name, sz);
+ return -1;
+ }
+
+ arc = mz->addr;
+
+ memset(arc, 0, sz);
+
+ arc->feature_bit_mask_by_index = rte_malloc(reg->arc_name,
+ sizeof(uint64_t) * reg->max_indexes, 0);
+
+ if (!arc->feature_bit_mask_by_index) {
+ graph_err("%s: rte_malloc failed for feature_bit_mask_alloc", reg->arc_name);
+ rte_memzone_free(mz);
+ return -1;
+ }
+
+ memset(arc->feature_bit_mask_by_index, 0, sizeof(uint64_t) * reg->max_indexes);
+
+ /* override process function with start_node */
+ if (node_override_process_func(reg->start_node->id, reg->start_node_feature_process_fn)) {
+ graph_err("node_override_process_func failed for %s", reg->start_node->name);
+ rte_free(arc->feature_bit_mask_by_index);
+ rte_memzone_free(mz);
+ return -1;
+ }
+ feat_dbg("arc-%s: node-%s process() overridden with %p",
+ reg->arc_name, reg->start_node->name,
+ reg->start_node_feature_process_fn);
+
+ /* Initialize rte_graph port group fixed variables */
+ STAILQ_INIT(&arc->all_features);
+ rte_strscpy(arc->feature_arc_name, reg->arc_name, RTE_GRAPH_FEATURE_ARC_NAMELEN - 1);
+ arc->feature_arc_main = (void *)dfm;
+ arc->start_node = reg->start_node;
+ memcpy(&arc->end_feature, reg->end_feature, sizeof(arc->end_feature));
+ arc->arc_start_process = reg->start_node_feature_process_fn;
+ arc->feature_arc_index = arc_index;
+ arc->arc_size = sz;
+
+ /* reset fast path arc variables */
+ arc->max_features = reg->max_features;
+ arc->max_indexes = reg->max_indexes;
+ arc->fp_first_feature_offset = first_feat_off;
+ arc->fp_feature_data_offset = fdata_off;
+ arc->fp_feature_size = feat_sz;
+
+ arc->process_ref_count++;
+
+ feature_arc_data_reset(arc);
+
+ dfm->feature_arcs[arc->feature_arc_index] = (uintptr_t)arc;
+ dfm->num_feature_arcs++;
+
+ if (_arc)
+ *_arc = (rte_graph_feature_arc_t)arc_index;
+
+ feat_dbg("Feature arc %s[%p] created with max_features: %u and indexes: %u",
+ arc->feature_arc_name, (void *)arc, arc->max_features, arc->max_indexes);
+
+ return 0;
+
+arc_create_err:
+ return -1;
+}
+
+int
+rte_graph_feature_add(struct rte_graph_feature_register *freg)
+{
+ struct rte_graph_feature_node_list *after_finfo = NULL, *before_finfo = NULL;
+ struct rte_graph_feature_node_list *temp = NULL, *finfo = NULL;
+ char feature_name[3 * RTE_GRAPH_FEATURE_ARC_NAMELEN];
+ const char *runs_after = NULL, *runs_before = NULL;
+ struct rte_graph_feature_arc *arc = NULL;
+ uint32_t slot = UINT32_MAX, add_flag;
+ rte_graph_feature_arc_t _arc;
+ uint32_t num_features = 0;
+ const char *nodename = NULL;
+ rte_edge_t edge = -1;
+ int rc = 0;
+
+ if (feature_registration_validate(freg, __func__, __LINE__, 0, 1) < 0)
+ return -1;
+
+ /* arc is valid */
+ if (rte_graph_feature_arc_lookup_by_name(freg->arc_name, &_arc)) {
+ graph_err("%s_add: feature arc %s not found",
+ freg->feature_name, freg->arc_name);
+ return -1;
+ }
+
+ if (feature_arc_sanity(_arc)) {
+ graph_err("invalid feature arc: 0x%x", _arc);
+ return -1;
+ }
+
+ arc = rte_graph_feature_arc_get(_arc);
+
+ if (arc->runtime_enabled_features) {
+ graph_err("adding features after enabling any one of them is not supported");
+ return -1;
+ }
+
+ /* When the application calls rte_graph_feature_add() directly */
+ if (freg->feature_node_id == RTE_NODE_ID_INVALID) {
+ graph_err("%s/%s: Invalid feature_node_id set for %s",
+ freg->arc_name, freg->feature_name, __func__);
+ return -1;
+ }
+
+ if ((freg->runs_after != NULL) && (freg->runs_before != NULL) &&
+ (freg->runs_after == freg->runs_before)) {
+ graph_err("runs_after and runs_before cannot be the same [%s:%s]", freg->runs_after,
+ freg->runs_before);
+ return -1;
+ }
+
+ num_features = rte_graph_feature_arc_num_features(_arc);
+ nodeinfo_lkup_by_index(arc, num_features - 1, &temp, 0);
+
+ /* Ensure the feature is not added after the end_feature */
+ if (num_features && (freg->runs_after != NULL) &&
+ (strncmp(freg->runs_after, temp->feature_name,
+ RTE_GRAPH_FEATURE_ARC_NAMELEN) == 0)) {
+ graph_err("Feature %s cannot be added after end_feature %s",
+ freg->feature_name, freg->runs_after);
+ return -1;
+ }
+
+ if (!nodeinfo_add_lookup(arc, freg->feature_name, &finfo, &slot)) {
+ graph_err("%s/%s feature already added", arc->feature_arc_name, freg->feature_name);
+ return -1;
+ }
+
+ if (slot >= arc->max_features) {
+ graph_err("%s: Max features %u added to feature arc",
+ arc->feature_arc_name, slot);
+ return -1;
+ }
+
+ if (freg->feature_node_id == arc->start_node->id) {
+ graph_err("%s/%s: Feature node and start node are the same: %u",
+ freg->arc_name, freg->feature_name, freg->feature_node_id);
+ return -1;
+ }
+
+ nodename = rte_node_id_to_name(freg->feature_node_id);
+
+ feat_dbg("%s: adding feature node: %s at feature index: %u", arc->feature_arc_name,
+ nodename, slot);
+
+ if (connect_graph_nodes(arc->start_node->id, freg->feature_node_id, &edge,
+ arc->feature_arc_name)) {
+ graph_err("unable to connect %s -> %s", arc->start_node->name, nodename);
+ return -1;
+ }
+
+ snprintf(feature_name, sizeof(feature_name), "%s-%s-finfo",
+ arc->feature_arc_name, freg->feature_name);
+
+ finfo = rte_malloc(feature_name, sizeof(*finfo), 0);
+ if (!finfo) {
+ graph_err("%s/%s: rte_malloc failed", arc->feature_arc_name, freg->feature_name);
+ return -1;
+ }
+
+ memset(finfo, 0, sizeof(*finfo));
+
+ rte_strscpy(finfo->feature_name, freg->feature_name, RTE_GRAPH_FEATURE_ARC_NAMELEN - 1);
+ finfo->feature_arc = (void *)arc;
+ finfo->feature_node_id = freg->feature_node_id;
+ finfo->feature_node_process_fn = freg->feature_process_fn;
+ finfo->edge_to_this_feature = RTE_EDGE_ID_INVALID;
+ finfo->edge_to_last_feature = RTE_EDGE_ID_INVALID;
+ finfo->notifier_cb = freg->notifier_cb;
+
+ runs_before = freg->runs_before;
+ runs_after = freg->runs_after;
+
+ /*
+ * if no constraints given and provided feature is not the first feature,
+ * explicitly set "runs_before" as end_feature.
+ *
+ * Handles the case:
+ * arc_create(f1);
+ * add(f2, NULL, NULL);
+ */
+ if (!runs_after && !runs_before && num_features)
+ runs_before = rte_graph_feature_arc_feature_to_name(_arc, num_features - 1);
+
+ /* Check for before and after constraints */
+ if (runs_before) {
+ /* runs_before sanity */
+ if (nodeinfo_lkup_by_name(arc, runs_before, &before_finfo, NULL))
+ SET_ERR_JMP(EINVAL, finfo_free,
+ "Invalid before feature name: %s", runs_before);
+
+ if (!before_finfo)
+ SET_ERR_JMP(EINVAL, finfo_free,
+ "runs_before %s does not exist", runs_before);
+
+ /*
+ * Starting from 0 to runs_before, continue connecting edges
+ */
+ add_flag = 1;
+ STAILQ_FOREACH(temp, &arc->all_features, next_feature) {
+ if (!add_flag)
+ /* After seeing "runs_before", connect finfo to temp */
+ connect_graph_nodes(finfo->feature_node_id, temp->feature_node_id,
+ NULL, arc->feature_arc_name);
+ /*
+ * As soon as we see runs_before, stop adding edges
+ */
+ if (!strncmp(temp->feature_name, runs_before, RTE_GRAPH_NAMESIZE)) {
+ if (!connect_graph_nodes(finfo->feature_node_id,
+ temp->feature_node_id,
+ &edge, arc->feature_arc_name))
+ add_flag = 0;
+ }
+
+ if (add_flag)
+ /* Nodes before seeing "run_before" are connected to finfo */
+ connect_graph_nodes(temp->feature_node_id, finfo->feature_node_id,
+ NULL, arc->feature_arc_name);
+ }
+ }
+
+ if (runs_after) {
+ if (nodeinfo_lkup_by_name(arc, runs_after, &after_finfo, NULL))
+ SET_ERR_JMP(EINVAL, finfo_free,
+ "Invalid after feature_name %s", runs_after);
+
+ if (!after_finfo)
+ SET_ERR_JMP(EINVAL, finfo_free,
+ "runs_after %s does not exist", runs_after);
+
+ /* Starting from runs_after to the end, continue connecting edges */
+ add_flag = 0;
+ STAILQ_FOREACH(temp, &arc->all_features, next_feature) {
+ if (add_flag)
+ /* "runs_after" already seen: add all remaining
+ * features as next nodes to finfo
+ */
+ connect_graph_nodes(finfo->feature_node_id, temp->feature_node_id,
+ NULL, arc->feature_arc_name);
+ else
+ /* Connect initial nodes to the newly added node */
+ connect_graph_nodes(temp->feature_node_id, finfo->feature_node_id,
+ NULL, arc->feature_arc_name);
+
+ /* As soon as we see runs_after, start adding edges
+ * from the next iteration
+ */
+ if (!strncmp(temp->feature_name, runs_after, RTE_GRAPH_NAMESIZE))
+ add_flag = 1;
+ }
+
+ /* add feature next to runs_after */
+ STAILQ_INSERT_AFTER(&arc->all_features, after_finfo, finfo, next_feature);
+ } else {
+ if (before_finfo) {
+ /* add finfo before "before_finfo" element in the list */
+ after_finfo = NULL;
+ STAILQ_FOREACH(temp, &arc->all_features, next_feature) {
+ if (before_finfo == temp) {
+ if (after_finfo)
+ STAILQ_INSERT_AFTER(&arc->all_features, after_finfo,
+ finfo, next_feature);
+ else
+ STAILQ_INSERT_HEAD(&arc->all_features, finfo,
+ next_feature);
+
+ return 0;
+ }
+ after_finfo = temp;
+ }
+ } else {
+ /* Very first feature just needs to be added to list */
+ STAILQ_INSERT_TAIL(&arc->all_features, finfo, next_feature);
+ }
+ }
+ /* override node_process_fn */
+ rc = node_override_process_func(finfo->feature_node_id, freg->feature_process_fn);
+ if (rc < 0) {
+ graph_err("node_override_process_func failed for %s", freg->feature_name);
+ goto finfo_free;
+ }
+
+ if (freg->feature_node)
+ feat_dbg("arc-%s: feature %s node %s process() overridden with %p",
+ freg->arc_name, freg->feature_name, freg->feature_node->name,
+ freg->feature_process_fn);
+ else
+ feat_dbg("arc-%s: feature %s nodeid %u process() overriding with %p",
+ freg->arc_name, freg->feature_name,
+ freg->feature_node_id, freg->feature_process_fn);
+
+ return 0;
+finfo_free:
+ rte_free(finfo);
+
+ return -1;
+}
+
+int
+rte_graph_feature_lookup(rte_graph_feature_arc_t _arc, const char *feature_name,
+ rte_graph_feature_t *feat)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t slot;
+
+ if (!arc)
+ return -1;
+
+ if (!nodeinfo_lkup_by_name(arc, feature_name, &finfo, &slot)) {
+ *feat = (rte_graph_feature_t) slot;
+ return 0;
+ }
+
+ return -1;
+}
+
+static int
+feature_enable_disable_validate(rte_graph_feature_arc_t _arc, uint32_t index,
+ const char *feature_name,
+ int is_enable_disable, bool emit_logs)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t slot, last_end_feature;
+
+ if (!arc)
+ return -EINVAL;
+
+ /* validate _arc */
+ if (arc->feature_arc_main != __rte_graph_feature_arc_main) {
+ FEAT_COND_ERR(emit_logs, "invalid feature arc: 0x%x", _arc);
+ return -EINVAL;
+ }
+
+ /* validate index */
+ if (index >= arc->max_indexes) {
+ FEAT_COND_ERR(emit_logs, "%s: Invalid provided index: %u >= %u configured",
+ arc->feature_arc_name, index, arc->max_indexes);
+ return -1;
+ }
+
+ /* validate feature_name is already added or not */
+ if (nodeinfo_lkup_by_name(arc, feature_name, &finfo, &slot)) {
+ FEAT_COND_ERR(emit_logs, "%s: No feature %s added",
+ arc->feature_arc_name, feature_name);
+ return -EINVAL;
+ }
+
+ if (!finfo) {
+ FEAT_COND_ERR(emit_logs, "%s: No feature: %s found to enable/disable",
+ arc->feature_arc_name, feature_name);
+ return -EINVAL;
+ }
+
+ /* slot should be in valid range */
+ if (slot >= arc->max_features) {
+ FEAT_COND_ERR(emit_logs, "%s/%s: Invalid free slot %u(max=%u) for feature",
+ arc->feature_arc_name, feature_name, slot, arc->max_features);
+ return -EINVAL;
+ }
+
+ /* slot must be below GRAPH_FEATURE_MAX_NUM_PER_ARC, i.e. in range 0-63 */
+ if (slot > (GRAPH_FEATURE_MAX_NUM_PER_ARC - 1)) {
+ FEAT_COND_ERR(emit_logs, "%s/%s: Invalid slot: %u", arc->feature_arc_name,
+ feature_name, slot);
+ return -EINVAL;
+ }
+
+ last_end_feature = rte_fls_u64(arc->feature_bit_mask_by_index[index]);
+ /* Re-enable is allowed only for the current end feature node;
+ * rte_fls_u64() returns a 1-based bit position, hence the -1
+ */
+ if (is_enable_disable &&
+ (arc->feature_bit_mask_by_index[index] & RTE_BIT64(slot)) &&
+ (slot != (last_end_feature - 1))) {
+ FEAT_COND_ERR(emit_logs, "%s: %s already enabled on index: %u",
+ arc->feature_arc_name, feature_name, index);
+ return -1;
+ }
+
+ if (!is_enable_disable && !arc->runtime_enabled_features) {
+ FEAT_COND_ERR(emit_logs, "%s: No feature enabled to disable",
+ arc->feature_arc_name);
+ return -1;
+ }
+
+ if (!is_enable_disable && !(arc->feature_bit_mask_by_index[index] & RTE_BIT64(slot))) {
+ FEAT_COND_ERR(emit_logs, "%s: %s not enabled in bitmask for index: %u",
+ arc->feature_arc_name, feature_name, index);
+ return -1;
+ }
+
+ /* If no feature has been enabled, avoid extra sanity checks */
+ if (!arc->runtime_enabled_features)
+ return 0;
+
+ if (finfo->finfo_index != slot) {
+ FEAT_COND_ERR(emit_logs,
+ "%s/%s: lookup slot mismatch for finfo idx: %u and lookup slot: %u",
+ arc->feature_arc_name, feature_name, finfo->finfo_index, slot);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+refill_fastpath_data(struct rte_graph_feature_arc *arc, uint32_t feature_bit,
+ uint16_t index /* array index */, int is_enable_disable)
+{
+ struct rte_graph_feature_node_list *finfo = NULL, *prev_finfo = NULL;
+ struct rte_graph_feature_data *gfd = NULL, *prev_gfd = NULL;
+ RTE_ATOMIC(rte_graph_feature_t) *first_feat = NULL;
+ uint64_t bitmask = 0, prev_bitmask, next_bitmask;
+ uint32_t fi = 0, prev_fi = 0, next_fi = 0, ffi = 0;
+ rte_edge_t edge = UINT16_MAX;
+
+ if (is_enable_disable)
+ bitmask = RTE_BIT64(feature_bit);
+
+ /* set bit from (feature_bit + 1) to 64th bit */
+ next_bitmask = UINT64_MAX << (feature_bit + 1);
+
+ /* set bits from 0 to (feature_bit - 1) */
+ prev_bitmask = ((UINT64_MAX & ~next_bitmask) & ~(RTE_BIT64(feature_bit)));
+
+ next_bitmask &= arc->feature_bit_mask_by_index[index];
+ prev_bitmask &= arc->feature_bit_mask_by_index[index];
+
+ /* Pick the lowest bit set in next_bitmask */
+ if (rte_bsf64_safe(next_bitmask, &next_fi))
+ bitmask |= RTE_BIT64(next_fi);
+
+ /* Pick the highest bit set in prev_bitmask */
+ prev_fi = rte_fls_u64(prev_bitmask);
+ if (prev_fi)
+ bitmask |= RTE_BIT64(prev_fi - 1);
+
+ /* for each feature set for index, set fast path data */
+ prev_fi = RTE_GRAPH_FEATURE_INVALID;
+ while (rte_bsf64_safe(bitmask, &fi)) {
+ gfd = rte_graph_feature_data_get(arc, fdata_from_feat(arc, fi, index));
+
+ RTE_VERIFY(!nodeinfo_lkup_by_index(arc, fi, &finfo, 1));
+
+ /* Reset next edge to point to last feature node so that packet can exit from arc */
+ rte_atomic_store_explicit(&gfd->next_edge, finfo->edge_to_last_feature,
+ rte_memory_order_relaxed);
+
+ /*
+ * Reset next feature data
+ */
+ rte_atomic_store_explicit(&gfd->next_feature_data, RTE_GRAPH_FEATURE_DATA_INVALID,
+ rte_memory_order_relaxed);
+
+ /* If previous feature_index was valid in last loop */
+ if (prev_fi != RTE_GRAPH_FEATURE_INVALID) {
+ prev_gfd = rte_graph_feature_data_get(arc, fdata_from_feat(arc, prev_fi,
+ index));
+
+ /*
+ * Get edge of previous feature node connecting
+ * to this feature node
+ */
+ RTE_VERIFY(!nodeinfo_lkup_by_index(arc, prev_fi, &prev_finfo, 1));
+
+ if (!get_existing_edge(arc->feature_arc_name,
+ prev_finfo->feature_node_id,
+ finfo->feature_node_id, &edge)) {
+ feat_dbg("\t[%s/index:%2u,cookie:%u]: (%u->%u)%s[%u] = %s",
+ arc->feature_arc_name, index,
+ prev_gfd->app_cookie, prev_fi, fi,
+ rte_node_id_to_name(prev_finfo->feature_node_id),
+ edge, rte_node_id_to_name(finfo->feature_node_id));
+
+ /*
+ * In fast path, nodes should always call
+ * rte_graph_feature_data_next_feature_get() to
+ * get next feature data
+ *
+ * So next feature data i.e. gfd should have the edge going from
+ * prev_feature to current/next feature node
+ */
+ rte_atomic_store_explicit(&gfd->next_edge,
+ edge,
+ rte_memory_order_relaxed);
+
+ /*
+ * Fill current feature as next enabled
+ * feature to previous one
+ */
+ rte_atomic_store_explicit(&prev_gfd->next_feature_data,
+ fdata_from_feat(arc, fi, index),
+ rte_memory_order_relaxed);
+
+ prev_fi = fi;
+ } else {
+ /* Should not fail */
+ RTE_VERIFY(0);
+ }
+ }
+ /* On first feature
+ * 1. Update fdata with next_edge from start_node to feature node
+ * 2. Update first enabled feature in its index array
+ */
+ if (rte_bsf64_safe(arc->feature_bit_mask_by_index[index], &ffi)) {
+ /* If fi is first feature */
+ if (ffi == fi) {
+ feat_dbg("\t[%s/index:%2u,cookie:%u]: (->%u)%s[%u]=%s",
+ arc->feature_arc_name, index,
+ gfd->app_cookie, fi,
+ arc->start_node->name, finfo->edge_to_this_feature,
+ rte_node_id_to_name(finfo->feature_node_id));
+
+ gfd = rte_graph_feature_data_get(arc,
+ fdata_from_feat(arc, fi, index));
+
+ /* add next edge into feature data
+ * First set feature data then first feature memory
+ */
+ rte_atomic_store_explicit(&gfd->next_edge,
+ finfo->edge_to_this_feature,
+ rte_memory_order_relaxed);
+
+ first_feat = graph_first_feature_ptr_get(arc, index);
+
+ rte_atomic_store_explicit(first_feat, fi,
+ rte_memory_order_relaxed);
+ }
+
+ prev_fi = fi;
+ }
+ /* Clear current feature index */
+ bitmask &= ~RTE_BIT64(fi);
+ }
+
+ return 0;
+}
+
+int
+rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index,
+ const char *feature_name, uint32_t app_cookie,
+ struct rte_rcu_qsbr *qsbr)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_node_list *finfo = NULL;
+ struct rte_graph_feature_data *gfd = NULL;
+ uint64_t bitmask;
+ uint32_t slot;
+
+ if (!arc) {
+ graph_err("Invalid feature arc: 0x%x", _arc);
+ return -1;
+ }
+
+ feat_dbg("%s: Enabling feature: %s for index: %u",
+ arc->feature_arc_name, feature_name, index);
+
+ if ((!arc->runtime_enabled_features &&
+ (prepare_feature_arc_before_first_enable(arc) < 0)))
+ return -1;
+
+ if (feature_enable_disable_validate(_arc, index, feature_name, 1 /* enable */, true))
+ return -1;
+
+ /* This should not fail as validate() has passed */
+ if (nodeinfo_lkup_by_name(arc, feature_name, &finfo, &slot))
+ RTE_VERIFY(0);
+
+ gfd = rte_graph_feature_data_get(arc, fdata_from_feat(arc, slot, index));
+
+ /* Set current app_cookie */
+ rte_atomic_store_explicit(&gfd->app_cookie, app_cookie, rte_memory_order_relaxed);
+
+ /* Set bitmask in control path bitmask */
+ rte_bit_relaxed_set64(graph_uint_cast(slot), &arc->feature_bit_mask_by_index[index]);
+
+ refill_fastpath_data(arc, slot, index, 1 /* enable */);
+
+ /* On the very first enable instance of this feature, publish it
+ * in the fast path enable bitmask
+ */
+ if (!finfo->ref_count) {
+ bitmask = rte_atomic_load_explicit(&arc->fp_feature_enable_bitmask,
+ rte_memory_order_relaxed);
+
+ bitmask |= RTE_BIT64(slot);
+
+ rte_atomic_store_explicit(&arc->fp_feature_enable_bitmask,
+ bitmask, rte_memory_order_relaxed);
+ }
+
+ /* Slow path updates */
+ arc->runtime_enabled_features++;
+
+ /* Increase feature node info reference count */
+ finfo->ref_count++;
+
+ if (qsbr)
+ rte_rcu_qsbr_synchronize(qsbr, RTE_QSBR_THRID_INVALID);
+
+ if (finfo->notifier_cb)
+ finfo->notifier_cb(arc->feature_arc_name, finfo->feature_name, index,
+ true /* enable */, gfd->app_cookie);
+
+ return 0;
+}
+
+int
+rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, const char *feature_name,
+ struct rte_rcu_qsbr *qsbr)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_data *gfd = NULL, *dummy_gfd = NULL;
+ struct rte_graph_feature_node_list *finfo = NULL;
+ rte_graph_feature_data_t dummy_fdata;
+ uint32_t slot, last_end_feature;
+ uint64_t bitmask;
+
+ if (!arc) {
+ graph_err("Invalid feature arc: 0x%x", _arc);
+ return -1;
+ }
+ feat_dbg("%s: Disable feature: %s for index: %u",
+ arc->feature_arc_name, feature_name, index);
+
+ if (feature_enable_disable_validate(_arc, index, feature_name, 0, true))
+ return -1;
+
+ if (nodeinfo_lkup_by_name(arc, feature_name, &finfo, &slot))
+ return -1;
+
+ /* If feature is not the end feature, clear it in the control plane
+ * bitmask; rte_fls_u64() returns a 1-based bit position, hence the -1
+ */
+ last_end_feature = rte_fls_u64(arc->feature_bit_mask_by_index[index]);
+ if (slot != (last_end_feature - 1))
+ rte_bit_relaxed_clear64(graph_uint_cast(slot),
+ &arc->feature_bit_mask_by_index[index]);
+
+ /* We have allocated one extra feature data slot; get the dummy feature data */
+ dummy_fdata = fdata_from_feat(arc, last_end_feature + 1, index);
+ dummy_gfd = __rte_graph_feature_data_get(arc, dummy_fdata);
+ gfd = rte_graph_feature_data_get(arc, fdata_from_feat(arc, slot, index));
+
+ /*
+ * Packets may already have reached the feature node being disabled.
+ * Steer those packets to the end feature node so that they can exit
+ * the arc
+ * - First, reset next_edge of the dummy feature data to point to the
+ * end feature node
+ * - Second, reset next_feature_data of the feature being disabled to
+ * the dummy feature data
+ */
+ rte_atomic_store_explicit(&dummy_gfd->next_edge, finfo->edge_to_last_feature,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&gfd->next_feature_data, dummy_fdata,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&dummy_gfd->next_feature_data, RTE_GRAPH_FEATURE_DATA_INVALID,
+ rte_memory_order_relaxed);
+
+ /* Now we can unwire the fast path */
+ refill_fastpath_data(arc, slot, index, 0 /* disable */);
+
+ finfo->ref_count--;
+
+ /* When the last enable instance of this feature is disabled, clear
+ * its bit in the u64 fast path enable bitmask
+ */
+ if (!finfo->ref_count) {
+ bitmask = rte_atomic_load_explicit(&arc->fp_feature_enable_bitmask,
+ rte_memory_order_relaxed);
+ bitmask &= ~(RTE_BIT64(slot));
+ rte_atomic_store_explicit(&arc->fp_feature_enable_bitmask, bitmask,
+ rte_memory_order_relaxed);
+ }
+
+ if (qsbr)
+ rte_rcu_qsbr_synchronize(qsbr, RTE_QSBR_THRID_INVALID);
+
+ /* Reset current gfd after rcu synchronization */
+ rte_atomic_store_explicit(&gfd->next_edge, RTE_GRAPH_FEATURE_DATA_INVALID,
+ rte_memory_order_relaxed);
+
+ /* Call notifier cb with valid app_cookie */
+ if (finfo->notifier_cb)
+ finfo->notifier_cb(arc->feature_arc_name, finfo->feature_name, index,
+ false /* disable */, gfd->app_cookie);
+
+ /* Reset app_cookie later after calling notifier_cb */
+ rte_atomic_store_explicit(&gfd->app_cookie, UINT32_MAX, rte_memory_order_relaxed);
+
+ arc->runtime_enabled_features--;
+
+ return 0;
+}
+
+int
+rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main;
+ struct rte_graph_feature_node_list *node_info = NULL;
+ uint32_t iter;
+
+ if (!arc) {
+ graph_err("Invalid feature arc: 0x%x", _arc);
+ return -1;
+ }
+ arc->process_ref_count--;
+
+ if (arc->process_ref_count)
+ return arc->process_ref_count;
+
+ while (!STAILQ_EMPTY(&arc->all_features)) {
+ node_info = STAILQ_FIRST(&arc->all_features);
+ STAILQ_REMOVE_HEAD(&arc->all_features, next_feature);
+ if (node_info->notifier_cb) {
+ for (iter = 0; iter < arc->max_indexes; iter++) {
+ if (!(arc->feature_bit_mask_by_index[iter] &
+ RTE_BIT64(node_info->finfo_index)))
+ continue;
+
+ node_info->notifier_cb(arc->feature_arc_name,
+ node_info->feature_name,
+ 0, false /* disable */, 0);
+ }
+ }
+ rte_free(node_info);
+ }
+
+ dm->feature_arcs[arc->feature_arc_index] = GRAPH_FEATURE_ARC_INITIALIZER;
+
+ rte_free(arc->feature_bit_mask_by_index);
+
+ rte_memzone_free(rte_memzone_lookup(arc->feature_arc_name));
+
+ return 0;
+}
+
+int
+rte_graph_feature_arc_cleanup(void)
+{
+ rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main;
+ int dont_free = 0;
+ uint32_t iter;
+
+ if (!__rte_graph_feature_arc_main)
+ return -1;
+
+ for (iter = 0; iter < dm->max_feature_arcs; iter++) {
+ if (dm->feature_arcs[iter] == GRAPH_FEATURE_ARC_INITIALIZER)
+ continue;
+
+ if (rte_graph_feature_arc_destroy(dm->feature_arcs[iter]) > 0)
+ dont_free++;
+ }
+ if (!dont_free) {
+ rte_memzone_free(rte_memzone_lookup(FEATURE_ARC_MEMZONE_NAME));
+ __rte_graph_feature_arc_main = NULL;
+ }
+
+ return 0;
+}
+
+int
+rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc_t *_arc)
+{
+ struct rte_graph_feature_arc *arc = NULL;
+ const struct rte_memzone *mz = NULL;
+ rte_graph_feature_arc_main_t *dm;
+ uint32_t iter;
+
+ if (_arc)
+ *_arc = RTE_GRAPH_FEATURE_ARC_INITIALIZER;
+
+ if (!__rte_graph_feature_arc_main) {
+ mz = rte_memzone_lookup(FEATURE_ARC_MEMZONE_NAME);
+ if (mz)
+ __rte_graph_feature_arc_main = mz->addr;
+ else
+ return -1;
+ }
+
+ dm = __rte_graph_feature_arc_main;
+
+ for (iter = 0; iter < dm->max_feature_arcs; iter++) {
+ arc = rte_graph_feature_arc_get(iter);
+
+ if (!arc)
+ continue;
+
+ if (strcmp(arc->feature_arc_name, arc_name) == 0) {
+ if (_arc)
+ *_arc = arc->feature_arc_index;
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+uint32_t
+rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+
+ if (!arc) {
+ graph_err("Invalid feature arc: 0x%x", _arc);
+ return 0;
+ }
+
+ return arc->runtime_enabled_features;
+}
+
+uint32_t
+rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t count = 0;
+
+ if (!arc) {
+ graph_err("Invalid feature arc: 0x%x", _arc);
+ return 0;
+ }
+
+ STAILQ_FOREACH(finfo, &arc->all_features, next_feature)
+ count++;
+
+ return count;
+}
+
+char *
+rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t slot = feat;
+
+ if (!arc)
+ return NULL;
+
+ if (feat >= rte_graph_feature_arc_num_features(_arc)) {
+ graph_err("%s: feature %u does not exist", arc->feature_arc_name, feat);
+ return NULL;
+ }
+ if (!nodeinfo_lkup_by_index(arc, slot, &finfo, 0/* ignore sanity*/))
+ return finfo->feature_name;
+
+ return NULL;
+}
+
+int
+rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat,
+ rte_node_t *node)
+{
+ struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+ struct rte_graph_feature_node_list *finfo = NULL;
+ uint32_t slot = feat;
+
+ if (!arc)
+ return -1;
+
+ if (node)
+ *node = RTE_NODE_ID_INVALID;
+
+ if (feat >= rte_graph_feature_arc_num_features(_arc)) {
+ graph_err("%s: feature %u does not exist", arc->feature_arc_name, feat);
+ return -1;
+ }
+ if (!nodeinfo_lkup_by_index(arc, slot, &finfo, 0/* ignore sanity*/)) {
+ if (node)
+ *node = finfo->feature_node_id;
+ return 0;
+ }
+ return -1;
+}
+
+void __rte_graph_feature_arc_register(struct rte_graph_feature_arc_register *reg,
+ const char *caller_name, int lineno)
+{
+ if (!reg) {
+ FEAT_ERR(caller_name, lineno, "NULL feature arc register");
+ return;
+ }
+
+ if (!reg->arc_name) {
+ FEAT_ERR(caller_name, lineno, "NULL feature arc name");
+ return;
+ }
+
+ if (!reg->max_indexes) {
+ FEAT_ERR(caller_name, lineno, "No indexes provided for arc %s",
+ reg->arc_name);
+ return;
+ }
+
+ /* reg->max_features is calculated in rte_graph_feature_arc_init() */
+ if (!reg->start_node) {
+ FEAT_ERR(caller_name, lineno, "No start node provided for arc %s",
+ reg->arc_name);
+ return;
+ }
+
+ if (!reg->start_node_feature_process_fn) {
+ FEAT_ERR(caller_name, lineno,
+ "No start node process function provided for arc %s",
+ reg->arc_name);
+ return;
+ }
+
+ if (!reg->end_feature) {
+ FEAT_ERR(caller_name, lineno,
+ "No end feature provided for arc %s", reg->arc_name);
+ return;
+ }
+
+ if (!reg->end_feature->feature_name) {
+ FEAT_ERR(caller_name, lineno,
+ "No end feature name provided for arc %s", reg->arc_name);
+ return;
+ }
+
+ if (!reg->end_feature->feature_process_fn) {
+ FEAT_ERR(caller_name, lineno,
+ "No end feature process function provided for arc %s",
+ reg->arc_name);
+ return;
+ }
+
+ if (!reg->end_feature->arc_name) {
+ FEAT_ERR(caller_name, lineno,
+ "No end feature arc name provided for arc %s", reg->arc_name);
+ return;
+ }
+
+ STAILQ_INSERT_TAIL(&feature_arc_list, reg, next_arc);
+}
+
+void __rte_graph_feature_register(struct rte_graph_feature_register *reg,
+ const char *caller_name, int lineno)
+{
+ if (feature_registration_validate(reg, caller_name, lineno, 0, 0) < 0)
+ return;
+
+ /* Add to the feature_list */
+ STAILQ_INSERT_TAIL(&feature_list, reg, next_feature);
+}
+
+uint32_t
+rte_graph_feature_arc_names_get(char *arc_names[])
+{
+ rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main;
+ struct rte_graph_feature_arc *arc = NULL;
+ uint32_t count, num_arcs;
+
+ if (!__rte_graph_feature_arc_main)
+ return 0;
+
+ for (count = 0, num_arcs = 0; count < dm->max_feature_arcs; count++)
+ if (dm->feature_arcs[count] != GRAPH_FEATURE_ARC_INITIALIZER)
+ num_arcs++;
+
+ if (!num_arcs)
+ return 0;
+
+ if (!arc_names)
+ return sizeof(char *) * num_arcs;
+
+ for (count = 0, num_arcs = 0; count < dm->max_feature_arcs; count++) {
+ if (dm->feature_arcs[count] != GRAPH_FEATURE_ARC_INITIALIZER) {
+ arc = rte_graph_feature_arc_get(count);
+ arc_names[num_arcs] = arc->feature_arc_name;
+ num_arcs++;
+ }
+ }
+ return num_arcs;
+}
diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h
index ceff0c8f50..a94d7c867f 100644
--- a/lib/graph/graph_private.h
+++ b/lib/graph/graph_private.h
@@ -24,6 +24,10 @@ extern int rte_graph_logtype;
RTE_LOG_LINE_PREFIX(level, GRAPH, \
"%s():%u ", __func__ RTE_LOG_COMMA __LINE__, __VA_ARGS__)
+#define GRAPH_LOG2(level, _fname, _linenum, ...) \
+ RTE_LOG_LINE_PREFIX(level, GRAPH, \
+ "%s():%u ", _fname RTE_LOG_COMMA _linenum, __VA_ARGS__)
+
#define graph_err(...) GRAPH_LOG(ERR, __VA_ARGS__)
#define graph_warn(...) GRAPH_LOG(WARNING, __VA_ARGS__)
#define graph_info(...) GRAPH_LOG(INFO, __VA_ARGS__)
diff --git a/lib/graph/meson.build b/lib/graph/meson.build
index 0cb15442ab..5d137d326e 100644
--- a/lib/graph/meson.build
+++ b/lib/graph/meson.build
@@ -15,14 +15,16 @@ sources = files(
'graph_stats.c',
'graph_populate.c',
'graph_pcap.c',
+ 'graph_feature_arc.c',
'rte_graph_worker.c',
'rte_graph_model_mcore_dispatch.c',
)
headers = files('rte_graph.h', 'rte_graph_worker.h')
+headers += files('rte_graph_feature_arc.h', 'rte_graph_feature_arc_worker.h')
indirect_headers += files(
'rte_graph_model_mcore_dispatch.h',
'rte_graph_model_rtc.h',
'rte_graph_worker_common.h',
)
-deps += ['eal', 'pcapng', 'mempool', 'ring']
+deps += ['eal', 'pcapng', 'mempool', 'ring', 'rcu']
diff --git a/lib/graph/rte_graph_feature_arc.h b/lib/graph/rte_graph_feature_arc.h
new file mode 100644
index 0000000000..80ec2f0f19
--- /dev/null
+++ b/lib/graph/rte_graph_feature_arc.h
@@ -0,0 +1,552 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell International Ltd.
+ */
+
+#ifndef _RTE_GRAPH_FEATURE_ARC_H_
+#define _RTE_GRAPH_FEATURE_ARC_H_
+
+#include <assert.h>
+#include <errno.h>
+#include <signal.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_compat.h>
+#include <rte_debug.h>
+#include <rte_graph.h>
+#include <rte_rcu_qsbr.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ *
+ * rte_graph_feature_arc.h
+ *
+ * Defines APIs and structures for the feature arc abstraction:
+ *
+ * - Feature arc(s)
+ * - Feature(s)
+ *
+ * A feature arc represents an ordered list of features/protocols nodes at a
+ * given networking layer. It provides a high level abstraction to
+ * enable/disable feature nodes on a given interface at runtime and steer packets
+ * across these feature nodes in a generic manner.
+ *
+ * A feature arc in a graph is represented via *start_node* and *end_node*.
+ * Feature nodes are added between start_node and end_node. Packets steering
+ * from start_node to feature nodes are controlled via
+ * rte_graph_feature_enable()/rte_graph_feature_disable().
+ *
+ * In a typical network stack, a protocol or feature must often be enabled on
+ * a given interface before any packet is steered towards it for feature
+ * processing. For example, incoming IPv4 packets are sent to the routing
+ * sub-system only after a valid IPv4 address is assigned to the receiving
+ * interface. In other words, packets often need to be steered across features
+ * not based on packet content but based on whether a feature is enabled or
+ * disabled on a given incoming/outgoing interface. Feature arc provides a
+ * mechanism to enable/disable feature(s) on each interface at runtime and
+ * allows seamless packet steering across runtime enabled feature nodes in
+ * fast path.
+ *
+ * Feature arc also provides a way to steer packets from in-built nodes to
+ * out-of-tree *feature nodes* without any change in in-built node's
+ * fast path functions
+ *
+ * On a given interface multiple feature(s) might be enabled in a particular
+ * feature arc. For instance, both "ipv4-output" and "IPsec policy output"
+ * features may be enabled on "eth0" interface in "L3-output" feature arc.
+ * Similarly, "ipv6-output" and "ipsec-output" may be enabled on "eth1"
+ * interface in same "L3-output" feature arc.
+ *
+ * When multiple features are present in a given feature arc, it is imperative
+ * to run them in a well-defined sequential order. For instance, in the
+ * "L3-input" feature arc it may be required to run the "IPsec input" feature
+ * first, for packet decryption, before "ip-lookup". So a sequential order
+ * must be maintained among features present in a feature arc.
+ *
+ * Features can be enabled/disabled at runtime on some or all interfaces
+ * present in the system. Enabling/disabling a feature on one interface is
+ * independent of other interfaces.
+ *
+ * A given feature might consume a packet (if configured to do so) or forward
+ * it to the next enabled feature. For instance, the "IPsec input" feature may
+ * consume/drop all packets with a "Protect" policy action, while all packets
+ * with a "Bypass" policy action may be forwarded to the next enabled feature
+ * (within the same feature arc).
+ *
+ * This library enables rte_graph based applications to steer packets in fast
+ * path to different feature nodes within a feature arc and supports all
+ * functionalities described above.
+ *
+ * In order to use feature arc APIs, an application needs to do the following
+ * in control path:
+ * - Create feature arc's using RTE_GRAPH_FEATURE_ARC_REGISTER()
+ * - New features can be added to an arc via RTE_GRAPH_FEATURE_REGISTER()
+ * - Before calling rte_graph_create(), rte_graph_feature_arc_init() API must
+ * be called. If rte_graph_feature_arc_init() is not called by the
+ * application, the feature arc library is a no-op
+ * - Features can be enabled/disabled on any interface via
+ * rte_graph_feature_enable()/rte_graph_feature_disable()
+ * - Feature arc can be destroyed via rte_graph_feature_arc_destroy()
+ *
+ * In fast path, APIs are provided to steer packets towards feature path from
+ * - start_node (@ref RTE_GRAPH_FEATURE_ARC_REGISTER())
+ * - feature nodes added via RTE_GRAPH_FEATURE_REGISTER()
+ *
+ * For typical steering of packets across feature nodes, an application needs
+ * to know the "rte_edge" values, which are saved in the feature data object.
+ * A feature data object is unique for every interface per feature within a
+ * feature arc.
+ *
+ * APIs used to steer packets from start_node to first enabled feature node are:
+ * - rte_graph_feature_data_first_feature_get(). Once valid feature data is
+ * returned, application can use
+ * - rte_graph_feature_data_edge_get() to get edge from start_node to first
+ * feature
+ *
+ * rte_mbuf can carry [feature_data] into feature arc specific mbuf dynamic
+ * field rte_graph_feature_arc_mbuf_dynfield_offset_get()
+ *
+ * APIs used to steer packets from one feature node to next enabled feature
+ * node
+ * - rte_graph_feature_data_app_cookie_get() to get application specific data
+ * set by application in rte_graph_feature_enable()
+ * - rte_graph_feature_data_edge_get() to get edge from current node to next
+ * feature node
+ * - mbuf->dynfield[feature_data] needs to be updated with new feature data
+ * via rte_graph_feature_data_next_feature_get()
+ *
+ * Fast path synchronization
+ * -------------------------
+ * Any feature enable/disable in control plane does not require stopping of
+ * worker cores.
+ *
+ * The rte_graph_feature_enable()/rte_graph_feature_disable() APIs accept a
+ * (rte_rcu_qsbr *) argument to allow the application to safely release
+ * per feature, per interface resources it may have allocated.
+ *
+ * After every successful enable/disable, API internally calls
+ * - rte_rcu_qsbr_synchronize(rte_rcu_qsbr *) to synchronize all worker cores
+ * - Calls RTE_GRAPH_FEATURE_REGISTER()->notifier_cb() with app_cookie,
+ * provided per feature per interface in rte_graph_feature_enable()
+ */
+
+/** Length of feature arc name */
+#define RTE_GRAPH_FEATURE_ARC_NAMELEN RTE_NODE_NAMESIZE
+
+/** Initializer values for ARC, Feature, Feature data */
+#define RTE_GRAPH_FEATURE_ARC_INITIALIZER ((rte_graph_feature_arc_t)UINT16_MAX)
+#define RTE_GRAPH_FEATURE_DATA_INVALID ((rte_graph_feature_data_t)UINT16_MAX)
+#define RTE_GRAPH_FEATURE_INVALID ((rte_graph_feature_t)UINT8_MAX)
+
+/** rte_graph feature arc object */
+typedef uint16_t rte_graph_feature_arc_t;
+
+/** rte_graph feature object */
+typedef uint8_t rte_graph_feature_t;
+
+/** rte_graph feature data object */
+typedef uint16_t rte_graph_feature_data_t;
+
+/** feature notifier callback called when feature is enabled/disabled */
+typedef void (*rte_graph_feature_change_notifier_cb_t)(const char *arc_name,
+ const char *feature_name,
+ uint16_t index,
+ bool enable_disable,
+ uint32_t app_cookie);
+
+/**
+ * Feature registration structure provided to
+ * RTE_GRAPH_FEATURE_REGISTER()
+ */
+struct rte_graph_feature_register {
+ STAILQ_ENTRY(rte_graph_feature_register) next_feature;
+
+ /** Name of the arc which is registered either via
+ * RTE_GRAPH_FEATURE_ARC_REGISTER() or via
+ * rte_graph_feature_arc_create()
+ */
+ const char *arc_name;
+
+ /** Name of the feature */
+ const char *feature_name;
+
+ /**
+ * Node id of feature_node.
+ *
+ * Setting this field can be skipped if registering feature via
+ * RTE_GRAPH_FEATURE_REGISTER()
+ */
+ rte_node_t feature_node_id;
+
+ /**
+ * Feature node process() function calling feature fast path APIs.
+ *
+ * If application calls rte_graph_feature_arc_init(), node->process()
+ * provided in RTE_NODE_REGISTER() is overwritten by this
+ * function.
+ */
+ rte_node_process_t feature_process_fn;
+
+ /**
+ * Pointer to Feature node registration
+ *
+ * Used when features are registered via
+ * RTE_GRAPH_FEATURE_REGISTER().
+ */
+ struct rte_node_register *feature_node;
+
+ /** Feature ordering constraints
+ * runs_after: Name of the feature which must run before "this feature"
+ * runs_before: Name of the feature which must run after "this feature"
+ */
+ const char *runs_after;
+ const char *runs_before;
+
+ /**
+ * Callback for notifying any change in feature enable/disable state
+ */
+ rte_graph_feature_change_notifier_cb_t notifier_cb;
+};
+
+/** Feature arc registration structure */
+struct rte_graph_feature_arc_register {
+ STAILQ_ENTRY(rte_graph_feature_arc_register) next_arc;
+
+ /** Name of the feature arc */
+ const char *arc_name;
+
+ /**
+ * Maximum number of features supported in this feature arc.
+ *
+ * This field can be skipped for feature arc registration via
+ * RTE_GRAPH_FEATURE_ARC_REGISTER().
+ *
+ * The API sets this field internally by counting the
+ * RTE_GRAPH_FEATURE_REGISTER() calls associated with each arc
+ * registered via RTE_GRAPH_FEATURE_ARC_REGISTER()
+ */
+ uint32_t max_features;
+
+ /**
+ * Maximum number of indexes supported in this feature arc
+ *
+ * Typically the number of interfaces or ethdevs (e.g. RTE_MAX_ETHPORTS)
+ */
+ uint32_t max_indexes;
+
+ /** Start node of this arc */
+ struct rte_node_register *start_node;
+
+ /**
+ * Feature arc specific process() function for Start node.
+ * If application calls rte_graph_feature_arc_init(),
+ * start_node->process() is replaced by this function
+ */
+ rte_node_process_t start_node_feature_process_fn;
+
+ /** End feature node registration */
+ struct rte_graph_feature_register *end_feature;
+};
+
+/** constructor to register feature to an arc */
+#define RTE_GRAPH_FEATURE_REGISTER(reg) \
+ RTE_INIT(__rte_graph_feature_register_##reg) \
+ { \
+ __rte_graph_feature_register(&reg, __func__, __LINE__); \
+ }
+
+/** constructor to register a feature arc */
+#define RTE_GRAPH_FEATURE_ARC_REGISTER(reg) \
+ RTE_INIT(__rte_graph_feature_arc_register_##reg) \
+ { \
+ __rte_graph_feature_arc_register(&reg, __func__, __LINE__); \
+ }
+/**
+ * Initialize feature arc subsystem
+ *
+ * This API
+ * - Initializes the feature arc module and allocates associated memory
+ * - Creates a feature arc for every RTE_GRAPH_FEATURE_ARC_REGISTER()
+ * - Adds a feature node to a feature arc for every RTE_GRAPH_FEATURE_REGISTER()
+ * - Replaces all RTE_NODE_REGISTER()->process() functions for
+ *   - Every start_node/end_node provided in arc registrations
+ *   - Every feature node provided in feature registrations
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_arc_init(void);
+
+/**
+ * Create a feature arc.
+ *
+ * This API can be skipped if RTE_GRAPH_FEATURE_ARC_REGISTER() is used
+ *
+ * @param reg
+ * Pointer to struct rte_graph_feature_arc_register
+ * @param[out] _arc
+ * Feature arc object
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_arc_create(struct rte_graph_feature_arc_register *reg,
+ rte_graph_feature_arc_t *_arc);
+
+/**
+ * Get feature arc object with name
+ *
+ * @param arc_name
+ * Feature arc name provided to a successful @ref rte_graph_feature_arc_create
+ * @param[out] _arc
+ * Feature arc object returned. Valid only when API returns SUCCESS
+ *
+ * @return
+ * 0: Success
+ * <0: Failure.
+ */
+__rte_experimental
+int rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc_t *_arc);
+
+/**
+ * Add a feature to already created feature arc.
+ *
+ * This API is not required in case RTE_GRAPH_FEATURE_REGISTER() is used
+ *
+ * @param feat_reg
+ * Pointer to struct rte_graph_feature_register
+ *
+ * <I> Must be called before rte_graph_create() </I>
+ * <I> rte_graph_feature_add() is not allowed after a call to
+ * rte_graph_feature_enable(); all features must be added before any
+ * feature is enabled </I>
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_add(struct rte_graph_feature_register *feat_reg);
+
+/**
+ * Enable feature within a feature arc
+ *
+ * Must be called after @b rte_graph_create().
+ *
+ * @param _arc
+ * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref
+ * rte_graph_feature_arc_lookup_by_name
+ * @param index
+ * Application specific index. Can be corresponding to interface_id/port_id etc
+ * @param feature_name
+ * Name of the node which is already added via @ref rte_graph_feature_add
+ * @param app_cookie
+ * Application specific data which is retrieved in fast path
+ * @param qsbr
+ * RCU QSBR object. After enabling the feature, the API calls
+ * rte_rcu_qsbr_synchronize() followed by a call to struct
+ * rte_graph_feature_register::notifier_cb(), if set, to notify the
+ * feature caller. This object may be NULL if no RCU synchronization is
+ * required
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index, const
+ char *feature_name, uint32_t app_cookie,
+ struct rte_rcu_qsbr *qsbr);
+
+/**
+ * Disable already enabled feature within a feature arc
+ *
+ * Must be called after @b rte_graph_create(). API is *NOT* Thread-safe
+ *
+ * @param _arc
+ * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref
+ * rte_graph_feature_arc_lookup_by_name
+ * @param index
+ * Application specific index. Can be corresponding to interface_id/port_id etc
+ * @param feature_name
+ * Name of the node which is already added via @ref rte_graph_feature_add
+ * @param qsbr
+ * RCU QSBR object. After disabling the feature, the API calls
+ * rte_rcu_qsbr_synchronize() followed by a call to struct
+ * rte_graph_feature_register::notifier_cb(), if set, to notify the
+ * feature caller. This object may be NULL if no RCU synchronization is
+ * required
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index,
+ const char *feature_name, struct rte_rcu_qsbr *qsbr);
+
+/**
+ * Get rte_graph_feature_t object from feature name
+ *
+ * @param arc
+ * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref
+ * rte_graph_feature_arc_lookup_by_name
+ * @param feature_name
+ * Feature name provided to @ref rte_graph_feature_add
+ * @param[out] feature
+ * Feature object
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_lookup(rte_graph_feature_arc_t arc, const char *feature_name,
+ rte_graph_feature_t *feature);
+
+/**
+ * Delete feature_arc object
+ *
+ * @param _arc
+ * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref
+ * rte_graph_feature_arc_lookup_by_name
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc);
+
+/**
+ * Cleanup all feature arcs
+ *
+ * @return
+ * 0: Success
+ * <0: Failure
+ */
+__rte_experimental
+int rte_graph_feature_arc_cleanup(void);
+
+/**
+ * Slow path API to know how many features are added (NOT enabled) within a
+ * feature arc
+ *
+ * @param _arc
+ * Feature arc object
+ *
+ * @return: Number of added features to arc
+ */
+__rte_experimental
+uint32_t rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc);
+
+/**
+ * Slow path API to know how many features are currently enabled within a
+ * feature arc across all indexes. If a single feature is enabled on all interfaces,
+ * this API would return "number_of_interfaces" as count (but not "1")
+ *
+ * @param _arc
+ * Feature arc object
+ *
+ * @return: Number of enabled features across all indexes
+ */
+__rte_experimental
+uint32_t rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc);
+
+/**
+ * Slow path API to get feature node name from rte_graph_feature_t object
+ *
+ * @param _arc
+ * Feature arc object
+ * @param feature
+ * Feature object
+ *
+ * @return: Name of the feature node
+ */
+__rte_experimental
+char *rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc,
+ rte_graph_feature_t feature);
+
+/**
+ * Slow path API to get corresponding rte_node_t from
+ * rte_graph_feature_t
+ *
+ * @param _arc
+ * Feature arc object
+ * @param feature
+ * Feature object
+ * @param[out] node
+ * rte_node_t of feature node. Valid only when API returns SUCCESS
+ *
+ * @return: 0 on success, < 0 on failure
+ */
+__rte_experimental
+int
+rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc,
+ rte_graph_feature_t feature,
+ rte_node_t *node);
+
+/**
+ * Slow path API to dump valid feature arc names
+ *
+ * @param[out] arc_names
+ * Buffer to copy the arc names into. NULL is allowed; in that case the
+ * function returns the size of the array that needs to be allocated.
+ *
+ * @return
+ * When arc_names == NULL, it returns the size of the array, else the
+ * number of items copied.
+ */
+__rte_experimental
+uint32_t
+rte_graph_feature_arc_names_get(char *arc_names[]);
+
+/**
+ * @internal
+ *
+ * function declaration for registering arc
+ *
+ * @param reg
+ * Pointer to struct rte_graph_feature_arc_register
+ * @param caller_name
+ * Name of the function which is calling this API
+ * @param lineno
+ * Line number of the function which is calling this API
+ */
+__rte_experimental
+void __rte_graph_feature_arc_register(struct rte_graph_feature_arc_register *reg,
+ const char *caller_name, int lineno);
+
+/**
+ * @internal
+ *
+ * function declaration for registering feature
+ *
+ * @param reg
+ * Pointer to struct rte_graph_feature_register
+ * @param caller_name
+ * Name of the function which is calling this API
+ * @param lineno
+ * Line number of the function which is calling this API
+ */
+__rte_experimental
+void __rte_graph_feature_register(struct rte_graph_feature_register *reg,
+ const char *caller_name, int lineno);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/lib/graph/rte_graph_feature_arc_worker.h b/lib/graph/rte_graph_feature_arc_worker.h
new file mode 100644
index 0000000000..4ffd51091b
--- /dev/null
+++ b/lib/graph/rte_graph_feature_arc_worker.h
@@ -0,0 +1,608 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell International Ltd.
+ */
+
+#ifndef _RTE_GRAPH_FEATURE_ARC_WORKER_H_
+#define _RTE_GRAPH_FEATURE_ARC_WORKER_H_
+
+#include <stddef.h>
+#include <rte_graph_feature_arc.h>
+#include <rte_bitops.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+/**
+ * @file
+ *
+ * rte_graph_feature_arc_worker.h
+ *
+ * Defines fast path structure for feature arc
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @internal
+ *
+ * Slow path feature node info list
+ */
+struct rte_graph_feature_node_list {
+ /** Next feature */
+ STAILQ_ENTRY(rte_graph_feature_node_list) next_feature;
+
+ char feature_name[RTE_GRAPH_FEATURE_ARC_NAMELEN];
+
+ /** node id representing feature */
+ rte_node_t feature_node_id;
+
+ /** How many indexes/interfaces using this feature */
+ int32_t ref_count;
+
+ /**
+ * feature arc process function overrides to feature node's original
+ * process function
+ */
+ rte_node_process_t feature_node_process_fn;
+
+ /** Callback for notifying any change in feature enable/disable state */
+ rte_graph_feature_change_notifier_cb_t notifier_cb;
+
+ /** finfo_index in list. Same as rte_graph_feature_t */
+ uint32_t finfo_index;
+
+ /** Back pointer to feature arc */
+ void *feature_arc;
+
+ /** rte_edge_t to this feature node from feature_arc->start_node */
+ rte_edge_t edge_to_this_feature;
+
+ /** rte_edge_t from this feature node to the last feature node */
+ rte_edge_t edge_to_last_feature;
+};
+
+/**
+ * rte_graph Feature arc object
+ *
+ * Feature arc object holds control plane and fast path information for all
+ * features and all interface index information for steering packets across
+ * feature nodes
+ *
+ * Within a feature arc, only RTE_GRAPH_FEATURE_MAX_PER_ARC features can be
+ * added. If more features need to be added, another feature arc can be
+ * created
+ *
+ * In fast path, rte_graph_feature_arc_t can be translated to (struct
+ * rte_graph_feature_arc *) via rte_graph_feature_arc_get(). The latter is
+ * needed as an input argument to all fast path feature arc APIs
+ */
+struct __rte_cache_aligned rte_graph_feature_arc {
+ /** Slow path variables follow */
+ RTE_MARKER slow_path_variables;
+
+ /** All feature lists */
+ STAILQ_HEAD(, rte_graph_feature_node_list) all_features;
+
+ /** feature arc name */
+ char feature_arc_name[RTE_GRAPH_FEATURE_ARC_NAMELEN];
+
+ /** control plane counter to track enabled features */
+ uint32_t runtime_enabled_features;
+
+ /** index in feature_arc_main */
+ uint16_t feature_arc_index;
+
+ /** process ref count to track feature_arc_destroy() */
+ uint8_t process_ref_count;
+
+ /** Back pointer to feature_arc_main */
+ void *feature_arc_main;
+
+ /** Arc's start/end node */
+ struct rte_node_register *start_node;
+ struct rte_graph_feature_register end_feature;
+
+ /** arc start process function */
+ rte_node_process_t arc_start_process;
+
+ /** total arc_size allocated */
+ size_t arc_size;
+
+ /** Slow path bit mask per feature per index */
+ uint64_t *feature_bit_mask_by_index;
+
+ /** Cache aligned fast path variables */
+ alignas(RTE_CACHE_LINE_SIZE) RTE_MARKER fast_path_variables;
+
+ /**
+ * Quick fast path bitmask indicating if any feature is enabled. Each bit
+ * corresponds to a single feature. Helps to optimally process packets
+ * when features are added but not enabled
+ */
+ RTE_ATOMIC(uint64_t) fp_feature_enable_bitmask;
+
+ /** maximum number of features supported by this arc
+ * Immutable during fast path
+ */
+ uint16_t max_features;
+
+ /** maximum number of indexes supported by this arc
+ * Immutable during fast path
+ */
+ uint16_t max_indexes;
+
+ /** arc + fp_first_feature_arr_offset
+ * Immutable during fast path
+ */
+ uint16_t fp_first_feature_offset;
+
+ /** arc + fp_feature_data_arr_offset
+ * Immutable during fast path
+ */
+ uint16_t fp_feature_data_offset;
+
+ /**
+ * Size of each feature in fastpath.
+ * ALIGN(sizeof(struct rte_graph_feature_data) * arc->max_indexes)
+ * Immutable during fast path
+ */
+ uint32_t fp_feature_size;
+
+ /**
+ * Arc specific fast path data
+ * It accommodates:
+ *
+ * 1. first enabled feature for every index
+ * rte_graph_feature_t (fdata as shown below)
+ *
+ * +-------------------------+ <- cache_aligned
+ * | 0th Index | 1st Index |
+ * +-------------------------+
+ * | feature0 | feature1 |
+ * +-------------------------+
+ *
+ * 2. struct rte_graph_feature_data per index per feature
+ *
+ * feature0-> +----------------------------------------+ ^ <- cache_aligned
+ * | struct rte_graph_feature_data[Index0] | |
+ * +----------------------------------------+ | fp_feature_size
+ * | struct rte_graph_feature_data[Index1] | |
+ * feature1-> +----------------------------------------+ v <- cache aligned
+ * | struct rte_graph_feature_data[Index0] | ^
+ * +----------------------------------------+ | fp_feature_size
+ * | struct rte_graph_feature_data[Index1] | |
+ * +----------------------------------------+ v
+ * ... ....
+ * ... ....
+ */
+ RTE_MARKER8 fp_arc_data;
+};
+
+/**
+ * Feature arc main object
+ *
+ * Holds all feature arcs created by application
+ */
+typedef struct rte_feature_arc_main {
+ /** number of feature arcs created by application */
+ uint32_t num_feature_arcs;
+
+ /** max features arcs allowed */
+ uint32_t max_feature_arcs;
+
+ /** Pointer to all feature arcs */
+ uintptr_t feature_arcs[];
+} rte_graph_feature_arc_main_t;
+
+/**
+ * Fast path feature data object
+ *
+ * Used by fast path inline feature arc APIs
+ * Corresponding to rte_graph_feature_data_t
+ * It holds
+ * - edge to reach to next feature node
+ * - next_feature_data corresponding to next enabled feature
+ * - app_cookie set by application in rte_graph_feature_enable()
+ */
+struct rte_graph_feature_data {
+ /** edge from previous enabled feature to this enabled feature */
+ RTE_ATOMIC(rte_edge_t) next_edge;
+
+ /** Next feature data from this feature data */
+ RTE_ATOMIC(rte_graph_feature_data_t) next_feature_data;
+
+ /**
+ * app_cookie set by application in rte_graph_feature_enable() for
+ * - current feature
+ * - interface index
+ */
+ RTE_ATOMIC(uint32_t) app_cookie;
+};
+
+/** feature arc specific mbuf dynfield structure. */
+struct rte_graph_feature_arc_mbuf_dynfields {
+ /** each mbuf carries feature data */
+ rte_graph_feature_data_t feature_data;
+};
+
+/** Name of dynamic mbuf field offset registered in rte_graph_feature_arc_init() */
+#define RTE_GRAPH_FEATURE_ARC_DYNFIELD_NAME "__rte_graph_feature_arc_mbuf_dynfield"
+
+/** log2(sizeof (struct rte_graph_feature_data)) */
+#define RTE_GRAPH_FEATURE_DATA_SIZE_LOG2 3
+
+/** Number of struct rte_graph_feature_data per feature */
+#define RTE_GRAPH_FEATURE_DATA_NUM_PER_FEATURE(arc) \
+ (arc->fp_feature_size >> RTE_GRAPH_FEATURE_DATA_SIZE_LOG2)
+
+/** Get rte_graph_feature_data_t from rte_graph_feature_t */
+#define RTE_GRAPH_FEATURE_TO_FEATURE_DATA(arc, feature, index) \
+ ((rte_graph_feature_data_t) \
+ ((RTE_GRAPH_FEATURE_DATA_NUM_PER_FEATURE(arc) * feature) + index))
+
+/** extern variables */
+extern rte_graph_feature_arc_main_t *__rte_graph_feature_arc_main;
+extern int __rte_graph_feature_arc_mbuf_dyn_offset;
+
+/** get feature arc dynamic offset
+ *
+ * @return
+ * offset to feature arc specific fields in mbuf
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_arc_mbuf_dynfield_offset_get(void)
+{
+ return __rte_graph_feature_arc_mbuf_dyn_offset;
+}
+
+/**
+ * Get dynfield offset to feature arc specific fields in mbuf
+ *
+ * @param mbuf
+ * Pointer to packet
+ * @param dyn_off
+ * offset to feature arc specific fields in mbuf
+ *
+ * @return
+ * NULL: On Failure
+ * Non-NULL pointer on Success
+ */
+__rte_experimental
+static __rte_always_inline struct rte_graph_feature_arc_mbuf_dynfields *
+rte_graph_feature_arc_mbuf_dynfields_get(struct rte_mbuf *mbuf, const int dyn_off)
+{
+ return RTE_MBUF_DYNFIELD(mbuf, dyn_off,
+ struct rte_graph_feature_arc_mbuf_dynfields *);
+}
+
+/**
+ * API to know if feature is valid or not
+ *
+ * @param feature
+ * rte_graph_feature_t
+ *
+ * @return
+ * 1: If feature is valid
+ * 0: If feature is invalid
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_is_valid(rte_graph_feature_t feature)
+{
+ return (feature != RTE_GRAPH_FEATURE_INVALID);
+}
+
+/**
+ * API to know if feature data is valid or not
+ *
+ * @param feature_data
+ * rte_graph_feature_data_t
+ *
+ * @return
+ * 1: If feature data is valid
+ * 0: If feature data is invalid
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_data_is_valid(rte_graph_feature_data_t feature_data)
+{
+ return (feature_data != RTE_GRAPH_FEATURE_DATA_INVALID);
+}
+
+/**
+ * Get pointer to feature arc object from rte_graph_feature_arc_t
+ *
+ * @param arc
+ * feature arc
+ *
+ * @return
+ * NULL: On Failure
+ * Non-NULL pointer on Success
+ */
+__rte_experimental
+static __rte_always_inline struct rte_graph_feature_arc *
+rte_graph_feature_arc_get(rte_graph_feature_arc_t arc)
+{
+ rte_graph_feature_arc_main_t *fm = NULL;
+
+ fm = __rte_graph_feature_arc_main;
+
+ if (likely(fm != NULL && arc < fm->max_feature_arcs))
+ return (struct rte_graph_feature_arc *)fm->feature_arcs[arc];
+
+ return NULL;
+}
+
+/**
+ * Get pointer to struct rte_graph_feature_data from feature arc object
+ * without any checks
+ *
+ * @param arc
+ * feature arc
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * Pointer to feature data object
+ */
+__rte_experimental
+static __rte_always_inline struct rte_graph_feature_data*
+__rte_graph_feature_data_get(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_data_t fdata)
+{
+ return ((struct rte_graph_feature_data *) ((uint8_t *)arc + arc->fp_feature_data_offset +
+ (fdata << RTE_GRAPH_FEATURE_DATA_SIZE_LOG2)));
+}
+
+/**
+ * Get next edge from feature data pointer, without any check
+ *
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * next edge
+ */
+__rte_experimental
+static __rte_always_inline rte_edge_t
+__rte_graph_feature_data_edge_get(struct rte_graph_feature_data *fdata)
+{
+ return rte_atomic_load_explicit(&fdata->next_edge, rte_memory_order_relaxed);
+}
+
+/**
+ * Get app_cookie from feature data pointer, without any check
+ *
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * app_cookie set by caller in rte_graph_feature_enable() API
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+__rte_graph_feature_data_app_cookie_get(struct rte_graph_feature_data *fdata)
+{
+ return rte_atomic_load_explicit(&fdata->app_cookie, rte_memory_order_relaxed);
+}
+
+/**
+ * Get next_enabled_feature_data from pointer to feature data, without any check
+ *
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * next enabled feature data from this feature data
+ */
+__rte_experimental
+static __rte_always_inline rte_graph_feature_data_t
+__rte_graph_feature_data_next_feature_get(struct rte_graph_feature_data *fdata)
+{
+ return rte_atomic_load_explicit(&fdata->next_feature_data, rte_memory_order_relaxed);
+}
+
+/**
+ * Get next edge from feature data object with checks
+ *
+ * @param arc
+ * feature arc
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * next edge
+ */
+__rte_experimental
+static __rte_always_inline rte_edge_t
+rte_graph_feature_data_edge_get(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_data_t fdata)
+{
+ struct rte_graph_feature_data *fdata_obj = __rte_graph_feature_data_get(arc, fdata);
+
+ return __rte_graph_feature_data_edge_get(fdata_obj);
+}
+
+/**
+ * Get app_cookie from feature data object with checks
+ *
+ * @param arc
+ * feature arc
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * app_cookie set by caller in rte_graph_feature_enable() API
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_graph_feature_data_app_cookie_get(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_data_t fdata)
+{
+ struct rte_graph_feature_data *fdata_obj = __rte_graph_feature_data_get(arc, fdata);
+
+ return __rte_graph_feature_data_app_cookie_get(fdata_obj);
+}
+
+/**
+ * Get next_enabled_feature_data from current feature data object with checks
+ *
+ * @param arc
+ * feature arc
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * next enabled feature data from this feature data
+ */
+__rte_experimental
+static __rte_always_inline rte_graph_feature_data_t
+rte_graph_feature_data_next_feature_get(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_data_t fdata)
+{
+ struct rte_graph_feature_data *fdata_obj = __rte_graph_feature_data_get(arc, fdata);
+
+ return __rte_graph_feature_data_next_feature_get(fdata_obj);
+}
+
+/**
+ * Get struct rte_graph_feature_data from rte_graph_feature_data_t
+ *
+ * @param arc
+ * feature arc
+ * @param fdata
+ * feature data object
+ *
+ * @return
+ * NULL: On Failure
+ * Non-NULL pointer on Success
+ */
+__rte_experimental
+static __rte_always_inline struct rte_graph_feature_data*
+rte_graph_feature_data_get(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_data_t fdata)
+{
+ if (unlikely(fdata > (RTE_GRAPH_FEATURE_TO_FEATURE_DATA(arc,
+ arc->max_features - 1,
+ arc->max_indexes - 1))))
+ return NULL;
+
+ return __rte_graph_feature_data_get(arc, fdata);
+}
+
+/**
+ * Get feature data corresponding to first enabled feature on index
+ * @param arc
+ * feature arc
+ * @param index
+ * Interface index
+ * @param[out] fdata
+ * feature data object
+ *
+ * @return
+ * 1: if any feature enabled on index, return corresponding valid feature data
+ * 0: if no feature is enabled on index
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_data_first_feature_get(struct rte_graph_feature_arc *arc,
+ uint32_t index,
+ rte_graph_feature_data_t *fdata)
+{
+ rte_graph_feature_t *feature = NULL;
+
+ *fdata = RTE_GRAPH_FEATURE_DATA_INVALID;
+
+ feature = (rte_graph_feature_t *)((uint8_t *)arc + arc->fp_first_feature_offset +
+ (sizeof(rte_graph_feature_t) * index));
+
+ if ((index < arc->max_indexes) && rte_graph_feature_is_valid(*feature)) {
+ *fdata = RTE_GRAPH_FEATURE_TO_FEATURE_DATA(arc, *feature, index);
+ return 1;
+ }
+
+ return 0;
+}
+
+/**
+ * Fast path API to check if any feature is enabled on a feature arc.
+ * Typically called from the arc's start_node process function
+ *
+ * @param arc
+ * Feature arc object
+ *
+ * @return
+ * 0: If no feature enabled
+ * Non-Zero: Bitmask of features enabled.
+ *
+ */
+__rte_experimental
+static __rte_always_inline uint64_t
+rte_graph_feature_arc_is_any_feature_enabled(struct rte_graph_feature_arc *arc)
+{
+ return (rte_atomic_load_explicit(&arc->fp_feature_enable_bitmask,
+ rte_memory_order_relaxed));
+}
+
+/**
+ * Fast path API to check if provided feature is enabled on any interface/index
+ * or not
+ *
+ * @param arc
+ * Feature arc object
+ * @param feature
+ * Input rte_graph_feature_t that needs to be checked. Can be retrieved in
+ * control path via rte_graph_feature_lookup()
+ *
+ * @return
+ * 1: If input [feature] is enabled in arc
+ * 0: If input [feature] is not enabled in arc
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_arc_is_feature_enabled(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_t feature)
+{
+ uint64_t bitmask = RTE_BIT64(feature);
+
+ return (bitmask & rte_atomic_load_explicit(&arc->fp_feature_enable_bitmask,
+ rte_memory_order_relaxed));
+}
+
+/**
+ * Prefetch feature arc fast path cache line
+ *
+ * @param arc
+ * RTE_GRAPH feature arc object
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_graph_feature_arc_prefetch(struct rte_graph_feature_arc *arc)
+{
+ rte_prefetch0((void *)arc->fast_path_variables);
+}
+
+/**
+ * Prefetch feature data related fast path cache line
+ *
+ * @param arc
+ * RTE_GRAPH feature arc object
+ * @param fdata
+ * Feature data object
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_graph_feature_arc_feature_data_prefetch(struct rte_graph_feature_arc *arc,
+ rte_graph_feature_data_t fdata)
+{
+ if (unlikely(fdata == RTE_GRAPH_FEATURE_DATA_INVALID))
+ return;
+
+ rte_prefetch0((void *)__rte_graph_feature_data_get(arc, fdata));
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif
diff --git a/lib/graph/version.map b/lib/graph/version.map
index 44fadc00fd..4aadce446d 100644
--- a/lib/graph/version.map
+++ b/lib/graph/version.map
@@ -56,6 +56,26 @@ DPDK_25 {
EXPERIMENTAL {
global:
+ # added in 25.03
+ __rte_graph_feature_arc_main;
+ __rte_graph_feature_arc_mbuf_dyn_offset;
+ __rte_graph_feature_arc_register;
+ __rte_graph_feature_register;
+ rte_graph_feature_add;
+ rte_graph_feature_arc_cleanup;
+ rte_graph_feature_arc_create;
+ rte_graph_feature_arc_destroy;
+ rte_graph_feature_arc_feature_to_name;
+ rte_graph_feature_arc_feature_to_node;
+ rte_graph_feature_arc_init;
+ rte_graph_feature_arc_lookup_by_name;
+ rte_graph_feature_arc_names_get;
+ rte_graph_feature_arc_num_enabled_features;
+ rte_graph_feature_arc_num_features;
+ rte_graph_feature_disable;
+ rte_graph_feature_enable;
+ rte_graph_feature_lookup;
+
# added in 24.11
rte_node_xstat_increment;
};
--
2.43.0