From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nitin Saxena
To: Jerin Jacob, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan,
 Robin Jarry, Christophe Fontaine
Cc: dev@dpdk.org, Nitin Saxena
Subject: [PATCH v4 1/5] graph: add feature arc support
Date: Thu, 10 Oct 2024 19:01:02 +0530
Message-ID: <20241010133111.2764712-2-nsaxena@marvell.com>
In-Reply-To: <20241010133111.2764712-1-nsaxena@marvell.com>
References: <20241009133009.1152321-1-nsaxena@marvell.com>
 <20241010133111.2764712-1-nsaxena@marvell.com>
List-Id: DPDK patches and discussions

Add feature arc support to allow dynamic steering of packets across
graph nodes, based on the protocol features enabled on the incoming
or outgoing interface.

Signed-off-by: Nitin Saxena
---
 doc/guides/rel_notes/release_24_11.rst   |   10 +
 lib/graph/graph_feature_arc.c            | 1236 ++++++++++++++++++++++
 lib/graph/meson.build                    |    2 +
 lib/graph/rte_graph_feature_arc.h        |  431 ++++++++
 lib/graph/rte_graph_feature_arc_worker.h |  679 ++++++++++++
 lib/graph/version.map                    |   20 +
 6 files changed, 2378 insertions(+)
 create mode 100644 lib/graph/graph_feature_arc.c
 create mode 100644 lib/graph/rte_graph_feature_arc.h
 create mode 100644 lib/graph/rte_graph_feature_arc_worker.h

diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 2f78f2d125..bd5589b01c 100644
---
a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -82,6 +82,16 @@ New Features

   The new statistics are useful for debugging and profiling.

+* **Added feature arc abstraction in graph library.**
+
+  Feature arc abstraction helps ``rte_graph`` based applications to steer
+  packets across different node paths based on the features (or protocols)
+  enabled on interfaces. Different feature node paths can be enabled/disabled
+  at runtime on some or on all interfaces. This abstraction also helps
+  applications to hook their ``custom nodes`` into standard DPDK node paths
+  without any code changes in the latter.
+
+  * Added ``ip4-output`` feature arc processing in ``ip4_rewrite`` node.

 Removed Items
 -------------

diff --git a/lib/graph/graph_feature_arc.c b/lib/graph/graph_feature_arc.c
new file mode 100644
index 0000000000..0f8633c317
--- /dev/null
+++ b/lib/graph/graph_feature_arc.c
@@ -0,0 +1,1236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell International Ltd.
+ */
+
+#include "graph_private.h"
+#include
+#include
+
+#define ARC_PASSIVE_LIST(list) (list ^ 0x1)
+
+#define rte_graph_uint_cast(x) ((unsigned int)x)
+#define feat_dbg graph_dbg
+
+static rte_graph_feature_arc_main_t *__rte_graph_feature_arc_main;
+
+/* Make sure fast path cache line is compact */
+_Static_assert((offsetof(struct rte_graph_feature_arc, slow_path_variables)
+		- offsetof(struct rte_graph_feature_arc, fast_path_variables))
+	       <= RTE_CACHE_LINE_SIZE,
+	       "Fast path feature arc variables exceed cache line size");
+
+#define connect_graph_nodes(node1, node2, edge, arc_name) \
+	__connect_graph_nodes(node1, node2, edge, arc_name, __LINE__)
+
+#define FEAT_COND_ERR(cond, fmt, ...)
\ + do { \ + if (cond) \ + graph_err(fmt, ##__VA_ARGS__); \ + } while (0) + +/* + * lookup feature name and get control path node_list as well as feature index + * at which it is inserted + */ +static int +feature_lookup(struct rte_graph_feature_arc *arc, const char *feat_name, + struct rte_graph_feature_node_list **ffinfo, uint32_t *slot) +{ + struct rte_graph_feature_node_list *finfo = NULL; + const char *name; + uint32_t fi = 0; + + if (!feat_name) + return -1; + + if (slot) + *slot = UINT32_MAX; + + STAILQ_FOREACH(finfo, &arc->all_features, next_feature) { + RTE_VERIFY(finfo->feature_arc == arc); + name = rte_node_id_to_name(finfo->feature_node->id); + if (!strncmp(name, feat_name, strlen(name))) { + if (ffinfo) + *ffinfo = finfo; + if (slot) + *slot = fi; + return 0; + } + fi++; + } + return -1; +} + +/* Lookup used only during rte_graph_feature_add() */ +static int +feature_add_lookup(struct rte_graph_feature_arc *arc, const char *feat_name, + struct rte_graph_feature_node_list **ffinfo, uint32_t *slot) +{ + struct rte_graph_feature_node_list *finfo = NULL; + const char *name; + uint32_t fi = 0; + + if (!feat_name) + return -1; + + if (slot) + *slot = 0; + + STAILQ_FOREACH(finfo, &arc->all_features, next_feature) { + RTE_VERIFY(finfo->feature_arc == arc); + name = rte_node_id_to_name(finfo->feature_node->id); + if (!strncmp(name, feat_name, strlen(name))) { + if (ffinfo) + *ffinfo = finfo; + if (slot) + *slot = fi; + return 0; + } + /* Update slot where new feature can be added */ + if (slot) + *slot = fi; + fi++; + } + + return -1; +} + +/* Get control path node info from provided input feature_index */ +static int +feature_arc_node_info_lookup(struct rte_graph_feature_arc *arc, uint32_t feature_index, + struct rte_graph_feature_node_list **ppfinfo, + const int do_sanity_check) +{ + struct rte_graph_feature_node_list *finfo = NULL; + uint32_t index = 0; + + if (!ppfinfo) + return -1; + + *ppfinfo = NULL; + STAILQ_FOREACH(finfo, &arc->all_features, 
next_feature) {
+		/* Check sanity */
+		if (do_sanity_check)
+			if (finfo->node_index != index)
+				RTE_VERIFY(0);
+		if (index == feature_index) {
+			*ppfinfo = finfo;
+			return 0;
+		}
+		index++;
+	}
+	return -1;
+}
+
+/* prepare feature arc after addition of all features */
+static void
+prepare_feature_arc_before_first_enable(struct rte_graph_feature_arc *arc)
+{
+	struct rte_graph_feature_node_list *finfo = NULL;
+	uint32_t index = 0;
+
+	rte_atomic_store_explicit(&arc->active_feature_list, 0,
+				  rte_memory_order_relaxed);
+
+	STAILQ_FOREACH(finfo, &arc->all_features, next_feature) {
+		finfo->node_index = index;
+		feat_dbg("\t%s prepare: %s added to list at index: %u", arc->feature_arc_name,
+			 finfo->feature_node->name, index);
+		index++;
+	}
+}
+
+/* feature arc lookup in array */
+static int
+feature_arc_lookup(rte_graph_feature_arc_t _arc)
+{
+	struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+	rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main;
+	uint32_t iter;
+
+	if (!__rte_graph_feature_arc_main)
+		return -1;
+
+	for (iter = 0; iter < dm->max_feature_arcs; iter++) {
+		if (dm->feature_arcs[iter] == RTE_GRAPH_FEATURE_ARC_INITIALIZER)
+			continue;
+
+		if (arc == (rte_graph_feature_arc_get(dm->feature_arcs[iter])))
+			return 0;
+	}
+	return -1;
+}
+
+/* Check valid values for known fields in arc to make sure arc is sane */
+static int check_feature_arc_sanity(rte_graph_feature_arc_t _arc, int iter)
+{
+#ifdef FEATURE_ARC_DEBUG
+	struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc);
+
+	RTE_VERIFY(arc->feature_arc_main == __rte_graph_feature_arc_main);
+	RTE_VERIFY(arc->feature_arc_index == iter);
+
+	RTE_VERIFY(arc->feature_list[0]->indexed_by_features == arc->features[0]);
+	RTE_VERIFY(arc->feature_list[1]->indexed_by_features == arc->features[1]);
+
+	RTE_VERIFY(rte_atomic_load_explicit(&arc->active_feature_list,
+					    rte_memory_order_relaxed) < 2);
+#else
+	RTE_SET_USED(_arc);
+	RTE_SET_USED(iter);
+#endif
+	return 0;
+}
+
+/* Perform sanity on all arc if any corruption occurred */ +static int do_sanity_all_arcs(void) +{ + rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main; + uint32_t iter; + + if (!dm) + return -1; + + for (iter = 0; iter < dm->max_feature_arcs; iter++) { + if (dm->feature_arcs[iter] == RTE_GRAPH_FEATURE_ARC_INITIALIZER) + continue; + + if (check_feature_arc_sanity(dm->feature_arcs[iter], iter)) + return -1; + } + return 0; +} + +/* get existing edge from parent_node -> child_node */ +static int +get_existing_edge(const char *arc_name, struct rte_node_register *parent_node, + struct rte_node_register *child_node, rte_edge_t *_edge) +{ + char **next_edges = NULL; + uint32_t i, count = 0; + + RTE_SET_USED(arc_name); + + count = rte_node_edge_get(parent_node->id, NULL); + + if (!count) + return -1; + + next_edges = malloc(count); + + if (!next_edges) + return -1; + + count = rte_node_edge_get(parent_node->id, next_edges); + for (i = 0; i < count; i++) { + if (strstr(child_node->name, next_edges[i])) { + if (_edge) + *_edge = (rte_edge_t)i; + + free(next_edges); + return 0; + } + } + free(next_edges); + + return -1; +} + +/* create or retrieve already existing edge from parent_node -> child_node */ +static int +__connect_graph_nodes(struct rte_node_register *parent_node, struct rte_node_register *child_node, + rte_edge_t *_edge, char *arc_name, int lineno) +{ + const char *next_node = NULL; + rte_edge_t edge; + + if (!get_existing_edge(arc_name, parent_node, child_node, &edge)) { + feat_dbg("\t%s/%d: %s[%u]: \"%s\", edge reused", arc_name, lineno, + parent_node->name, edge, child_node->name); + + if (_edge) + *_edge = edge; + + return 0; + } + + /* Node to be added */ + next_node = child_node->name; + + edge = rte_node_edge_update(parent_node->id, RTE_EDGE_ID_INVALID, &next_node, 1); + + if (edge == RTE_EDGE_ID_INVALID) { + graph_err("edge invalid"); + return -1; + } + edge = rte_node_edge_count(parent_node->id) - 1; + + feat_dbg("\t%s/%d: %s[%u]: \"%s\", 
new edge added", arc_name, lineno, parent_node->name, + edge, child_node->name); + + if (_edge) + *_edge = edge; + + return 0; +} + +/* feature arc initialization */ +static int +feature_arc_main_init(rte_graph_feature_arc_main_t **pfl, uint32_t max_feature_arcs) +{ + rte_graph_feature_arc_main_t *pm = NULL; + uint32_t i; + size_t sz; + + if (!pfl) + return -1; + + sz = sizeof(rte_graph_feature_arc_main_t) + + (sizeof(pm->feature_arcs[0]) * max_feature_arcs); + + pm = rte_malloc("rte_graph_feature_arc_main", sz, 0); + if (!pm) + return -1; + + memset(pm, 0, sz); + + for (i = 0; i < max_feature_arcs; i++) + pm->feature_arcs[i] = RTE_GRAPH_FEATURE_ARC_INITIALIZER; + + pm->max_feature_arcs = max_feature_arcs; + + *pfl = pm; + + return 0; +} + +/* feature arc initialization, public API */ +int +rte_graph_feature_arc_init(int max_feature_arcs) +{ + if (!max_feature_arcs) + return -1; + + if (__rte_graph_feature_arc_main) + return -1; + + return feature_arc_main_init(&__rte_graph_feature_arc_main, max_feature_arcs); +} + +/* reset feature list before switching to passive list */ +static void +feature_arc_list_reset(struct rte_graph_feature_arc *arc, uint32_t list_index) +{ + rte_graph_feature_data_t *fdata = NULL; + rte_graph_feature_list_t *list = NULL; + struct rte_graph_feature *feat = NULL; + uint32_t i, j; + + list = arc->feature_list[list_index]; + feat = arc->features[list_index]; + + /*Initialize variables*/ + memset(feat, 0, arc->feature_size * arc->max_features); + memset(list, 0, arc->feature_list_size); + + /* Initialize feature and feature_data */ + for (i = 0; i < arc->max_features; i++) { + feat = __rte_graph_feature_get(arc, i, list_index); + feat->this_feature_index = i; + + for (j = 0; j < arc->max_indexes; j++) { + fdata = rte_graph_feature_data_get(arc, feat, j); + fdata->next_enabled_feature = RTE_GRAPH_FEATURE_INVALID; + fdata->next_edge = UINT16_MAX; + fdata->user_data = UINT32_MAX; + } + } + + for (i = 0; i < arc->max_indexes; i++) + 
list->first_enabled_feature_by_index[i] = RTE_GRAPH_FEATURE_INVALID; +} + +static int +feature_arc_list_init(struct rte_graph_feature_arc *arc, const char *flist_name, + rte_graph_feature_list_t **pplist, + struct rte_graph_feature **ppfeature, uint32_t list_index) +{ + char fname[2 * RTE_GRAPH_FEATURE_ARC_NAMELEN]; + size_t list_size, feat_size, fdata_size; + rte_graph_feature_list_t *list = NULL; + struct rte_graph_feature *feat = NULL; + + list_size = sizeof(struct rte_graph_feature_list) + + (sizeof(list->first_enabled_feature_by_index[0]) * arc->max_indexes); + + list_size = RTE_ALIGN_CEIL(list_size, RTE_CACHE_LINE_SIZE); + + list = rte_malloc(flist_name, list_size, RTE_CACHE_LINE_SIZE); + if (!list) + return -ENOMEM; + + memset(list, 0, list_size); + fdata_size = arc->max_indexes * sizeof(rte_graph_feature_data_t); + + /* Let one feature and its associated data per index capture complete + * cache lines + */ + feat_size = RTE_ALIGN_CEIL(sizeof(struct rte_graph_feature) + fdata_size, + RTE_CACHE_LINE_SIZE); + + snprintf(fname, sizeof(fname), "%s-%s", arc->feature_arc_name, "feat"); + + feat = rte_malloc(fname, feat_size * arc->max_features, RTE_CACHE_LINE_SIZE); + if (!feat) { + rte_free(list); + return -ENOMEM; + } + arc->feature_size = feat_size; + arc->feature_data_size = fdata_size; + arc->feature_list_size = list_size; + + /* Initialize list */ + list->indexed_by_features = feat; + *pplist = list; + *ppfeature = feat; + + feature_arc_list_reset(arc, list_index); + + return 0; +} + +/* free resources allocated in feature_arc_list_init() */ +static void +feature_arc_list_destroy(struct rte_graph_feature_arc *arc, int list_index) +{ + rte_graph_feature_list_t *list = NULL; + + list = arc->feature_list[list_index]; + + rte_free(list->indexed_by_features); + + arc->features[list_index] = NULL; + + rte_free(list); + + arc->feature_list[list_index] = NULL; +} + +int +rte_graph_feature_arc_create(const char *feature_arc_name, int max_features, int max_indexes, + 
struct rte_node_register *start_node, rte_graph_feature_arc_t *_arc) +{ + char name[2 * RTE_GRAPH_FEATURE_ARC_NAMELEN]; + struct rte_graph_feature_data *gfd = NULL; + rte_graph_feature_arc_main_t *dfm = NULL; + struct rte_graph_feature_arc *arc = NULL; + struct rte_graph_feature *df = NULL; + uint32_t iter, j, arc_index; + size_t sz; + + if (!_arc) + SET_ERR_JMP(EINVAL, err, "%s: Invalid _arc", feature_arc_name); + + if (max_features < 2) + SET_ERR_JMP(EINVAL, err, "%s: max_features must be greater than 1", + feature_arc_name); + + if (!start_node) + SET_ERR_JMP(EINVAL, err, "%s: start_node cannot be NULL", + feature_arc_name); + + if (!feature_arc_name) + SET_ERR_JMP(EINVAL, err, "%s: feature_arc name cannot be NULL", + feature_arc_name); + + if (max_features > RTE_GRAPH_FEATURE_MAX_PER_ARC) + SET_ERR_JMP(EAGAIN, err, "%s: number of features cannot be greater than 64", + feature_arc_name); + + /* + * Application hasn't called rte_graph_feature_arc_init(). Initialize with + * default values + */ + if (!__rte_graph_feature_arc_main) { + if (rte_graph_feature_arc_init((int)RTE_GRAPH_FEATURE_ARC_MAX) < 0) { + graph_err("rte_graph_feature_arc_init() failed"); + return -1; + } + } + + /* If name is not unique */ + if (!rte_graph_feature_arc_lookup_by_name(feature_arc_name, NULL)) + SET_ERR_JMP(EINVAL, err, "%s: feature arc name already exists", + feature_arc_name); + + dfm = __rte_graph_feature_arc_main; + + /* threshold check */ + if (dfm->num_feature_arcs > (dfm->max_feature_arcs - 1)) + SET_ERR_JMP(EAGAIN, err, "%s: max number (%u) of feature arcs reached", + feature_arc_name, dfm->max_feature_arcs); + + /* Find the free slot for feature arc */ + for (iter = 0; iter < dfm->max_feature_arcs; iter++) { + if (dfm->feature_arcs[iter] == RTE_GRAPH_FEATURE_ARC_INITIALIZER) + break; + } + arc_index = iter; + + if (arc_index >= dfm->max_feature_arcs) { + graph_err("No free slot found for num_feature_arc"); + return -1; + } + + /* This should not happen */ + 
RTE_VERIFY(dfm->feature_arcs[arc_index] == RTE_GRAPH_FEATURE_ARC_INITIALIZER); + + /* size of feature arc + feature_bit_mask_by_index */ + sz = RTE_ALIGN_CEIL(sizeof(*arc) + (sizeof(uint64_t) * max_indexes), RTE_CACHE_LINE_SIZE); + + arc = rte_malloc(feature_arc_name, sz, RTE_CACHE_LINE_SIZE); + + if (!arc) { + graph_err("malloc failed for feature_arc_create()"); + return -1; + } + + memset(arc, 0, sz); + + /* Initialize rte_graph port group fixed variables */ + STAILQ_INIT(&arc->all_features); + strncpy(arc->feature_arc_name, feature_arc_name, RTE_GRAPH_FEATURE_ARC_NAMELEN - 1); + arc->feature_arc_main = (void *)dfm; + arc->start_node = start_node; + arc->max_features = max_features; + arc->max_indexes = max_indexes; + arc->feature_arc_index = arc_index; + + snprintf(name, sizeof(name), "%s-%s", feature_arc_name, "flist0"); + + if (feature_arc_list_init(arc, name, &arc->feature_list[0], &arc->features[0], 0) < 0) { + rte_free(arc); + graph_err("feature_arc_list_init(0) failed"); + return -1; + } + snprintf(name, sizeof(name), "%s-%s", feature_arc_name, "flist1"); + + if (feature_arc_list_init(arc, name, &arc->feature_list[1], &arc->features[1], 1) < 0) { + feature_arc_list_destroy(arc, 0); + rte_free(arc); + graph_err("feature_arc_list_init(1) failed"); + return -1; + } + + for (iter = 0; iter < arc->max_features; iter++) { + df = rte_graph_feature_get(arc, iter); + for (j = 0; j < arc->max_indexes; j++) { + gfd = rte_graph_feature_data_get(arc, df, j); + gfd->next_enabled_feature = RTE_GRAPH_FEATURE_INVALID; + } + } + dfm->feature_arcs[arc->feature_arc_index] = (rte_graph_feature_arc_t)arc; + dfm->num_feature_arcs++; + + if (_arc) + *_arc = (rte_graph_feature_arc_t)arc; + + do_sanity_all_arcs(); + + feat_dbg("Feature arc %s[%p] created with max_features: %u and indexes: %u", + feature_arc_name, (void *)arc, max_features, max_indexes); + return 0; + +err: + return -rte_errno; +} + +int +rte_graph_feature_add(rte_graph_feature_arc_t _arc, struct rte_node_register 
*feature_node, + const char *_runs_after, const char *runs_before) +{ + struct rte_graph_feature_node_list *after_finfo = NULL, *before_finfo = NULL; + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + struct rte_graph_feature_node_list *temp = NULL, *finfo = NULL; + char feature_name[3*RTE_GRAPH_FEATURE_ARC_NAMELEN]; + const char *runs_after = NULL; + uint32_t num_feature = 0; + uint32_t slot, add_flag; + rte_edge_t edge = -1; + + /* sanity */ + if (arc->feature_arc_main != __rte_graph_feature_arc_main) { + graph_err("feature arc not created: 0x%016" PRIx64, (uint64_t)_arc); + return -1; + } + + if (feature_arc_lookup(_arc)) { + graph_err("invalid feature arc: 0x%016" PRIx64, (uint64_t)_arc); + return -1; + } + + if (arc->runtime_enabled_features) { + graph_err("adding features after enabling any one of them is not supported"); + return -1; + } + + if ((_runs_after != NULL) && (runs_before != NULL) && + (_runs_after == runs_before)) { + graph_err("runs_after and runs_before are same '%s:%s]", _runs_after, + runs_before); + return -1; + } + + if (!feature_node) { + graph_err("feature_node: %p invalid", feature_node); + return -1; + } + + arc = rte_graph_feature_arc_get(_arc); + + if (feature_node->id == RTE_NODE_ID_INVALID) { + graph_err("Invalid node: %s", feature_node->name); + return -1; + } + + if (!feature_add_lookup(arc, feature_node->name, &finfo, &slot)) { + graph_err("%s feature already added", feature_node->name); + return -1; + } + + if (slot >= arc->max_features) { + graph_err("%s: Max features %u added to feature arc", + arc->feature_arc_name, slot); + return -1; + } + + if (strstr(feature_node->name, arc->start_node->name)) { + graph_err("Feature %s cannot point to itself: %s", feature_node->name, + arc->start_node->name); + return -1; + } + + feat_dbg("%s: adding feature node: %s at feature index: %u", arc->feature_arc_name, + feature_node->name, slot); + + if (connect_graph_nodes(arc->start_node, feature_node, &edge, 
arc->feature_arc_name)) { + graph_err("unable to connect %s -> %s", arc->start_node->name, feature_node->name); + return -1; + } + + snprintf(feature_name, sizeof(feature_name), "%s-%s-finfo", + arc->feature_arc_name, feature_node->name); + + finfo = rte_malloc(feature_name, sizeof(*finfo), 0); + if (!finfo) { + graph_err("%s/%s: rte_malloc failed", arc->feature_arc_name, feature_node->name); + return -1; + } + + memset(finfo, 0, sizeof(*finfo)); + + finfo->feature_arc = (void *)arc; + finfo->feature_node = feature_node; + finfo->edge_to_this_feature = edge; + arc->runtime_enabled_features = 0; + + /* + * if no constraints given and provided feature is not the first feature, + * explicitly set "runs_after" as last_feature. Handles the case: + * + * add(f1, NULL, NULL); + * add(f2, NULL, NULL); + */ + num_feature = rte_graph_feature_arc_num_features(_arc); + if (!_runs_after && !runs_before && num_feature) + runs_after = rte_graph_feature_arc_feature_to_name(_arc, num_feature - 1); + else + runs_after = _runs_after; + + /* Check for before and after constraints */ + if (runs_before) { + /* runs_before sanity */ + if (feature_lookup(arc, runs_before, &before_finfo, NULL)) + SET_ERR_JMP(EINVAL, finfo_free, + "Invalid before feature name: %s", runs_before); + + if (!before_finfo) + SET_ERR_JMP(EINVAL, finfo_free, + "runs_before %s does not exist", runs_before); + + /* + * Starting from 0 to runs_before, continue connecting edges + */ + add_flag = 1; + STAILQ_FOREACH(temp, &arc->all_features, next_feature) { + if (!add_flag) + /* Nodes after seeing "runs_before", finfo connects to temp*/ + connect_graph_nodes(finfo->feature_node, temp->feature_node, + NULL, arc->feature_arc_name); + /* + * As soon as we see runs_before. 
stop adding edges + */ + if (!strncmp(temp->feature_node->name, runs_before, + RTE_GRAPH_NAMESIZE)) { + if (!connect_graph_nodes(finfo->feature_node, temp->feature_node, + &edge, arc->feature_arc_name)) + add_flag = 0; + } + + if (add_flag) + /* Nodes before seeing "run_before" are connected to finfo */ + connect_graph_nodes(temp->feature_node, finfo->feature_node, NULL, + arc->feature_arc_name); + } + } + + if (runs_after) { + if (feature_lookup(arc, runs_after, &after_finfo, NULL)) + SET_ERR_JMP(EINVAL, finfo_free, + "Invalid after feature_name %s", runs_after); + + if (!after_finfo) + SET_ERR_JMP(EINVAL, finfo_free, + "runs_after %s does not exist", runs_after); + + /* Starting from runs_after to end continue connecting edges */ + add_flag = 0; + STAILQ_FOREACH(temp, &arc->all_features, next_feature) { + if (add_flag) + /* We have already seen runs_after now */ + /* Add all features as next node to current feature*/ + connect_graph_nodes(finfo->feature_node, temp->feature_node, NULL, + arc->feature_arc_name); + else + /* Connect initial nodes to newly added node*/ + connect_graph_nodes(temp->feature_node, finfo->feature_node, NULL, + arc->feature_arc_name); + + /* as soon as we see runs_after. 
start adding edges + * from next iteration + */ + if (!strncmp(temp->feature_node->name, runs_after, RTE_GRAPH_NAMESIZE)) + add_flag = 1; + } + + /* add feature next to runs_after */ + STAILQ_INSERT_AFTER(&arc->all_features, after_finfo, finfo, next_feature); + } else { + if (before_finfo) { + /* add finfo before "before_finfo" element in the list */ + after_finfo = NULL; + STAILQ_FOREACH(temp, &arc->all_features, next_feature) { + if (before_finfo == temp) { + if (after_finfo) + STAILQ_INSERT_AFTER(&arc->all_features, after_finfo, + finfo, next_feature); + else + STAILQ_INSERT_HEAD(&arc->all_features, finfo, + next_feature); + + return 0; + } + after_finfo = temp; + } + } else { + /* Very first feature just needs to be added to list */ + STAILQ_INSERT_TAIL(&arc->all_features, finfo, next_feature); + } + } + + return 0; + +finfo_free: + rte_free(finfo); + + return -1; +} + +int +rte_graph_feature_lookup(rte_graph_feature_arc_t _arc, const char *feature_name, + rte_graph_feature_t *feat) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + struct rte_graph_feature_node_list *finfo = NULL; + uint32_t slot; + + if (!feature_lookup(arc, feature_name, &finfo, &slot)) { + *feat = (rte_graph_feature_t) slot; + return 0; + } + + return -1; +} + +int +rte_graph_feature_validate(rte_graph_feature_arc_t _arc, uint32_t index, const char *feature_name, + int is_enable_disable, bool emit_logs) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + struct rte_graph_feature_node_list *finfo = NULL; + rte_graph_feature_rt_list_t active_list; + struct rte_graph_feature *gf = NULL; + uint32_t slot; + + /* validate _arc */ + if (arc->feature_arc_main != __rte_graph_feature_arc_main) { + FEAT_COND_ERR(emit_logs, "invalid feature arc: 0x%016" PRIx64, (uint64_t)_arc); + return -EINVAL; + } + + /* validate index */ + if (index >= arc->max_indexes) { + FEAT_COND_ERR(emit_logs, "%s: Invalid provided index: %u >= %u configured", + 
arc->feature_arc_name, index, arc->max_indexes); + return -1; + } + + /* validate feature_name is already added or not */ + if (feature_lookup(arc, feature_name, &finfo, &slot)) { + FEAT_COND_ERR(emit_logs, "%s: No feature %s added", + arc->feature_arc_name, feature_name); + return -EINVAL; + } + + if (!finfo) { + FEAT_COND_ERR(emit_logs, "%s: No feature: %s found", + arc->feature_arc_name, feature_name); + return -EINVAL; + } + + /* slot should be in valid range */ + if (slot >= arc->max_features) { + FEAT_COND_ERR(emit_logs, "%s/%s: Invalid free slot %u(max=%u) for feature", + arc->feature_arc_name, feature_name, slot, arc->max_features); + return -EINVAL; + } + + /* slot should be in range of 0 - 63 */ + if (slot > (RTE_GRAPH_FEATURE_MAX_PER_ARC - 1)) { + FEAT_COND_ERR(emit_logs, "%s/%s: Invalid slot: %u", arc->feature_arc_name, + feature_name, slot); + return -EINVAL; + } + + if (finfo->node_index != slot) { + FEAT_COND_ERR(emit_logs, + "%s/%s: lookup slot mismatch for finfo idx: %u and lookup slot: %u", + arc->feature_arc_name, feature_name, finfo->node_index, slot); + return -1; + } + + active_list = rte_atomic_load_explicit(&arc->active_feature_list, + rte_memory_order_relaxed); + + /* Get feature from active list */ + gf = __rte_graph_feature_get(arc, slot, ARC_PASSIVE_LIST(active_list)); + if (gf->this_feature_index != slot) { + FEAT_COND_ERR(emit_logs, + "%s: %s rcvd feature_idx: %u does not match with saved: %u", + arc->feature_arc_name, feature_name, slot, gf->this_feature_index); + return -1; + } + + if (is_enable_disable && (arc->feature_bit_mask_by_index[index] & + RTE_BIT64(slot))) { + FEAT_COND_ERR(emit_logs, "%s: %s already enabled on index: %u", + arc->feature_arc_name, feature_name, index); + return -1; + } + + if (!is_enable_disable && !arc->runtime_enabled_features) { + FEAT_COND_ERR(emit_logs, "%s: No feature enabled to disable", + arc->feature_arc_name); + return -1; + } + + if (!is_enable_disable && !(arc->feature_bit_mask_by_index[index] & 
RTE_BIT64(slot))) { + FEAT_COND_ERR(emit_logs, "%s: %s not enabled in bitmask for index: %u", + arc->feature_arc_name, feature_name, index); + return -1; + } + + return 0; +} + +/* + * Before switch to passive list, user_data needs to be copied from active list to passive list + */ +static void +copy_fastpath_user_data(struct rte_graph_feature_arc *arc, uint16_t dest_list_index, + uint16_t src_list_index) +{ + rte_graph_feature_data_t *sgfd = NULL, *dgfd = NULL; + struct rte_graph_feature *sgf = NULL, *dgf = NULL; + uint32_t i, j; + + for (i = 0; i < arc->max_features; i++) { + sgf = __rte_graph_feature_get(arc, i, src_list_index); + dgf = __rte_graph_feature_get(arc, i, dest_list_index); + for (j = 0; j < arc->max_indexes; j++) { + sgfd = rte_graph_feature_data_get(arc, sgf, j); + dgfd = rte_graph_feature_data_get(arc, dgf, j); + dgfd->user_data = sgfd->user_data; + } + } +} +/* + * Fill fast path information like + * - next_edge + * - next_enabled_feature + */ +static void +refill_feature_fastpath_data(struct rte_graph_feature_arc *arc, uint16_t list_index) +{ + struct rte_graph_feature_node_list *finfo = NULL, *prev_finfo = NULL; + struct rte_graph_feature_data *gfd = NULL, *prev_gfd = NULL; + uint32_t fi = UINT32_MAX, di = UINT32_MAX, prev_fi = UINT32_MAX; + struct rte_graph_feature *gf = NULL, *prev_gf = NULL; + rte_graph_feature_list_t *flist = NULL; + rte_edge_t edge = UINT16_MAX; + uint64_t bitmask = 0; + + flist = arc->feature_list[list_index]; + + for (di = 0; di < arc->max_indexes; di++) { + bitmask = arc->feature_bit_mask_by_index[di]; + prev_fi = RTE_GRAPH_FEATURE_INVALID; + /* for each feature set for index, set fast path data */ + while (rte_bsf64_safe(bitmask, &fi)) { + gf = __rte_graph_feature_get(arc, fi, list_index); + gfd = rte_graph_feature_data_get(arc, gf, di); + RTE_VERIFY(!feature_arc_node_info_lookup(arc, fi, &finfo, 1)); + + /* If previous feature_index was valid in last loop */ + if (prev_fi != RTE_GRAPH_FEATURE_INVALID) { + prev_gf = 
__rte_graph_feature_get(arc, prev_fi, list_index); + prev_gfd = rte_graph_feature_data_get(arc, prev_gf, di); + /* + * Get edge of previous feature node connecting + * to this feature node + */ + RTE_VERIFY(!feature_arc_node_info_lookup(arc, prev_fi, + &prev_finfo, 1)); + if (!get_existing_edge(arc->feature_arc_name, + prev_finfo->feature_node, + finfo->feature_node, &edge)) { + feat_dbg("\t[%s/%u/di:%2u,cookie:%u]: (%u->%u)%s[%u] = %s", + arc->feature_arc_name, list_index, di, + prev_gfd->user_data, prev_fi, fi, + prev_finfo->feature_node->name, + edge, finfo->feature_node->name); + /* Copy feature index for next iteration*/ + gfd->next_edge = edge; + prev_fi = fi; + /* + * Fill current feature as next enabled + * feature to previous one + */ + prev_gfd->next_enabled_feature = fi; + } else { + /* Should not fail */ + RTE_VERIFY(0); + } + } + /* On first feature edge of the node to be added */ + if (fi == rte_bsf64(arc->feature_bit_mask_by_index[di])) { + if (!get_existing_edge(arc->feature_arc_name, arc->start_node, + finfo->feature_node, + &edge)) { + feat_dbg("\t[%s/%u/di:%2u,cookie:%u]: (->%u)%s[%u]=%s", + arc->feature_arc_name, list_index, di, + gfd->user_data, fi, + arc->start_node->name, edge, + finfo->feature_node->name); + /* Copy feature index for next iteration*/ + gfd->next_edge = edge; + prev_fi = fi; + /* Set first feature set array for index*/ + flist->first_enabled_feature_by_index[di] = + (rte_graph_feature_t)fi; + } else { + /* Should not fail */ + RTE_VERIFY(0); + } + } + /* Clear current feature index */ + bitmask &= ~RTE_BIT64(fi); + } + } +} + +int +rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index, const + char *feature_name, int32_t user_data) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + rte_graph_feature_rt_list_t passive_list, active_list; + struct rte_graph_feature_node_list *finfo = NULL; + struct rte_graph_feature_data *gfd = NULL; + struct rte_graph_feature *gf = NULL; + uint64_t 
bitmask; + uint32_t slot; + + feat_dbg("%s: Enabling feature: %s for index: %u", + arc->feature_arc_name, feature_name, index); + + if (!arc->runtime_enabled_features) + prepare_feature_arc_before_first_enable(arc); + + if (rte_graph_feature_validate(_arc, index, feature_name, 1, true)) + return -1; + + /** This should not fail as validate() has passed */ + if (feature_lookup(arc, feature_name, &finfo, &slot)) + RTE_VERIFY(0); + + active_list = rte_atomic_load_explicit(&arc->active_feature_list, + rte_memory_order_relaxed); + + passive_list = ARC_PASSIVE_LIST(active_list); + + feat_dbg("\t%s/%s: index: %u, passive list: %u, feature index: %u", + arc->feature_arc_name, feature_name, index, passive_list, slot); + + gf = __rte_graph_feature_get(arc, slot, passive_list); + gfd = rte_graph_feature_data_get(arc, gf, index); + + /* Reset feature list */ + feature_arc_list_reset(arc, passive_list); + + /* Copy user-data */ + copy_fastpath_user_data(arc, passive_list, active_list); + + /* Set current user-data */ + gfd->user_data = user_data; + + /* Set bitmask in control path bitmask */ + rte_bit_relaxed_set64(rte_graph_uint_cast(slot), &arc->feature_bit_mask_by_index[index]); + refill_feature_fastpath_data(arc, passive_list); + + /* If first time feature getting enabled */ + bitmask = rte_atomic_load_explicit(&arc->feature_enable_bitmask[active_list], + rte_memory_order_relaxed); + + /* On very first feature enable instance */ + if (!finfo->ref_count) + bitmask |= RTE_BIT64(slot); + + rte_atomic_store_explicit(&arc->feature_enable_bitmask[passive_list], + bitmask, rte_memory_order_relaxed); + + /* Slow path updates */ + arc->runtime_enabled_features++; + + /* Increase feature node info reference count */ + finfo->ref_count++; + + /* Store release semantics for active_list update */ + rte_atomic_store_explicit(&arc->active_feature_list, passive_list, + rte_memory_order_release); + + feat_dbg("%s/%s: After enable, switched active feature list to %u", + 
arc->feature_arc_name, feature_name, passive_list); + + return 0; +} + +int +rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, const char *feature_name) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + rte_graph_feature_rt_list_t passive_list, active_list; + struct rte_graph_feature_data *gfd = NULL; + struct rte_graph_feature_node_list *finfo = NULL; + struct rte_graph_feature *gf = NULL; + uint64_t bitmask; + uint32_t slot; + + feat_dbg("%s: Disable feature: %s for index: %u", + arc->feature_arc_name, feature_name, index); + + if (rte_graph_feature_validate(_arc, index, feature_name, 0, true)) + return -1; + + if (feature_lookup(arc, feature_name, &finfo, &slot)) + return -1; + + active_list = rte_atomic_load_explicit(&arc->active_feature_list, + rte_memory_order_relaxed); + + passive_list = ARC_PASSIVE_LIST(active_list); + + gf = __rte_graph_feature_get(arc, slot, passive_list); + gfd = rte_graph_feature_data_get(arc, gf, index); + + feat_dbg("\t%s/%s: index: %u, passive list: %u, feature index: %u", + arc->feature_arc_name, feature_name, index, passive_list, slot); + + rte_bit_relaxed_clear64(rte_graph_uint_cast(slot), &arc->feature_bit_mask_by_index[index]); + + /* Reset feature list */ + feature_arc_list_reset(arc, passive_list); + + /* Copy user-data */ + copy_fastpath_user_data(arc, passive_list, active_list); + + /* Reset current user-data */ + gfd->user_data = ~0; + + refill_feature_fastpath_data(arc, passive_list); + + finfo->ref_count--; + arc->runtime_enabled_features--; + + /* If no feature enabled, reset feature in u64 fast path bitmask */ + bitmask = rte_atomic_load_explicit(&arc->feature_enable_bitmask[active_list], + rte_memory_order_relaxed); + + /* When last feature is disabled */ + if (!finfo->ref_count) + bitmask &= ~(RTE_BIT64(slot)); + + rte_atomic_store_explicit(&arc->feature_enable_bitmask[passive_list], bitmask, + rte_memory_order_relaxed); + + /* Store release semantics for active_list 
update */ + rte_atomic_store_explicit(&arc->active_feature_list, passive_list, + rte_memory_order_release); + + feat_dbg("%s/%s: After disable, switched active feature list to %u", + arc->feature_arc_name, feature_name, passive_list); + + return 0; +} + +int +rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main; + struct rte_graph_feature_node_list *node_info = NULL; + + while (!STAILQ_EMPTY(&arc->all_features)) { + node_info = STAILQ_FIRST(&arc->all_features); + STAILQ_REMOVE_HEAD(&arc->all_features, next_feature); + rte_free(node_info); + } + feature_arc_list_destroy(arc, 0); + feature_arc_list_destroy(arc, 1); + + dm->feature_arcs[arc->feature_arc_index] = RTE_GRAPH_FEATURE_ARC_INITIALIZER; + + rte_free(arc); + + do_sanity_all_arcs(); + + return 0; +} + +int +rte_graph_feature_arc_cleanup(void) +{ + rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main; + uint32_t iter; + + if (!__rte_graph_feature_arc_main) + return -1; + + for (iter = 0; iter < dm->max_feature_arcs; iter++) { + if (dm->feature_arcs[iter] == RTE_GRAPH_FEATURE_ARC_INITIALIZER) + continue; + + rte_graph_feature_arc_destroy((rte_graph_feature_arc_t)dm->feature_arcs[iter]); + } + rte_free(dm); + + __rte_graph_feature_arc_main = NULL; + + return 0; +} + +int +rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc_t *_arc) +{ + rte_graph_feature_arc_main_t *dm = __rte_graph_feature_arc_main; + struct rte_graph_feature_arc *arc = NULL; + uint32_t iter; + + if (!__rte_graph_feature_arc_main) + return -1; + + if (_arc) + *_arc = RTE_GRAPH_FEATURE_ARC_INITIALIZER; + + for (iter = 0; iter < dm->max_feature_arcs; iter++) { + if (dm->feature_arcs[iter] == RTE_GRAPH_FEATURE_ARC_INITIALIZER) + continue; + + arc = rte_graph_feature_arc_get(dm->feature_arcs[iter]); + + if ((strstr(arc->feature_arc_name, arc_name)) && + 
(strlen(arc->feature_arc_name) == strlen(arc_name))) { + if (_arc) + *_arc = (rte_graph_feature_arc_t)arc; + return 0; + } + } + + return -1; +} + +uint32_t +rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + + return arc->runtime_enabled_features; +} + +uint32_t +rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + struct rte_graph_feature_node_list *finfo = NULL; + uint32_t count = 0; + + STAILQ_FOREACH(finfo, &arc->all_features, next_feature) + count++; + + return count; +} + +char * +rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + struct rte_graph_feature_node_list *finfo = NULL; + uint32_t slot = feat; + + if (feat >= rte_graph_feature_arc_num_features(_arc)) { + graph_err("%s: feature %u does not exist", arc->feature_arc_name, feat); + return NULL; + } + if (!feature_arc_node_info_lookup(arc, slot, &finfo, 0/* ignore sanity*/)) + return finfo->feature_node->name; + + return NULL; +} + +struct rte_node_register * +rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat) +{ + struct rte_graph_feature_arc *arc = rte_graph_feature_arc_get(_arc); + struct rte_graph_feature_node_list *finfo = NULL; + uint32_t slot = feat; + + if (feat >= rte_graph_feature_arc_num_features(_arc)) { + graph_err("%s: feature %u does not exist", arc->feature_arc_name, feat); + return NULL; + } + if (!feature_arc_node_info_lookup(arc, slot, &finfo, 0/* ignore sanity*/)) + return finfo->feature_node; + + return NULL; + +} diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 0cb15442ab..d916176fb7 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -14,11 +14,13 @@ sources = files( 'graph_debug.c', 'graph_stats.c', 
'graph_populate.c', + 'graph_feature_arc.c', 'graph_pcap.c', 'rte_graph_worker.c', 'rte_graph_model_mcore_dispatch.c', ) headers = files('rte_graph.h', 'rte_graph_worker.h') +headers += files('rte_graph_feature_arc.h', 'rte_graph_feature_arc_worker.h') indirect_headers += files( 'rte_graph_model_mcore_dispatch.h', 'rte_graph_model_rtc.h', diff --git a/lib/graph/rte_graph_feature_arc.h b/lib/graph/rte_graph_feature_arc.h new file mode 100644 index 0000000000..1615f8e1c8 --- /dev/null +++ b/lib/graph/rte_graph_feature_arc.h @@ -0,0 +1,431 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell International Ltd. + */ + +#ifndef _RTE_GRAPH_FEATURE_ARC_H_ +#define _RTE_GRAPH_FEATURE_ARC_H_ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @file + * + * rte_graph_feature_arc.h + * + * Define APIs and structures/variables with respect to feature arc + * + * - Feature arc(s) + * - Feature(s) + * + * A feature arc represents an ordered list of features/protocol-nodes at a + * given networking layer. Feature arc provides a high level abstraction to + * connect various *rte_graph* nodes, designated as *feature nodes*, and + * allowing steering of packets across these feature nodes fast path processing + * in a generic manner. In a typical network stack, often a protocol or feature + * must be first enabled on a given interface, before any packet is steered + * towards it for feature processing. For eg: incoming IPv4 packets are sent to + * routing sub-system only after a valid IPv4 address is assigned to the + * received interface. In other words, often packets needs to be steered across + * features not based on the packet content but based on whether a feature is + * enable or disable on a given incoming/outgoing interface. 
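The interface-based (rather than content-based) steering described above can be sketched with a toy per-interface bitmask; the `demo_*` names below are purely illustrative and not part of this API:

```c
#include <stdint.h>

#define DEMO_MAX_IFACES 4

enum demo_feature { DEMO_FEAT_ROUTE = 0, DEMO_FEAT_IPSEC = 1 };

/* one feature bitmask per interface */
static uint64_t demo_iface_feature_mask[DEMO_MAX_IFACES];

static void demo_feature_enable(uint32_t iface, enum demo_feature f)
{
	demo_iface_feature_mask[iface] |= (UINT64_C(1) << f);
}

static void demo_feature_disable(uint32_t iface, enum demo_feature f)
{
	demo_iface_feature_mask[iface] &= ~(UINT64_C(1) << f);
}

/* the steering decision depends only on the receiving interface,
 * not on the packet content */
static int demo_feature_enabled(uint32_t iface, enum demo_feature f)
{
	return (demo_iface_feature_mask[iface] >> f) & 1;
}
```

A packet arriving on `iface` would be sent to the IPsec feature node only while `demo_feature_enabled(iface, DEMO_FEAT_IPSEC)` holds.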
Feature arc + * provides a mechanism to enable/disable features on each interface at runtime + * and allows seamless packet steering across runtime-enabled feature nodes in + * fast path. + * + * Feature arc also provides a way to steer packets from standard nodes to + * custom/user-defined *feature nodes* without any change in the standard node's + * fast path functions. + * + * On a given interface, multiple features might be enabled in a particular + * feature arc. For instance, both "ipv4-output" and "IPsec policy output" + * features may be enabled on the "eth0" interface in the "L3-output" feature arc. + * Similarly, "ipv6-output" and "ipsec-output" may be enabled on the "eth1" + * interface in the same "L3-output" feature arc. + * + * When multiple features are present in a given feature arc, it is imperative + * to process each feature in a particular sequential order. For + * instance, in the "L3-input" feature arc it may be required to run the "IPsec + * input" feature first, for packet decryption, before "ip-lookup". So a + * sequential order must be maintained among the features present in a feature arc. + * + * Features can be enabled/disabled multiple times at runtime on some or all + * available interfaces present in the system. Enabling/disabling features on one + * interface is independent of other interfaces. + * + * A given feature might consume a packet (if it is configured to consume) or may + * forward it to the next enabled feature.
For instance, the "IPsec input" feature may + * consume/drop all packets with the "Protect" policy action, while all packets with + * the "Bypass" policy action may be forwarded to the next enabled feature (within the + * same feature arc). + * + * This library enables rte_graph based applications to steer packets in + * fast path to different feature nodes within a feature arc and supports all + * functionalities described above. + * + * In order to use feature-arc APIs, applications need to do the following in + * control path: + * - Initialize the feature arc library via rte_graph_feature_arc_init() + * - Create a feature arc via rte_graph_feature_arc_create() + * - *Before calling rte_graph_create()*, features must be added to the feature arc + * via rte_graph_feature_add(). rte_graph_feature_add() allows adding + * features in a sequential order with "runs_after" and "runs_before" + * constraints. + * - Post rte_graph_create(), features can be enabled/disabled at runtime on + * any interface via rte_graph_feature_enable()/rte_graph_feature_disable() + * - A feature arc can be destroyed via rte_graph_feature_arc_destroy() + * + * In fast path, APIs are provided to steer packets towards the feature path from + * - start_node (provided as an argument to rte_graph_feature_arc_create()) + * - feature nodes (which are added via rte_graph_feature_add()) + * + * For typical steering of packets across feature nodes, the application needs + * to know the "rte_edges" which are saved in the feature data object. The feature data + * object is unique for every interface per feature within a feature arc. + * + * When steering packets from start_node to a feature node: + * - rte_graph_feature_arc_first_feature_get() provides the first enabled feature.
+ * - The next rte_edge from start_node to the first enabled feature can be obtained via + * rte_graph_feature_arc_feature_set() + * + * rte_mbuf can carry [current feature, interface index] from the start_node of an + * arc to other feature nodes. + * + * At the time of feature enable (rte_graph_feature_enable()), the application can set + * a unique 32-bit user_data specific to the feature per interface. In fast path, + * user_data can be retrieved via rte_graph_feature_user_data_get(). User data + * can hold an application specific cookie like an IPsec policy database index, FIB + * table index etc. + * + * If a feature node is not consuming the packet, the next enabled feature and next + * rte_edge can be obtained via rte_graph_feature_arc_next_feature_get() + * + * It is the application's responsibility to ensure that at least the *last feature* (or + * sink feature) is enabled, from where the packet can exit the feature-arc path, + * if *NO* intermediate feature consumes the packet and it reaches + * the end of the feature arc path. + * + * All features *MUST* be added to a feature arc before + * calling `rte_graph_create()`. Addition of features after + * `rte_graph_create()` may not work functionally. Note that + * rte_graph_feature_enable()/rte_graph_feature_disable() must be + * called after `rte_graph_create()` in the control plane. + * + * Synchronization among cores + * --------------------------- + * Subsequent calls to rte_graph_feature_enable() are allowed while worker cores + * are processing in the rte_graph_walk() loop.
However, for + * rte_graph_feature_disable() the application must use RCU-based synchronization. + */ + +/** Initializer value for rte_graph_feature_arc_t */ +#define RTE_GRAPH_FEATURE_ARC_INITIALIZER ((rte_graph_feature_arc_t)UINT64_MAX) + +/** Max number of feature arcs which can be created */ +#define RTE_GRAPH_FEATURE_ARC_MAX 64 + +/** Max number of features supported in a given feature arc */ +#define RTE_GRAPH_FEATURE_MAX_PER_ARC 64 + +/** Length of feature arc name */ +#define RTE_GRAPH_FEATURE_ARC_NAMELEN RTE_NODE_NAMESIZE + +/** @internal */ +#define rte_graph_feature_cast(x) ((rte_graph_feature_t)x) + +/** Invalid value for rte_graph_feature_t */ +#define RTE_GRAPH_FEATURE_INVALID rte_graph_feature_cast(UINT8_MAX) + +/** rte_graph feature arc object */ +typedef uintptr_t rte_graph_feature_arc_t; + +/** rte_graph feature object */ +typedef uint8_t rte_graph_feature_t; + +/** runtime active feature list index within a feature arc */ +typedef uint16_t rte_graph_feature_rt_list_t; + +/** per feature arc monotonically increasing counter to synchronize fast path APIs */ +typedef uint16_t rte_graph_feature_counter_t; + +/** + * Initialize feature arc subsystem + * + * @param max_feature_arcs + * Maximum number of feature arcs required to be supported + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_arc_init(int max_feature_arcs); + +/** + * Create a feature arc + * + * @param feature_arc_name + * Feature arc name with max length of @ref RTE_GRAPH_FEATURE_ARC_NAMELEN + * @param max_features + * Maximum number of features to be supported in this feature arc + * @param max_indexes + * Maximum number of interfaces/ports/indexes to be supported + * @param start_node + * Base node where this feature arc's features are checked in fast path + * @param[out] _arc + * Feature arc object + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_arc_create(const char *feature_arc_name, int
max_features, int max_indexes, + struct rte_node_register *start_node, + rte_graph_feature_arc_t *_arc); + +/** + * Get feature arc object by name + * + * @param arc_name + * Feature arc name provided to a successful @ref rte_graph_feature_arc_create + * @param[out] _arc + * Feature arc object returned. Valid only when API returns SUCCESS + * + * @return + * 0: Success + * <0: Failure. + */ +__rte_experimental +int rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc_t *_arc); + +/** + * Add a feature to an already created feature arc. For instance + * + * 1. Add first feature node: "ipv4-input" to input arc + * rte_graph_feature_add(ipv4_input_arc, "ipv4-input", NULL, NULL); + * + * 2. Add "ipsec-input" feature node after "ipv4-input" feature + * rte_graph_feature_add(ipv4_input_arc, "ipsec-input", "ipv4-input", NULL); + * + * 3. Add "ipv4-pre-classify-input" node before "ipv4-input" feature + * rte_graph_feature_add(ipv4_input_arc, "ipv4-pre-classify-input", NULL, "ipv4-input"); + * + * 4. Add "acl-classify-input" node after ipv4-input but before ipsec-input + * rte_graph_feature_add(ipv4_input_arc, "acl-classify-input", "ipv4-input", "ipsec-input"); + * + * @param _arc + * Feature arc handle returned from @ref rte_graph_feature_arc_create() + * @param feature_node + * Graph node representing the feature. On success, feature_node is a next_node of + * feature_arc->start_node + * @param runs_after + * Add this feature_node after already added "runs_after". Creates + * start_node -> runs_after -> this_feature sequence + * @param runs_before + * Add this feature_node before already added "runs_before".
Creates + * start_node -> this_feature -> runs_before sequence + * + * Must be called before rte_graph_create() + * rte_graph_feature_add() is not allowed after call to + * rte_graph_feature_enable() so all features must be added before they can be + * enabled + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_add(rte_graph_feature_arc_t _arc, struct rte_node_register *feature_node, + const char *runs_after, const char *runs_before); + +/** + * Enable feature within a feature arc + * + * Must be called after @b rte_graph_create(). + * + * @param _arc + * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref + * rte_graph_feature_arc_lookup_by_name + * @param index + * Application specific index. Can be corresponding to interface_id/port_id etc + * @param feature_name + * Name of the node which is already added via @ref rte_graph_feature_add + * @param user_data + * Application specific data which is retrieved in fast path + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index, const char *feature_name, + int32_t user_data); + +/** + * Validate whether subsequent enable/disable feature would succeed or not. + * API is thread-safe + * + * @param _arc + * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref + * rte_graph_feature_arc_lookup_by_name + * @param index + * Application specific index. 
Can be corresponding to interface_id/port_id etc + * @param feature_name + * Name of the node which is already added via @ref rte_graph_feature_add + * @param is_enable_disable + * If 1, validate whether subsequent @ref rte_graph_feature_enable would pass or not + * If 0, validate whether subsequent @ref rte_graph_feature_disable would pass or not + * @param emit_logs + * If passed true, emit error logs when failure is returned + * If passed false, do not emit error logs when failure is returned + * + * @return + * 0: Subsequent enable/disable API would pass + * <0: Subsequent enable/disable API would not pass + */ +__rte_experimental +int rte_graph_feature_validate(rte_graph_feature_arc_t _arc, uint32_t index, + const char *feature_name, int is_enable_disable, bool emit_logs); + +/** + * Disable already enabled feature within a feature arc + * + * Must be called after @b rte_graph_create(). API is *NOT* Thread-safe + * + * @param _arc + * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref + * rte_graph_feature_arc_lookup_by_name + * @param index + * Application specific index. 
Can be corresponding to interface_id/port_id etc + * @param feature_name + * Name of the node which is already added via @ref rte_graph_feature_add + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, + const char *feature_name); + +/** + * Get rte_graph_feature_t object from feature name + * + * @param arc + * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref + * rte_graph_feature_arc_lookup_by_name + * @param feature_name + * Feature name provided to @ref rte_graph_feature_add + * @param[out] feature + * Feature object + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_lookup(rte_graph_feature_arc_t arc, const char *feature_name, + rte_graph_feature_t *feature); + +/** + * Delete feature_arc object + * + * @param _arc + * Feature arc object returned by @ref rte_graph_feature_arc_create or @ref + * rte_graph_feature_arc_lookup_by_name + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc); + +/** + * Cleanup all feature arcs + * + * @return + * 0: Success + * <0: Failure + */ +__rte_experimental +int rte_graph_feature_arc_cleanup(void); + +/** + * Slow path API to know how many features are added (NOT enabled) within a + * feature arc + * + * @param _arc + * Feature arc object + * + * @return: Number of added features to arc + */ +__rte_experimental +uint32_t rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc); + +/** + * Slow path API to know how many features are currently enabled within a + * feature arc across all indexes. 
If a single feature is enabled on all interfaces, + * this API would return "number_of_interfaces" as count (but not "1") + * + * @param _arc + * Feature arc object + * + * @return: Number of enabled features across all indexes + */ +__rte_experimental +uint32_t rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc); + +/** + * Slow path API to get feature node name from rte_graph_feature_t object + * + * @param _arc + * Feature arc object + * @param feature + * Feature object + * + * @return: Name of the feature node + */ +__rte_experimental +char *rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, + rte_graph_feature_t feature); + +/** + * Slow path API to get corresponding struct rte_node_register * from + * rte_graph_feature_t + * + * @param _arc + * Feature arc object + * @param feature + * Feature object + * + * @return: struct rte_node_register * of feature node on SUCCESS else NULL + */ +__rte_experimental +struct rte_node_register * +rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, + rte_graph_feature_t feature); + + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/lib/graph/rte_graph_feature_arc_worker.h b/lib/graph/rte_graph_feature_arc_worker.h new file mode 100644 index 0000000000..9b720e366c --- /dev/null +++ b/lib/graph/rte_graph_feature_arc_worker.h @@ -0,0 +1,679 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell International Ltd. 
+ */ + +#ifndef _RTE_GRAPH_FEATURE_ARC_WORKER_H_ +#define _RTE_GRAPH_FEATURE_ARC_WORKER_H_ + +#include +#include +#include + +/** + * @file + * + * rte_graph_feature_arc_worker.h + * + * Defines fast path structure + */ + +#ifdef __cplusplus +extern "C" { +#endif + +/** @internal + * + * Slow path feature node info list + */ +struct rte_graph_feature_node_list { + /** Next feature */ + STAILQ_ENTRY(rte_graph_feature_node_list) next_feature; + + /** node representing feature */ + struct rte_node_register *feature_node; + + /** How many indexes/interfaces using this feature */ + int32_t ref_count; + + /* node_index in list (after feature_enable())*/ + uint32_t node_index; + + /** Back pointer to feature arc */ + void *feature_arc; + + /** rte_edge_t to this feature node from feature_arc->start_node */ + rte_edge_t edge_to_this_feature; +}; + +/** + * Feature data object: + * + * Feature data stores information to steer packets for: + * - a feature with in feature arc + * - Index i.e. Port/Interface index + * + * Each feature data object holds + * - User data of current feature retrieved via rte_graph_feature_user_data_get() + * - next_edge is used in two conditions when packet to be steered from + * -- start_node to first enabled feature on an interface index + * -- current feature node to next enabled feature on an interface index + * - next_enabled_feature on interface index, if current feature is not + * consuming packet + * + * While user_data corresponds to current enabled feature node however + * next_edge and next_enabled_feature corresponds to next enabled feature + * node on an interface index + * + * First enabled feature on interface index can be retrieved via: + * - rte_graph_feature_first_feature_get() if arc's start_node is trying to + * steer packet from start_node to first enabled feature on interface index + * + * Next enabled feature on interface index can be retrieved via: + * - rte_graph_feature_next_feature_get() if current node is not arc's + * 
start_node. Input to rte_graph_feature_next_feature_get() is the current + * enabled feature and interface index + */ +typedef struct __rte_packed rte_graph_feature_data { + /** edge from current node to next enabled feature */ + rte_edge_t next_edge; + + union { + uint16_t reserved; + struct { + /** next enabled feature on index from current feature */ + rte_graph_feature_t next_enabled_feature; + }; + }; + + /** user_data set by application in rte_graph_feature_enable() for + * - current feature + * - interface index + */ + int32_t user_data; +} rte_graph_feature_data_t; + +/** + * Feature object + * + * Feature object holds a feature data object for every index/interface within + * the feature + * + * Within a given arc and interface index, the first feature object can be + * retrieved in the arc's start_node via: + * - rte_graph_feature_arc_first_feature_get() + * + * Feature data information can be retrieved for the first feature in start node via + * - rte_graph_feature_arc_feature_set() + * + * Next enabled feature on interface index can be retrieved via: + * - rte_graph_feature_arc_next_feature_get() + * + * Typically, the application stores the rte_graph_feature_t object in rte_mbuf. + * rte_graph_feature_t can be translated to (struct rte_graph_feature *) via + * rte_graph_feature_get() in fast path. Further, if needed, feature data for an + * index within a feature can be retrieved via rte_graph_feature_data_get() + */ +struct __rte_cache_aligned rte_graph_feature { + /** feature index or rte_graph_feature_t */ + uint16_t this_feature_index; + + /* + * Array of size arc->feature_data_size + * + * <----------------- Feature --------------------------> + * [data-index-0][data-index-1]...[data-index-max_index-1] + * + * sizeof(feature_data_by_index[0]) == sizeof(rte_graph_feature_data_t) + * + */ + uint8_t feature_data_by_index[]; +}; + +/** + * Feature list object + * + * Feature list is required to decouple fast path APIs from control path APIs.
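The decoupling relies on a double-buffered (active/passive) pair of lists. A minimal sketch of that publish pattern with C11 atomics follows; the `demo_*` types are hypothetical stand-ins, not the library's internals:

```c
#include <stdatomic.h>
#include <stdint.h>

struct demo_arc {
	_Atomic uint16_t active_list;   /* 0 or 1 */
	uint64_t enable_bitmask[2];     /* one copy per list */
};

/* control plane: rebuild the passive copy, then publish it atomically
 * with release semantics so fast path readers see a consistent copy */
static void demo_enable_feature(struct demo_arc *arc, unsigned int slot)
{
	uint16_t active = atomic_load_explicit(&arc->active_list,
					       memory_order_relaxed);
	uint16_t passive = active ^ 1;

	arc->enable_bitmask[passive] =
		arc->enable_bitmask[active] | (UINT64_C(1) << slot);
	atomic_store_explicit(&arc->active_list, passive,
			      memory_order_release);
}

/* fast path: snapshot the active index once per iteration */
static uint64_t demo_snapshot_bitmask(struct demo_arc *arc)
{
	uint16_t active = atomic_load_explicit(&arc->active_list,
					       memory_order_acquire);
	return arc->enable_bitmask[active];
}
```

Because fast path cores work on the snapshot taken at the start of an iteration, a control-plane switch never changes the data mid-iteration; it becomes visible in the next one.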
+ * + * There are two feature lists: active and passive. + * The passive list is a duplicate of the active list in terms of memory. + * + * Fast path APIs always work on the active list while the control plane works on the + * passive list. When the control plane needs to enable/disable any feature, it + * populates the passive list afresh and atomically switches the passive list to the + * active list to make it available to fast path APIs. + * + * Each feature node, at the start of its fast path function, must grab the active + * list from the arc via + * - rte_graph_feature_arc_has_any_feature() or + * rte_graph_feature_arc_has_feature() + * + * The retrieved list must be provided to other feature arc fast path APIs so that + * any control plane changes to the active list do not impact the current node's + * execution iteration. An active list change would be reflected to the current node + * in the next iteration. + * + * With active/passive lists and an RCU mechanism in the graph worker + * loop, the application can update features at runtime without stopping fast path + * cores. An RCU synchronization is required when a feature needs to be + * disabled via rte_graph_feature_disable(). On enabling a feature, RCU + * synchronization may not be required. + * + */ +typedef struct __rte_cache_aligned rte_graph_feature_list { + /** + * fast path array holding per_feature data. + * Duplicate entry as feature-arc also holds this pointer + * arc->features[] + * + *<-------------feature-0 ---------><---------feature-1 -------------->... + *[index-0][index-1]...[max_index-1]<-ALIGN->[index-0][index-1] ...[max_index-1]... + */ + struct rte_graph_feature *indexed_by_features; + /* + * fast path array holding first enabled feature per index + * (Required in start_node.
In non start_node, mbuf can hold next enabled + * feature) + */ + rte_graph_feature_t first_enabled_feature_by_index[]; +} rte_graph_feature_list_t; + +/** + * rte_graph Feature arc object + * + * Feature arc object holds control plane and fast path information for all + * features and all interface index information for steering packets across + * feature nodes + * + * Within a feature arc, only RTE_GRAPH_FEATURE_MAX_PER_ARC features can be + * added. If more features needs to be added, another feature arc can be + * created + * + * Application gets rte_graph_feature_arc_t object via + * - rte_graph_feature_arc_create() OR + * - rte_graph_feature_arc_lookup_by_name() + * + * In fast path, rte_graph_feature_arc_t can be translated to (struct + * rte_graph_feature_arc *) via rte_graph_feature_arc_get(). Later is needed to + * add as an input argument to all fast path feature arc APIs + */ +struct __rte_cache_aligned rte_graph_feature_arc { + /* First 64B is fast path variables */ + RTE_MARKER fast_path_variables; + + /** runtime active feature list */ + RTE_ATOMIC(rte_graph_feature_rt_list_t) active_feature_list; + + /** Actual Size of feature_list object */ + uint16_t feature_list_size; + + /** + * Size each feature in fastpath. + * Required to navigate from feature to another feature in fast path + */ + uint16_t feature_size; + + /** + * Size of all feature data for an index + * Required to navigate through various feature data within a feature + * in fast path + */ + uint16_t feature_data_size; + + /** + * Quick fast path bitmask indicating if any feature enabled or not on + * any of the indexes. 
Helps in optimally process packets for the case + * when features are added but not enabled + * + * Separate for active and passive list + */ + RTE_ATOMIC(uint64_t) feature_enable_bitmask[2]; + + /** + * Pointer to both active and passive feature list object + */ + rte_graph_feature_list_t *feature_list[2]; + + /** + * Feature objects for each list + */ + struct rte_graph_feature *features[2]; + + /** index in feature_arc_main */ + uint16_t feature_arc_index; + + uint16_t reserved[3]; + + /** Slow path variables follows*/ + RTE_MARKER slow_path_variables; + + /** feature arc name */ + char feature_arc_name[RTE_GRAPH_FEATURE_ARC_NAMELEN]; + + /** All feature lists */ + STAILQ_HEAD(, rte_graph_feature_node_list) all_features; + + /** control plane counter to track enabled features */ + uint32_t runtime_enabled_features; + + /** Back pointer to feature_arc_main */ + void *feature_arc_main; + + /** Arc's start/base node */ + struct rte_node_register *start_node; + + /** maximum number of features supported by this arc */ + uint32_t max_features; + + /** maximum number of index supported by this arc */ + uint32_t max_indexes; + + /** Slow path bit mask per feature per index */ + uint64_t feature_bit_mask_by_index[]; +}; + +/** + * Feature arc main object + * + * Holds all feature arcs created by application + * + * RTE_GRAPH_FEATURE_ARC_MAX number of feature arcs can be created by + * application via rte_graph_feature_arc_create() + */ +typedef struct feature_arc_main { + /** number of feature arcs created by application */ + uint32_t num_feature_arcs; + + /** max features arcs allowed */ + uint32_t max_feature_arcs; + + /** feature arcs */ + rte_graph_feature_arc_t feature_arcs[]; +} rte_graph_feature_arc_main_t; + +/** @internal Get feature arc pointer from object */ +#define rte_graph_feature_arc_get(arc) ((struct rte_graph_feature_arc *)arc) + +extern rte_graph_feature_arc_main_t *__feature_arc_main; + +/** + * API to know if feature is valid or not + */ 
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_is_valid(rte_graph_feature_t feature)
+{
+	return (feature != RTE_GRAPH_FEATURE_INVALID);
+}
+
+/**
+ * Get rte_graph_feature object with no checks
+ *
+ * @param arc
+ *   Feature arc pointer
+ * @param feature
+ *   Feature index
+ * @param feature_list
+ *   Active feature list retrieved from rte_graph_feature_arc_has_any_feature()
+ *   or rte_graph_feature_arc_has_feature()
+ *
+ * @return
+ *   Internal feature object.
+ */
+__rte_experimental
+static __rte_always_inline struct rte_graph_feature *
+__rte_graph_feature_get(struct rte_graph_feature_arc *arc, rte_graph_feature_t feature,
+			const rte_graph_feature_rt_list_t feature_list)
+{
+	return ((struct rte_graph_feature *)(((uint8_t *)arc->features[feature_list]) +
+					     (feature * arc->feature_size)));
+}
+
+/**
+ * Get rte_graph_feature object for a given interface/index from feature arc
+ *
+ * @param arc
+ *   Feature arc pointer
+ * @param feature
+ *   Feature index
+ *
+ * @return
+ *   Internal feature object.
+ */
+__rte_experimental
+static __rte_always_inline struct rte_graph_feature *
+rte_graph_feature_get(struct rte_graph_feature_arc *arc, rte_graph_feature_t feature)
+{
+	rte_graph_feature_rt_list_t list;
+
+	if (unlikely(feature >= arc->max_features))
+		RTE_VERIFY(0);
+
+	if (likely(rte_graph_feature_is_valid(feature))) {
+		list = rte_atomic_load_explicit(&arc->active_feature_list,
+						rte_memory_order_relaxed);
+		return __rte_graph_feature_get(arc, feature, list);
+	}
+
+	return NULL;
+}
+
+__rte_experimental
+static __rte_always_inline rte_graph_feature_data_t *
+__rte_graph_feature_data_get(struct rte_graph_feature_arc *arc, struct rte_graph_feature *feature,
+			     uint8_t index)
+{
+	RTE_SET_USED(arc);
+	return ((rte_graph_feature_data_t *)(((uint8_t *)feature->feature_data_by_index) +
+					     (index * sizeof(rte_graph_feature_data_t))));
+}
+
+/**
+ * Get rte_graph feature data object for an index in feature
+ *
+ * @param arc
+ *   Feature arc
+ * @param feature
+ *   Pointer to feature object
+ * @param index
+ *   Index of feature maintained in slow path linked list
+ *
+ * @return
+ *   Valid feature data
+ */
+__rte_experimental
+static __rte_always_inline rte_graph_feature_data_t *
+rte_graph_feature_data_get(struct rte_graph_feature_arc *arc, struct rte_graph_feature *feature,
+			   uint8_t index)
+{
+	if (likely(index < arc->max_indexes))
+		return __rte_graph_feature_data_get(arc, feature, index);
+
+	RTE_VERIFY(0);
+}
+
+/**
+ * Fast path API to check if any feature is enabled on a feature arc
+ * Typically called from arc->start_node process function
+ *
+ * @param arc
+ *   Feature arc object
+ * @param[out] plist
+ *   Pointer to runtime active feature list which needs to be provided to other
+ *   fast path APIs
+ *
+ * @return
+ *   0: If no feature is enabled
+ *   Non-zero: Bitmask of features enabled.
+ *   plist is valid
+ */
+__rte_experimental
+static __rte_always_inline uint64_t
+rte_graph_feature_arc_has_any_feature(struct rte_graph_feature_arc *arc,
+				      rte_graph_feature_rt_list_t *plist)
+{
+	*plist = rte_atomic_load_explicit(&arc->active_feature_list, rte_memory_order_relaxed);
+
+	return (rte_atomic_load_explicit(arc->feature_enable_bitmask + (uint8_t)*plist,
+					 rte_memory_order_relaxed));
+}
+
+/**
+ * Fast path API to check whether a given feature is enabled on any
+ * interface/index or not
+ *
+ * @param arc
+ *   Feature arc object
+ * @param feature
+ *   Input rte_graph_feature_t that needs to be checked
+ * @param[out] plist
+ *   Returns active list to caller which needs to be provided to other fast path
+ *   APIs
+ *
+ * @return
+ *   1: If input [feature] is enabled in arc
+ *   0: If input [feature] is not enabled in arc
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_arc_has_feature(struct rte_graph_feature_arc *arc,
+				  rte_graph_feature_t feature,
+				  rte_graph_feature_rt_list_t *plist)
+{
+	uint64_t bitmask = RTE_BIT64(feature);
+
+	*plist = rte_atomic_load_explicit(&arc->active_feature_list, rte_memory_order_relaxed);
+
+	/* !! avoids truncation to 0 when casting the 64-bit result to int
+	 * for features >= 32
+	 */
+	return !!(bitmask & rte_atomic_load_explicit(arc->feature_enable_bitmask + (uint8_t)*plist,
+						     rte_memory_order_relaxed));
+}
+
+/**
+ * Prefetch feature arc fast path cache line
+ *
+ * @param arc
+ *   RTE_GRAPH feature arc object
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_graph_feature_arc_prefetch(struct rte_graph_feature_arc *arc)
+{
+	rte_prefetch0((void *)&arc->fast_path_variables);
+}
+
+/**
+ * Prefetch feature related fast path cache line
+ *
+ * @param arc
+ *   RTE_GRAPH feature arc object
+ * @param list
+ *   Runtime active feature list from rte_graph_feature_arc_has_any_feature()
+ * @param feature
+ *   Feature index
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_graph_feature_arc_feature_prefetch(struct rte_graph_feature_arc *arc,
+				       const rte_graph_feature_rt_list_t list,
+				       rte_graph_feature_t feature)
+{
+	/* feature cache line */
+	if (likely(rte_graph_feature_is_valid(feature)))
+		rte_prefetch0((void *)__rte_graph_feature_get(arc, feature, list));
+}
+
+/**
+ * Prefetch feature data upfront. Performs a sanity check on [feature]
+ *
+ * @param arc
+ *   RTE_GRAPH feature arc object
+ * @param list
+ *   Runtime active feature list from rte_graph_feature_arc_has_any_feature()
+ * @param feature
+ *   Feature index returned from @ref
+ *   rte_graph_feature_arc_first_feature_get()
+ * @param index
+ *   Interface/index
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_graph_feature_arc_data_prefetch(struct rte_graph_feature_arc *arc,
+				    const rte_graph_feature_rt_list_t list,
+				    rte_graph_feature_t feature, uint32_t index)
+{
+	if (likely(rte_graph_feature_is_valid(feature)))
+		rte_prefetch0((void *)((uint8_t *)arc->features[list] +
+			       offsetof(struct rte_graph_feature, feature_data_by_index) +
+			       (index * sizeof(rte_graph_feature_data_t))));
+}
+
+/**
+ * Fast path API to get first enabled feature on interface index
+ * Typically required in arc->start_node so that, from the returned feature,
+ * feature-data can be retrieved to steer packets
+ *
+ * @param arc
+ *   Feature arc object
+ * @param list
+ *   Runtime active feature list from
+ *   rte_graph_feature_arc_has_any_feature() or
+ *   rte_graph_feature_arc_has_feature()
+ * @param index
+ *   Interface index
+ * @param[out] feature
+ *   Pointer to rte_graph_feature_t
+ *
+ * @return
+ *   1: Success. First feature field is enabled and returned [feature] is valid
+ *   0: Failure.
+ *   First feature field is disabled in arc
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_arc_first_feature_get(struct rte_graph_feature_arc *arc,
+					const rte_graph_feature_rt_list_t list,
+					uint32_t index,
+					rte_graph_feature_t *feature)
+{
+	struct rte_graph_feature_list *feature_list = arc->feature_list[list];
+
+	*feature = feature_list->first_enabled_feature_by_index[index];
+
+	return rte_graph_feature_is_valid(*feature);
+}
+
+/**
+ * Fast path API to get next enabled feature on interface index with provided
+ * input feature
+ *
+ * @param arc
+ *   Feature arc object
+ * @param list
+ *   Runtime active feature list from
+ *   rte_graph_feature_arc_has_any_feature() or
+ *   rte_graph_feature_arc_has_feature()
+ * @param index
+ *   Interface index
+ * @param[out] feature
+ *   Pointer to rte_graph_feature_t. API sets next enabled feature on [index]
+ *   from provided input feature. Valid only if API returns Success
+ * @param[out] next_edge
+ *   Edge from current feature to next feature. Valid only if next feature is valid
+ *
+ * @return
+ *   1: Success. Next feature field is enabled/valid
+ *   0: Failure. Next feature field is disabled/invalid
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_graph_feature_arc_next_feature_get(struct rte_graph_feature_arc *arc,
+				       const rte_graph_feature_rt_list_t list,
+				       uint32_t index,
+				       rte_graph_feature_t *feature,
+				       rte_edge_t *next_edge)
+{
+	rte_graph_feature_data_t *feature_data = NULL;
+	struct rte_graph_feature *f = NULL;
+
+	if (likely(rte_graph_feature_is_valid(*feature))) {
+		f = __rte_graph_feature_get(arc, *feature, list);
+		feature_data = rte_graph_feature_data_get(arc, f, index);
+		*feature = feature_data->next_enabled_feature;
+		*next_edge = feature_data->next_edge;
+		return rte_graph_feature_is_valid(*feature);
+	}
+
+	return 0;
+}
+
+/**
+ * Set fields with respect to first enabled feature in an arc and return edge
+ * Typically returned feature and interface index must be saved in rte_mbuf
+ * structure to pass this information to next feature node
+ *
+ * @param arc
+ *   Feature arc object
+ * @param list
+ *   Runtime active feature list from rte_graph_feature_arc_has_any_feature()
+ * @param index
+ *   Index (of interface)
+ * @param[out] gf
+ *   Pointer to rte_graph_feature_t. Valid if API returns Success
+ * @param[out] edge
+ *   Edge to steer packet from arc->start_node to first enabled feature. Valid
+ *   only if API returns Success
+ *
+ * @return
+ *   0: If valid feature is enabled and set by API in *gf
+ *   1: If valid feature is NOT enabled
+ */
+__rte_experimental
+static __rte_always_inline rte_graph_feature_t
+rte_graph_feature_arc_feature_set(struct rte_graph_feature_arc *arc,
+				  const rte_graph_feature_rt_list_t list,
+				  uint32_t index,
+				  rte_graph_feature_t *gf,
+				  rte_edge_t *edge)
+{
+	struct rte_graph_feature_list *feature_list = arc->feature_list[list];
+	struct rte_graph_feature_data *feature_data = NULL;
+	struct rte_graph_feature *feature = NULL;
+	rte_graph_feature_t f;
+
+	f = feature_list->first_enabled_feature_by_index[index];
+
+	if (likely(rte_graph_feature_is_valid(f))) {
+		feature = __rte_graph_feature_get(arc, f, list);
+		feature_data = rte_graph_feature_data_get(arc, feature, index);
+		*gf = f;
+		*edge = feature_data->next_edge;
+		return 0;
+	}
+
+	return 1;
+}
+
+__rte_experimental
+static __rte_always_inline int32_t
+__rte_graph_feature_user_data_get(rte_graph_feature_data_t *fdata)
+{
+	return fdata->user_data;
+}
+
+/**
+ * Get user data corresponding to current feature set by application in
+ * rte_graph_feature_enable()
+ *
+ * @param arc
+ *   Feature arc object
+ * @param list
+ *   Runtime active feature list from rte_graph_feature_arc_has_any_feature()
+ * @param feature
+ *   Feature index
+ * @param index
+ *   Interface index
+ *
+ * @return
+ *   -1: Failure
+ *   Valid user data: Success
+ */
+__rte_experimental
+static __rte_always_inline int32_t
+rte_graph_feature_user_data_get(struct rte_graph_feature_arc *arc,
+				const rte_graph_feature_rt_list_t list,
+				rte_graph_feature_t feature,
+				uint32_t index)
+{
+	rte_graph_feature_data_t *fdata = NULL;
+	struct rte_graph_feature *f = NULL;
+
+	if (likely(rte_graph_feature_is_valid(feature))) {
+		f = __rte_graph_feature_get(arc, feature, list);
+		fdata = rte_graph_feature_data_get(arc, f, index);
+		return __rte_graph_feature_user_data_get(fdata);
+	}
+
+	return -1;
+}
+#ifdef __cplusplus
+}
+#endif
+#endif
diff --git a/lib/graph/version.map b/lib/graph/version.map
index 2c83425ddc..3b7f475afd 100644
--- a/lib/graph/version.map
+++ b/lib/graph/version.map
@@ -52,3 +52,23 @@ DPDK_25 {
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	# added in 24.11
+	rte_graph_feature_arc_init;
+	rte_graph_feature_arc_create;
+	rte_graph_feature_arc_lookup_by_name;
+	rte_graph_feature_add;
+	rte_graph_feature_enable;
+	rte_graph_feature_validate;
+	rte_graph_feature_disable;
+	rte_graph_feature_lookup;
+	rte_graph_feature_arc_destroy;
+	rte_graph_feature_arc_cleanup;
+	rte_graph_feature_arc_num_enabled_features;
+	rte_graph_feature_arc_num_features;
+	rte_graph_feature_arc_feature_to_name;
+	rte_graph_feature_arc_feature_to_node;
+};
-- 
2.43.0