From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id BA30A42C71;
	Fri,  9 Jun 2023 21:21:25 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 89AF242D69;
	Fri,  9 Jun 2023 21:20:26 +0200 (CEST)
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by mails.dpdk.org (Postfix) with ESMTP id 5DF7B42D77
 for <dev@dpdk.org>; Fri,  9 Jun 2023 21:20:24 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
 d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
 t=1686338424; x=1717874424;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=mr9iwvNxhZ7hGShL923CB2ZRxK/tbZrY7A7RG2Y8TbA=;
 b=BKuvwAv2e0bdkpQ7Xvh49Fm09NHmQWBppAiaTnn6CxGr3YaBatCL03GJ
 4/+S05IKUw9DnAvt+TXuXJpS1GS5QWSzUeRe8ePgXK5QthOkhr0h0zknZ
 oglzmBsMqRrD0MoXcTGY8oUltaetQtNk+O0M0NpALQJLDvIjz/jAkv41i
 G9BPIBPvuv2IccBL8h1U3M93ASL4EmcZVd6SK1zM77aD1QMabdGfH2WRo
 CuNOpo008pn2mgFfQz0X/ihmZoZDn16m9qa4mQCvTx9idUrjpaP4wJUQq
 Zg3DYGAw0luhZIw2D/4/a5XOn7o1sm/r0eQ43nSRPzJDqH7XGBPqICPpk Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10736"; a="360155115"
X-IronPort-AV: E=Sophos;i="6.00,230,1681196400"; d="scan'208";a="360155115"
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Jun 2023 12:20:23 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10736"; a="957254692"
X-IronPort-AV: E=Sophos;i="6.00,230,1681196400"; d="scan'208";a="957254692"
Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.94])
 by fmsmga006.fm.intel.com with ESMTP; 09 Jun 2023 12:20:21 -0700
From: Zhirun Yan <zhirun.yan@intel.com>
To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com,
 ndabilpuram@marvell.com, stephen@networkplumber.org,
 pbhagavatula@marvell.com, jerinjacobk@gmail.com, david.marchand@redhat.com
Cc: cunming.liang@intel.com, haiyue.wang@intel.com,
 mattias.ronnblom@ericsson.com, Zhirun Yan <zhirun.yan@intel.com>
Subject: [PATCH v12 12/16] graph: introduce graph walk by cross-core dispatch
Date: Sat, 10 Jun 2023 03:12:41 +0800
Message-Id: <20230609191245.252521-13-zhirun.yan@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230609191245.252521-1-zhirun.yan@intel.com>
References: <20230608151844.1823783-1-zhirun.yan@intel.com>
 <20230609191245.252521-1-zhirun.yan@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

This patch introduces the task scheduler mechanism to enable dispatching
tasks to other worker cores. Currently, a graph only has a local work
queue to walk. We introduce a scheduler work queue on each worker core
for dispatched tasks. The walk processes the scheduler work queue
first, then handles the local work queue.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 lib/graph/rte_graph_model_mcore_dispatch.h | 44 ++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/lib/graph/rte_graph_model_mcore_dispatch.h b/lib/graph/rte_graph_model_mcore_dispatch.h
index 6163f96c37..c78a3bbdf9 100644
--- a/lib/graph/rte_graph_model_mcore_dispatch.h
+++ b/lib/graph/rte_graph_model_mcore_dispatch.h
@@ -83,6 +83,50 @@ __rte_experimental
 int rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name,
 							   unsigned int lcore_id);
 
+/**
+ * Perform graph walk on the circular buffer and invoke the process function
+ * of the nodes and collect the stats.
+ *
+ * @param graph
+ *   Graph pointer returned from rte_graph_lookup function.
+ *
+ * @see rte_graph_lookup()
+ */
+__rte_experimental
+static inline void
+rte_graph_walk_mcore_dispatch(struct rte_graph *graph)
+{
+	const rte_graph_off_t *cir_start = graph->cir_start;
+	const rte_node_t mask = graph->cir_mask;
+	uint32_t head = graph->head;
+	struct rte_node *node;
+
+	RTE_ASSERT(graph->parent_id != RTE_GRAPH_ID_INVALID);
+	if (graph->dispatch.wq != NULL)
+		__rte_graph_mcore_dispatch_sched_wq_process(graph);
+
+	while (likely(head != graph->tail)) {
+		node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]);
+
+	/* Skip src nodes not bound to the current worker */
+		if ((int32_t)head < 0 && node->dispatch.lcore_id != graph->dispatch.lcore_id)
+			continue;
+
+	/* Dispatch the node to its bound core until all tasks/objs are done */
+		if (node->dispatch.lcore_id != RTE_MAX_LCORE &&
+		    graph->dispatch.lcore_id != node->dispatch.lcore_id &&
+		    graph->dispatch.rq != NULL &&
+		    __rte_graph_mcore_dispatch_sched_node_enqueue(node, graph->dispatch.rq))
+			continue;
+
+		__rte_node_process(graph, node);
+
+		head = likely((int32_t)head > 0) ? head & mask : head;
+	}
+
+	graph->tail = 0;
+}
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.37.2