From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhirun Yan
To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com
Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan
Subject: [PATCH v1 10/13] graph: introduce graph walk by cross-core dispatch
Date: Thu, 17 Nov 2022 13:09:23 +0800
Message-Id: <20221117050926.136974-11-zhirun.yan@intel.com>
In-Reply-To: <20221117050926.136974-1-zhirun.yan@intel.com>
References: <20221117050926.136974-1-zhirun.yan@intel.com>

This patch introduces the task scheduler mechanism that enables
dispatching tasks to other worker cores. Currently, there is only a
local work queue for one graph to walk. We introduce a scheduler work
queue in each worker core for dispatching tasks. The walk processes
the scheduler work queue first, then handles the local work queue.

Signed-off-by: Haiyue Wang
Signed-off-by: Cunming Liang
Signed-off-by: Zhirun Yan
---
 lib/graph/rte_graph_model_generic.h | 36 +++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/lib/graph/rte_graph_model_generic.h b/lib/graph/rte_graph_model_generic.h
index 5715fc8ffb..c29fc31309 100644
--- a/lib/graph/rte_graph_model_generic.h
+++ b/lib/graph/rte_graph_model_generic.h
@@ -71,6 +71,42 @@ void __rte_noinline __rte_graph_sched_wq_process(struct rte_graph *graph);
 __rte_experimental
 int rte_node_model_generic_set_lcore_affinity(const char *name, unsigned int lcore_id);
 
+/**
+ * Perform graph walk on the circular buffer, invoke the process function
+ * of the nodes, and collect the stats.
+ *
+ * @param graph
+ *   Graph pointer returned from rte_graph_lookup().
+ *
+ * @see rte_graph_lookup()
+ */
+__rte_experimental
+static inline void
+rte_graph_walk_generic(struct rte_graph *graph)
+{
+	uint32_t head = graph->head;
+	struct rte_node *node;
+
+	if (graph->wq != NULL)
+		__rte_graph_sched_wq_process(graph);
+
+	rte_graph_walk_node(graph, head, node) {
+		/* Skip the source nodes which are not bound to the current worker */
+		if ((int32_t)head < 0 && node->lcore_id != graph->lcore_id)
+			continue;
+
+		/* Schedule the node until all tasks/objs are done */
+		if (node->lcore_id != RTE_MAX_LCORE &&
+		    graph->lcore_id != node->lcore_id && graph->rq != NULL &&
+		    __rte_graph_sched_node_enqueue(node, graph->rq))
+			continue;
+
+		__rte_node_process(graph, node);
+	}
+
+	graph->tail = 0;
+}
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.25.1