From: Zhirun Yan
To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com,
	ndabilpuram@marvell.com
Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan
Subject: [PATCH v2 13/15] graph: add stats for cross-core dispatching
Date: Fri, 24 Mar 2023 11:16:20 +0900
Message-Id: <20230324021622.1369006-14-zhirun.yan@intel.com>
In-Reply-To: <20230324021622.1369006-1-zhirun.yan@intel.com>
References: <20221117050926.136974-1-zhirun.yan@intel.com>
 <20230324021622.1369006-1-zhirun.yan@intel.com>

Add stats for the cross-core dispatching scheduler when stats collection
is enabled.
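
For reviewers, a minimal usage sketch (illustration only, not part of this
patch): the new counters are exposed through the existing cluster stats
API, so the built-in printer can dump them. The function name
dump_graph_stats() and the "worker_*" graph pattern below are made-up
examples, and the sketch assumes the library was built with stats
collection enabled.

#include <stdio.h>

#include <rte_graph.h>
#include <rte_memory.h>

/* Print the per-node stats table to stdout. With the dispatch model
 * active, the table carries the "sched objs"/"sched fail" columns
 * added by this patch. */
static void
dump_graph_stats(void)
{
	static const char *patterns[] = {"worker_*"};
	struct rte_graph_cluster_stats_param s_param = {
		.socket_id = SOCKET_ID_ANY,
		.fn = NULL,	/* NULL selects the built-in table printer */
		.f = stdout,
		.nb_graph_patterns = 1,
		.graph_patterns = patterns,
	};
	struct rte_graph_cluster_stats *stats;

	stats = rte_graph_cluster_stats_create(&s_param);
	if (stats == NULL)
		return;
	rte_graph_cluster_stats_get(stats, false);
	rte_graph_cluster_stats_destroy(stats);
}
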
Signed-off-by: Haiyue Wang
Signed-off-by: Cunming Liang
Signed-off-by: Zhirun Yan
---
 lib/graph/graph_debug.c              |  6 +++
 lib/graph/graph_stats.c              | 74 +++++++++++++++++++++++++---
 lib/graph/rte_graph.h                |  2 +
 lib/graph/rte_graph_model_dispatch.c |  3 ++
 lib/graph/rte_graph_worker_common.h  |  2 +
 5 files changed, 79 insertions(+), 8 deletions(-)

diff --git a/lib/graph/graph_debug.c b/lib/graph/graph_debug.c
index b84412f5dd..7dcf07b080 100644
--- a/lib/graph/graph_debug.c
+++ b/lib/graph/graph_debug.c
@@ -74,6 +74,12 @@ rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all)
 	fprintf(f, " size=%d\n", n->size);
 	fprintf(f, " idx=%d\n", n->idx);
 	fprintf(f, " total_objs=%" PRId64 "\n", n->total_objs);
+	if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		fprintf(f, " total_sched_objs=%" PRId64 "\n",
+			n->total_sched_objs);
+		fprintf(f, " total_sched_fail=%" PRId64 "\n",
+			n->total_sched_fail);
+	}
 	fprintf(f, " total_calls=%" PRId64 "\n", n->total_calls);
 	for (i = 0; i < n->nb_edges; i++)
 		fprintf(f, " edge[%d] <%s>\n", i,
diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c
index c0140ba922..aa22cc403c 100644
--- a/lib/graph/graph_stats.c
+++ b/lib/graph/graph_stats.c
@@ -40,13 +40,19 @@ struct rte_graph_cluster_stats {
 	struct cluster_node clusters[];
 } __rte_cache_aligned;
 
+#define boarder_model_dispatch()                                              \
+	fprintf(f, "+-------------------------------+---------------+--------" \
+		   "-------+---------------+---------------+---------------+" \
+		   "---------------+---------------+-"                        \
+		   "----------+\n")
+
 #define boarder()                                                              \
 	fprintf(f, "+-------------------------------+---------------+--------" \
 		   "-------+---------------+---------------+---------------+-" \
 		   "----------+\n")
 
 static inline void
-print_banner(FILE *f)
+print_banner_default(FILE *f)
 {
 	boarder();
 	fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s\n", "|Node", "|calls",
@@ -55,6 +61,27 @@ print_banner(FILE *f)
 	boarder();
 }
 
+static inline void
+print_banner_dispatch(FILE *f)
+{
+	boarder_model_dispatch();
+	fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s%-16s%-16s\n",
+		"|Node", "|calls",
+		"|objs", "|sched objs", "|sched fail",
+		"|realloc_count", "|objs/call", "|objs/sec(10E6)",
+		"|cycles/call|");
+	boarder_model_dispatch();
+}
+
+static inline void
+print_banner(FILE *f)
+{
+	if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+		print_banner_dispatch(f);
+	else
+		print_banner_default(f);
+}
+
 static inline void
 print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
 {
@@ -76,11 +103,21 @@ print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
 	objs_per_sec = ts_per_hz ? (objs - prev_objs) / ts_per_hz : 0;
 	objs_per_sec /= 1000000;
 
-	fprintf(f,
-		"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
-		"|%-15.3f|%-15.6f|%-11.4f|\n",
-		stat->name, calls, objs, stat->realloc_count, objs_per_call,
-		objs_per_sec, cycles_per_call);
+	if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		fprintf(f,
+			"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
+			"|%-15" PRIu64 "|%-15" PRIu64
+			"|%-15.3f|%-15.6f|%-11.4f|\n",
+			stat->name, calls, objs, stat->sched_objs,
+			stat->sched_fail, stat->realloc_count, objs_per_call,
+			objs_per_sec, cycles_per_call);
+	} else {
+		fprintf(f,
+			"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
+			"|%-15.3f|%-15.6f|%-11.4f|\n",
+			stat->name, calls, objs, stat->realloc_count, objs_per_call,
+			objs_per_sec, cycles_per_call);
+	}
 }
 
 static int
@@ -88,13 +125,20 @@ graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie,
 		       const struct rte_graph_cluster_node_stats *stat)
 {
 	FILE *f = cookie;
+	int model;
+
+	model = rte_graph_worker_model_get();
 
 	if (unlikely(is_first))
 		print_banner(f);
 	if (stat->objs)
 		print_node(f, stat);
-	if (unlikely(is_last))
-		boarder();
+	if (unlikely(is_last)) {
+		if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+			boarder_model_dispatch();
+		else
+			boarder();
+	}
 
 	return 0;
 };
@@ -332,13 +376,21 @@ static inline void
 cluster_node_arregate_stats(struct cluster_node *cluster)
 {
 	uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0;
+	uint64_t sched_objs = 0, sched_fail = 0;
 	struct rte_graph_cluster_node_stats *stat = &cluster->stat;
 	struct rte_node *node;
 	rte_node_t count;
+	int model;
 
+	model = rte_graph_worker_model_get();
 	for (count = 0; count < cluster->nb_nodes; count++) {
 		node = cluster->nodes[count];
 
+		if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+			sched_objs += node->total_sched_objs;
+			sched_fail += node->total_sched_fail;
+		}
+
 		calls += node->total_calls;
 		objs += node->total_objs;
 		cycles += node->total_cycles;
@@ -348,6 +400,12 @@ cluster_node_arregate_stats(struct cluster_node *cluster)
 	stat->calls = calls;
 	stat->objs = objs;
 	stat->cycles = cycles;
+
+	if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		stat->sched_objs = sched_objs;
+		stat->sched_fail = sched_fail;
+	}
+
 	stat->ts = rte_get_timer_cycles();
 	stat->realloc_count = realloc_count;
 }
diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h
index 2f86c17de7..7d77a790ac 100644
--- a/lib/graph/rte_graph.h
+++ b/lib/graph/rte_graph.h
@@ -208,6 +208,8 @@ struct rte_graph_cluster_node_stats {
 	uint64_t prev_calls;	/**< Previous number of calls. */
 	uint64_t prev_objs;	/**< Previous number of processed objs. */
 	uint64_t prev_cycles;	/**< Previous number of cycles. */
+	uint64_t sched_objs;	/**< Number of objs scheduled to other cores. */
+	uint64_t sched_fail;	/**< Number of objs that failed to be scheduled. */
 
 	uint64_t realloc_count; /**< Realloc count. */
 
diff --git a/lib/graph/rte_graph_model_dispatch.c b/lib/graph/rte_graph_model_dispatch.c
index b46dd156ac..4cf00160ea 100644
--- a/lib/graph/rte_graph_model_dispatch.c
+++ b/lib/graph/rte_graph_model_dispatch.c
@@ -83,6 +83,7 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
 		rte_pause();
 
 	off += size;
+	node->total_sched_objs += size;
 	node->idx -= size;
 	if (node->idx > 0)
 		goto submit_again;
@@ -94,6 +95,8 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
 	memmove(&node->objs[0], &node->objs[off],
 		node->idx * sizeof(void *));
 
+	node->total_sched_fail += node->idx;
+
 	return false;
 }
 
diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h
index 70cfde7015..be8508cd83 100644
--- a/lib/graph/rte_graph_worker_common.h
+++ b/lib/graph/rte_graph_worker_common.h
@@ -94,6 +94,8 @@ struct rte_node {
 		/* Fast schedule area for mcore dispatch model */
 		unsigned int lcore_id;	/**< Node running lcore. */
 	};
+	uint64_t total_sched_objs;	/**< Number of objects scheduled. */
+	uint64_t total_sched_fail;	/**< Number of scheduled failures. */
 	/* Fast path area */
 #define RTE_NODE_CTX_SZ 16
 	uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned; /**< Node Context. */
-- 
2.37.2