From mboxrd@z Thu Jan  1 00:00:00 1970
From: Zhirun Yan <zhirun.yan@intel.com>
To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com,
	ndabilpuram@marvell.com
Cc: cunming.liang@intel.com, haiyue.wang@intel.com,
	Zhirun Yan <zhirun.yan@intel.com>
Subject: [PATCH v4 13/15] graph: add stats for cross-core dispatching
Date: Thu, 30 Mar 2023 15:18:32 +0900
Message-Id: <20230330061834.3118201-14-zhirun.yan@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com>
References: <20230329064340.2550530-1-zhirun.yan@intel.com>
	<20230330061834.3118201-1-zhirun.yan@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add stats for the cross-core dispatching scheduler when stats
collection is enabled.
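
The new counters are reported through the existing cluster stats API
declared in rte_graph.h, so no new application-facing call is needed.
Below is a minimal sketch (not part of this patch) of how an application
could print them; the "worker_*" graph-name pattern and the helper names
are purely illustrative, and it assumes stats collection is enabled in
the build:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#include <rte_graph.h>
#include <rte_memory.h>

/* Illustrative helper: aggregate stats for all graphs named "worker_*". */
static struct rte_graph_cluster_stats *
worker_stats_create(void)
{
	static const char *pattern = "worker_*";
	struct rte_graph_cluster_stats_param param;

	memset(&param, 0, sizeof(param));
	param.socket_id = SOCKET_ID_ANY;
	param.fn = NULL;	/* NULL: use the default printer writing to param.f */
	param.f = stdout;
	param.graph_patterns = &pattern;
	param.nb_graph_patterns = 1;

	return rte_graph_cluster_stats_create(&param);
}

/* Illustrative helper: re-aggregate the counters and print one row per node. */
static void
worker_stats_dump(struct rte_graph_cluster_stats *stats)
{
	rte_graph_cluster_stats_get(stats, false);
}

When the dispatch model is active, the printed banner carries the extra
"sched objs" and "sched fail" columns; with other models the original
banner is kept unchanged.
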
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
---
 lib/graph/graph_debug.c              |  6 +++
 lib/graph/graph_stats.c              | 74 +++++++++++++++++++++++++---
 lib/graph/rte_graph.h                |  2 +
 lib/graph/rte_graph_model_dispatch.c |  3 ++
 lib/graph/rte_graph_worker_common.h  |  2 +
 5 files changed, 79 insertions(+), 8 deletions(-)

diff --git a/lib/graph/graph_debug.c b/lib/graph/graph_debug.c
index b84412f5dd..7dcf07b080 100644
--- a/lib/graph/graph_debug.c
+++ b/lib/graph/graph_debug.c
@@ -74,6 +74,12 @@ rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all)
 	fprintf(f, " size=%d\n", n->size);
 	fprintf(f, " idx=%d\n", n->idx);
 	fprintf(f, " total_objs=%" PRId64 "\n", n->total_objs);
+	if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		fprintf(f, " total_sched_objs=%" PRId64 "\n",
+			n->total_sched_objs);
+		fprintf(f, " total_sched_fail=%" PRId64 "\n",
+			n->total_sched_fail);
+	}
 	fprintf(f, " total_calls=%" PRId64 "\n", n->total_calls);
 	for (i = 0; i < n->nb_edges; i++)
 		fprintf(f, " edge[%d] <%s>\n", i,
diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c
index c0140ba922..aa22cc403c 100644
--- a/lib/graph/graph_stats.c
+++ b/lib/graph/graph_stats.c
@@ -40,13 +40,19 @@ struct rte_graph_cluster_stats {
 	struct cluster_node clusters[];
 } __rte_cache_aligned;
 
+#define boarder_model_dispatch()                                               \
+	fprintf(f, "+-------------------------------+---------------+--------" \
+		   "-------+---------------+---------------+---------------+" \
+		   "---------------+---------------+-" \
+		   "----------+\n")
+
 #define boarder()                                                              \
 	fprintf(f, "+-------------------------------+---------------+--------" \
 		   "-------+---------------+---------------+---------------+-" \
 		   "----------+\n")
 
 static inline void
-print_banner(FILE *f)
+print_banner_default(FILE *f)
 {
 	boarder();
 	fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s\n", "|Node", "|calls",
@@ -55,6 +61,27 @@ print_banner(FILE *f)
 	boarder();
 }
 
+static inline void
+print_banner_dispatch(FILE *f)
+{
+	boarder_model_dispatch();
+	fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s%-16s%-16s\n",
+		"|Node", "|calls",
+		"|objs", "|sched objs", "|sched fail",
+		"|realloc_count", "|objs/call", "|objs/sec(10E6)",
+		"|cycles/call|");
+	boarder_model_dispatch();
+}
+
+static inline void
+print_banner(FILE *f)
+{
+	if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+		print_banner_dispatch(f);
+	else
+		print_banner_default(f);
+}
+
 static inline void
 print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
 {
@@ -76,11 +103,21 @@ print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat)
 	objs_per_sec = ts_per_hz ?
 			(objs - prev_objs) / ts_per_hz : 0;
 	objs_per_sec /= 1000000;
 
-	fprintf(f,
-		"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
-		"|%-15.3f|%-15.6f|%-11.4f|\n",
-		stat->name, calls, objs, stat->realloc_count, objs_per_call,
-		objs_per_sec, cycles_per_call);
+	if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		fprintf(f,
+			"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
+			"|%-15" PRIu64 "|%-15" PRIu64
+			"|%-15.3f|%-15.6f|%-11.4f|\n",
+			stat->name, calls, objs, stat->sched_objs,
+			stat->sched_fail, stat->realloc_count, objs_per_call,
+			objs_per_sec, cycles_per_call);
+	} else {
+		fprintf(f,
+			"|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64
+			"|%-15.3f|%-15.6f|%-11.4f|\n",
+			stat->name, calls, objs, stat->realloc_count, objs_per_call,
+			objs_per_sec, cycles_per_call);
+	}
 }
 
 static int
@@ -88,13 +125,20 @@ graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie,
 		       const struct rte_graph_cluster_node_stats *stat)
 {
 	FILE *f = cookie;
+	int model;
+
+	model = rte_graph_worker_model_get();
 
 	if (unlikely(is_first))
 		print_banner(f);
 	if (stat->objs)
 		print_node(f, stat);
-	if (unlikely(is_last))
-		boarder();
+	if (unlikely(is_last)) {
+		if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH)
+			boarder_model_dispatch();
+		else
+			boarder();
+	}
 
 	return 0;
 };
@@ -332,13 +376,21 @@ static inline void
 cluster_node_arregate_stats(struct cluster_node *cluster)
 {
 	uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0;
+	uint64_t sched_objs = 0, sched_fail = 0;
 	struct rte_graph_cluster_node_stats *stat = &cluster->stat;
 	struct rte_node *node;
 	rte_node_t count;
+	int model;
 
+	model = rte_graph_worker_model_get();
 	for (count = 0; count < cluster->nb_nodes; count++) {
 		node = cluster->nodes[count];
 
+		if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+			sched_objs += node->total_sched_objs;
+			sched_fail += node->total_sched_fail;
+		}
+
 		calls += node->total_calls;
 		objs += node->total_objs;
 		cycles += node->total_cycles;
@@ -348,6 +400,12 @@ cluster_node_arregate_stats(struct cluster_node *cluster)
 	stat->calls = calls;
 	stat->objs = objs;
 	stat->cycles = cycles;
+
+	if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) {
+		stat->sched_objs = sched_objs;
+		stat->sched_fail = sched_fail;
+	}
+
 	stat->ts = rte_get_timer_cycles();
 	stat->realloc_count = realloc_count;
 }
diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h
index 2f86c17de7..7d77a790ac 100644
--- a/lib/graph/rte_graph.h
+++ b/lib/graph/rte_graph.h
@@ -208,6 +208,8 @@ struct rte_graph_cluster_node_stats {
 	uint64_t prev_calls;	/**< Previous number of calls. */
 	uint64_t prev_objs;	/**< Previous number of processed objs. */
 	uint64_t prev_cycles;	/**< Previous number of cycles. */
+	uint64_t sched_objs;	/**< Current number of scheduled objs. */
+	uint64_t sched_fail;	/**< Current number of objs that failed to be scheduled. */
 
 	uint64_t realloc_count; /**< Realloc count. */
 
diff --git a/lib/graph/rte_graph_model_dispatch.c b/lib/graph/rte_graph_model_dispatch.c
index a300fefb85..9db60eb463 100644
--- a/lib/graph/rte_graph_model_dispatch.c
+++ b/lib/graph/rte_graph_model_dispatch.c
@@ -83,6 +83,7 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
 		rte_pause();
 
 	off += size;
+	node->total_sched_objs += size;
 	node->idx -= size;
 	if (node->idx > 0)
 		goto submit_again;
@@ -94,6 +95,8 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
 	memmove(&node->objs[0], &node->objs[off],
 		node->idx * sizeof(void *));
 
+	node->total_sched_fail += node->idx;
+
 	return false;
 }
 
diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h
index dc0a0b5554..d94983589c 100644
--- a/lib/graph/rte_graph_worker_common.h
+++ b/lib/graph/rte_graph_worker_common.h
@@ -95,6 +95,8 @@ struct rte_node {
 		/* Fast schedule area for mcore dispatch model */
 		unsigned int lcore_id;  /**< Node running lcore. */
 		};
+	uint64_t total_sched_objs; /**< Number of objects scheduled. */
+	uint64_t total_sched_fail; /**< Number of objects that failed to be scheduled. */
 	/* Fast path area  */
 #define RTE_NODE_CTX_SZ 16
 	uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned; /**< Node Context. */
-- 
2.37.2