From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, alialnu@mellanox.com,
	orgerlitz@mellanox.com, wenzhuo.lu@intel.com, beilei.xing@intel.com,
	bernard.iremonger@intel.com, ferruh.yigit@intel.com
Cc: hemant.agrawal@nxp.com, jerinj@marvell.com, viacheslavo@mellanox.com,
	thomas@monjalon.net, ruifeng.wang@arm.com, phil.yang@arm.com, nd@arm.com,
	zhihong.wang@intel.com, stable@dpdk.org
Date: Mon, 6 Jul 2020 18:32:29 -0500
Message-Id: <20200706233231.25881-1-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200617144307.9961-1-honnappa.nagarahalli@arm.com>
References: <20200617144307.9961-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 1/3] app/testpmd: clock gettime call in throughput calculation

The throughput calculation requires a counter that measures the passage
of time. However, the kernel saves and restores the PMU state when a
thread is unscheduled and scheduled, so that PMU cycles are not counted
towards a thread that is not running. Hence, when
RTE_ARM_EAL_RDTSC_USE_PMU is enabled, the PMU cycle counter does not
represent the passage of time, which results in incorrect throughput
numbers. Use the clock_gettime() system call instead to calculate the
time elapsed since the last call.
Bugzilla ID: 450
Fixes: 0e106980301d ("app/testpmd: show throughput in port stats")
Cc: zhihong.wang@intel.com
Cc: stable@dpdk.org

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Phil Yang
Reviewed-by: Ruifeng Wang
Reviewed-by: Ferruh Yigit
Tested-by: Phil Yang
Tested-by: Ali Alnubani
---
v3:
 1) Dropped the 'noisy vnf' patch; it will be added with Dharmik's
    upcoming patch
 2) Dropped the flow gen patch; a separate one will be sent if there is
    consensus on the mailing list
 3) Changed the logic used to display burst stats to make it easier to
    understand (Ferruh)
v2: Updated commit log in 1/5 (Jerin)

 app/test-pmd/config.c | 44 +++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 016bcb09c..91fbf99f8 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -54,6 +54,14 @@
 
 #define ETHDEV_FWVERS_LEN 32
 
+#ifdef CLOCK_MONOTONIC_RAW /* Defined in glibc bits/time.h */
+#define CLOCK_TYPE_ID CLOCK_MONOTONIC_RAW
+#else
+#define CLOCK_TYPE_ID CLOCK_MONOTONIC
+#endif
+
+#define NS_PER_SEC 1E9
+
 static char *flowtype_to_str(uint16_t flow_type);
 
 static const struct {
@@ -136,9 +144,10 @@ nic_stats_display(portid_t port_id)
 	static uint64_t prev_pkts_tx[RTE_MAX_ETHPORTS];
 	static uint64_t prev_bytes_rx[RTE_MAX_ETHPORTS];
 	static uint64_t prev_bytes_tx[RTE_MAX_ETHPORTS];
-	static uint64_t prev_cycles[RTE_MAX_ETHPORTS];
+	static uint64_t prev_ns[RTE_MAX_ETHPORTS];
+	struct timespec cur_time;
 	uint64_t diff_pkts_rx, diff_pkts_tx, diff_bytes_rx, diff_bytes_tx,
-								diff_cycles;
+								diff_ns;
 	uint64_t mpps_rx, mpps_tx, mbps_rx, mbps_tx;
 	struct rte_eth_stats stats;
 	struct rte_port *port = &ports[port_id];
@@ -195,10 +204,17 @@ nic_stats_display(portid_t port_id)
 		}
 	}
 
-	diff_cycles = prev_cycles[port_id];
-	prev_cycles[port_id] = rte_rdtsc();
-	if (diff_cycles > 0)
-		diff_cycles = prev_cycles[port_id] - diff_cycles;
+	diff_ns = 0;
+	if (clock_gettime(CLOCK_TYPE_ID, &cur_time) == 0) {
+		uint64_t ns;
+
+		ns = cur_time.tv_sec * NS_PER_SEC;
+		ns += cur_time.tv_nsec;
+
+		if (prev_ns[port_id] != 0)
+			diff_ns = ns - prev_ns[port_id];
+		prev_ns[port_id] = ns;
+	}
 
 	diff_pkts_rx = (stats.ipackets > prev_pkts_rx[port_id]) ?
 		(stats.ipackets - prev_pkts_rx[port_id]) : 0;
@@ -206,10 +222,10 @@ nic_stats_display(portid_t port_id)
 		(stats.opackets - prev_pkts_tx[port_id]) : 0;
 	prev_pkts_rx[port_id] = stats.ipackets;
 	prev_pkts_tx[port_id] = stats.opackets;
-	mpps_rx = diff_cycles > 0 ?
-		diff_pkts_rx * rte_get_tsc_hz() / diff_cycles : 0;
-	mpps_tx = diff_cycles > 0 ?
-		diff_pkts_tx * rte_get_tsc_hz() / diff_cycles : 0;
+	mpps_rx = diff_ns > 0 ?
+		(double)diff_pkts_rx / diff_ns * NS_PER_SEC : 0;
+	mpps_tx = diff_ns > 0 ?
+		(double)diff_pkts_tx / diff_ns * NS_PER_SEC : 0;
 
 	diff_bytes_rx = (stats.ibytes > prev_bytes_rx[port_id]) ?
 		(stats.ibytes - prev_bytes_rx[port_id]) : 0;
@@ -217,10 +233,10 @@ nic_stats_display(portid_t port_id)
 		(stats.obytes - prev_bytes_tx[port_id]) : 0;
 	prev_bytes_rx[port_id] = stats.ibytes;
 	prev_bytes_tx[port_id] = stats.obytes;
-	mbps_rx = diff_cycles > 0 ?
-		diff_bytes_rx * rte_get_tsc_hz() / diff_cycles : 0;
-	mbps_tx = diff_cycles > 0 ?
-		diff_bytes_tx * rte_get_tsc_hz() / diff_cycles : 0;
+	mbps_rx = diff_ns > 0 ?
+		(double)diff_bytes_rx / diff_ns * NS_PER_SEC : 0;
+	mbps_tx = diff_ns > 0 ?
+		(double)diff_bytes_tx / diff_ns * NS_PER_SEC : 0;
 
 	printf("\n  Throughput (since last show)\n");
 	printf("  Rx-pps: %12"PRIu64"  Rx-bps: %12"PRIu64"\n  Tx-pps: %12"
-- 
2.17.1
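
As a side note for anyone who wants to try the timing scheme outside of
testpmd, below is a minimal standalone sketch of the same approach:
convert clock_gettime() timestamps to nanoseconds, take the delta since
the previous sample, and scale a packet-count delta into a
packets-per-second figure. The CLOCK_TYPE_ID, NS_PER_SEC and prev_ns
names mirror the patch; main(), the sampling loop and the simulated
packet counter are purely illustrative and are not part of testpmd. On
glibc older than 2.17 you may need to link with -lrt for
clock_gettime().

/* Standalone sketch of the clock_gettime()-based rate calculation.
 * Build (assumed): gcc -O2 -o pps_sketch pps_sketch.c
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#ifdef CLOCK_MONOTONIC_RAW /* prefer the raw clock when glibc exposes it */
#define CLOCK_TYPE_ID CLOCK_MONOTONIC_RAW
#else
#define CLOCK_TYPE_ID CLOCK_MONOTONIC
#endif

#define NS_PER_SEC 1E9

int main(void)
{
	uint64_t prev_ns = 0;    /* timestamp taken at the previous sample */
	uint64_t pkts = 0, prev_pkts = 0;

	for (int i = 0; i < 3; i++) {
		struct timespec cur_time;
		uint64_t diff_ns = 0;

		pkts += 1000000;  /* stand-in for a real packet counter */

		/* Same pattern as the patch: convert the timespec to
		 * nanoseconds and only compute a delta once a previous
		 * sample exists. */
		if (clock_gettime(CLOCK_TYPE_ID, &cur_time) == 0) {
			uint64_t ns = cur_time.tv_sec * NS_PER_SEC +
				      cur_time.tv_nsec;

			if (prev_ns != 0)
				diff_ns = ns - prev_ns;
			prev_ns = ns;
		}

		/* Rate = packet delta / elapsed ns, scaled to per-second. */
		uint64_t pps = diff_ns > 0 ?
			(double)(pkts - prev_pkts) / diff_ns * NS_PER_SEC : 0;
		prev_pkts = pkts;

		printf("sample %d: %12" PRIu64 " pps\n", i, pps);
		sleep(1);
	}
	return 0;
}

As in testpmd, the first sample reports 0 because there is no previous
timestamp yet; later samples should print roughly 1,000,000 pps given
the one-second sleep between iterations.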