From: Jerin Jacob
Date: Thu, 18 Jun 2020 15:46:13 +0530
To: Honnappa Nagarahalli
Cc: dpdk-dev, Ali Alnubani, orgerlitz@mellanox.com, Wenzhuo Lu, Beilei Xing,
 Bernard Iremonger, hemant.agrawal@nxp.com, jerinj@marvell.com,
 Slava Ovsiienko, thomas@monjalon.net, Ruifeng Wang, Phil Yang, nd,
 Zhihong Wang, dpdk stable
Subject: Re: [dpdk-dev] [PATCH 1/5] app/testpmd: clock gettime call in throughput calculation

On Thu, Jun 18, 2020 at 9:33 AM Honnappa Nagarahalli wrote:
>
> Thanks Jerin for the feedback
>
> > -----Original Message-----
> > From: Jerin Jacob
> > Sent: Wednesday, June 17, 2020 10:16 AM
> > To: Honnappa Nagarahalli
> > Cc: dpdk-dev; Ali Alnubani; orgerlitz@mellanox.com; Wenzhuo Lu;
> > Beilei Xing; Bernard Iremonger; hemant.agrawal@nxp.com;
> > jerinj@marvell.com; Slava Ovsiienko; thomas@monjalon.net;
> > Ruifeng Wang; Phil Yang; nd; Zhihong Wang; dpdk stable
> > Subject: Re: [dpdk-dev] [PATCH 1/5] app/testpmd: clock gettime call in
> > throughput calculation
> >
> > On Wed, Jun 17, 2020 at 8:13 PM Honnappa Nagarahalli wrote:
> > >
> > > The throughput calculation requires a counter that measures the passing
> > > of time. The PMU cycle counter does not do that. This
> >
> > It is not clear from the git commit why the PMU cycle counter does not do
> > that. On DPDK bootup, we figure out the Hz value based on PMU counter
> > cycles. What is the missing piece here?
> As I understand it, the Linux kernel saves the PMU state and restores it
> every time a thread is scheduled out and in. So, when the thread is
> scheduled out, the PMU cycles are not counted towards that thread. The
> thread that prints the statistics issues a good number of system calls, and
> I am guessing it is getting scheduled out. So, it reports a very low cycle
> count.

OK. Probably add this info to the git commit.

> >
> > IMO, the PMU counter should have lower latency and finer granularity than
> > clock_gettime.
> In general, agreed. In this particular calculation the granularity has not
> mattered much (for example, the numbers are fine with a 50 MHz generic
> counter and a 2.5 GHz CPU). The latency also does not matter, as it is
> amortized over a large number of packets. So, I do not see it affecting the
> reported PPS/BPS numbers.

Reasonable to use clock_gettime for the control thread.

> >
> > > results in incorrect throughput numbers when RTE_ARM_EAL_RDTSC_USE_PMU
> > > is enabled. Use the clock_gettime system call to calculate the time
> > > passed since the last call.
> > >
> > > Bugzilla ID: 450
> > > Fixes: 0e106980301d ("app/testpmd: show throughput in port stats")
> > > Cc: zhihong.wang@intel.com
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Honnappa Nagarahalli
> > > Reviewed-by: Phil Yang
> > > Reviewed-by: Ruifeng Wang
> > > ---
> > >  app/test-pmd/config.c | 44 +++++++++++++++++++++++++++++--------------
> > >  1 file changed, 30 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > > index 016bcb09c..91fbf99f8 100644
> > > --- a/app/test-pmd/config.c
> > > +++ b/app/test-pmd/config.c
> > > @@ -54,6 +54,14 @@
> > >
> > >  #define ETHDEV_FWVERS_LEN 32
> > >
> > > +#ifdef CLOCK_MONOTONIC_RAW /* Defined in glibc bits/time.h */
> > > +#define CLOCK_TYPE_ID CLOCK_MONOTONIC_RAW
> > > +#else
> > > +#define CLOCK_TYPE_ID CLOCK_MONOTONIC
> > > +#endif
> > > +
> > > +#define NS_PER_SEC 1E9
> > > +
> > >  static char *flowtype_to_str(uint16_t flow_type);
> > >
> > >  static const struct {
> > > @@ -136,9 +144,10 @@ nic_stats_display(portid_t port_id)
> > >         static uint64_t prev_pkts_tx[RTE_MAX_ETHPORTS];
> > >         static uint64_t prev_bytes_rx[RTE_MAX_ETHPORTS];
> > >         static uint64_t prev_bytes_tx[RTE_MAX_ETHPORTS];
> > > -       static uint64_t prev_cycles[RTE_MAX_ETHPORTS];
> > > +       static uint64_t prev_ns[RTE_MAX_ETHPORTS];
> > > +       struct timespec cur_time;
> > >         uint64_t diff_pkts_rx, diff_pkts_tx, diff_bytes_rx, diff_bytes_tx,
> > > -                                                               diff_cycles;
> > > +                                                               diff_ns;
> > >         uint64_t mpps_rx, mpps_tx, mbps_rx, mbps_tx;
> > >         struct rte_eth_stats stats;
> > >         struct rte_port *port = &ports[port_id];
> > > @@ -195,10 +204,17 @@ nic_stats_display(portid_t port_id)
> > >                 }
> > >         }
> > >
> > > -       diff_cycles = prev_cycles[port_id];
> > > -       prev_cycles[port_id] = rte_rdtsc();
> > > -       if (diff_cycles > 0)
> > > -               diff_cycles = prev_cycles[port_id] - diff_cycles;
> > > +       diff_ns = 0;
> > > +       if (clock_gettime(CLOCK_TYPE_ID, &cur_time) == 0) {
> > > +               uint64_t ns;
> > > +
> > > +               ns = cur_time.tv_sec * NS_PER_SEC;
> > > +               ns += cur_time.tv_nsec;
> > > +
> > > +               if (prev_ns[port_id] != 0)
> > > +                       diff_ns = ns - prev_ns[port_id];
> > > +               prev_ns[port_id] = ns;
> > > +       }
> > >
> > >         diff_pkts_rx = (stats.ipackets > prev_pkts_rx[port_id]) ?
> > >                 (stats.ipackets - prev_pkts_rx[port_id]) : 0;
> > > @@ -206,10 +222,10 @@ nic_stats_display(portid_t port_id)
> > >                 (stats.opackets - prev_pkts_tx[port_id]) : 0;
> > >         prev_pkts_rx[port_id] = stats.ipackets;
> > >         prev_pkts_tx[port_id] = stats.opackets;
> > > -       mpps_rx = diff_cycles > 0 ?
> > > -               diff_pkts_rx * rte_get_tsc_hz() / diff_cycles : 0;
> > > -       mpps_tx = diff_cycles > 0 ?
> > > -               diff_pkts_tx * rte_get_tsc_hz() / diff_cycles : 0;
> > > +       mpps_rx = diff_ns > 0 ?
> > > +               (double)diff_pkts_rx / diff_ns * NS_PER_SEC : 0;
> > > +       mpps_tx = diff_ns > 0 ?
> > > +               (double)diff_pkts_tx / diff_ns * NS_PER_SEC : 0;
> > >
> > >         diff_bytes_rx = (stats.ibytes > prev_bytes_rx[port_id]) ?
> > >                 (stats.ibytes - prev_bytes_rx[port_id]) : 0;
> > > @@ -217,10 +233,10 @@ nic_stats_display(portid_t port_id)
> > >                 (stats.obytes - prev_bytes_tx[port_id]) : 0;
> > >         prev_bytes_rx[port_id] = stats.ibytes;
> > >         prev_bytes_tx[port_id] = stats.obytes;
> > > -       mbps_rx = diff_cycles > 0 ?
> > > -               diff_bytes_rx * rte_get_tsc_hz() / diff_cycles : 0;
> > > -       mbps_tx = diff_cycles > 0 ?
> > > -               diff_bytes_tx * rte_get_tsc_hz() / diff_cycles : 0;
> > > +       mbps_rx = diff_ns > 0 ?
> > > +               (double)diff_bytes_rx / diff_ns * NS_PER_SEC : 0;
> > > +       mbps_tx = diff_ns > 0 ?
> > > +               (double)diff_bytes_tx / diff_ns * NS_PER_SEC : 0;
> > >
> > >         printf("\n  Throughput (since last show)\n");
> > >         printf("  Rx-pps: %12"PRIu64"          Rx-bps: %12"PRIu64"\n  Tx-pps: %12"
> > > --
> > > 2.17.1
> > >
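
For reference, a minimal standalone sketch of the nanosecond-based rate
computation the patch introduces in nic_stats_display(). The helper names
(monotonic_ns, pkts_per_sec) and the prev_ns handling below are illustrative
only, not part of the patch:

#include <stdint.h>
#include <time.h>

#ifdef CLOCK_MONOTONIC_RAW /* Defined in glibc bits/time.h */
#define CLOCK_TYPE_ID CLOCK_MONOTONIC_RAW
#else
#define CLOCK_TYPE_ID CLOCK_MONOTONIC
#endif

#define NS_PER_SEC 1E9

/* Monotonic wall-clock time in nanoseconds; returns 0 on failure. */
static uint64_t
monotonic_ns(void)
{
	struct timespec ts;

	if (clock_gettime(CLOCK_TYPE_ID, &ts) != 0)
		return 0;
	return (uint64_t)ts.tv_sec * (uint64_t)NS_PER_SEC + (uint64_t)ts.tv_nsec;
}

/*
 * Packets per second since the previous call, using the same formula as
 * the patch: packets / elapsed nanoseconds * 1e9.  *prev_ns carries the
 * timestamp of the previous call (0 before the first call).
 */
static uint64_t
pkts_per_sec(uint64_t diff_pkts, uint64_t *prev_ns)
{
	uint64_t ns = monotonic_ns();
	uint64_t diff_ns = 0;

	if (ns != 0 && *prev_ns != 0)
		diff_ns = ns - *prev_ns;
	if (ns != 0)
		*prev_ns = ns;

	return diff_ns > 0 ?
		(uint64_t)((double)diff_pkts / diff_ns * NS_PER_SEC) : 0;
}

Because the interval comes from a monotonic clock rather than the PMU cycle
counter, time the stats thread spends scheduled out still counts toward the
interval, so the reported rate stays meaningful between calls.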