From: Ophir Munk <ophirmu@nvidia.com>
To: Ori Kam, dev@dpdk.org, Raslan Darawsheh
Cc: Ophir Munk, Thomas Monjalon
Date: Wed, 16 Dec 2020 16:49:30 +0000
Message-Id: <20201216164931.1517-6-ophirmu@nvidia.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20201216164931.1517-1-ophirmu@nvidia.com>
References: <20201216164931.1517-1-ophirmu@nvidia.com>
Subject: [dpdk-dev] [PATCH v1 5/6] app/regex: support performance measurements per QP

Up to this commit, the parsing elapsed time and the gigabits-per-second
performance were measured over the aggregation of all QPs (per core).
This commit separates the time measurements so they are taken and
reported per individual QP.

Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
---
 app/test-regex/main.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/app/test-regex/main.c b/app/test-regex/main.c
index 720eb1c..f305186 100644
--- a/app/test-regex/main.c
+++ b/app/test-regex/main.c
@@ -48,6 +48,8 @@ struct qp_params {
 	struct rte_regex_ops **ops;
 	struct job_ctx *jobs_ctx;
 	char *buf;
+	time_t start;
+	time_t end;
 };
 
 struct qps_per_lcore {
@@ -324,8 +326,6 @@ run_regex(void *args)
 	unsigned long d_ind = 0;
 	struct rte_mbuf_ext_shared_info shinfo;
 	int res = 0;
-	time_t start;
-	time_t end;
 	double time;
 	struct rte_mempool *mbuf_mp;
 	struct qp_params *qp;
@@ -418,9 +418,10 @@ run_regex(void *args)
 
 		qp->buf = buf;
 		qp->total_matches = 0;
+		qp->start = 0;
+		qp->end = 0;
 	}
 
-	start = clock();
 	for (i = 0; i < nb_iterations; i++) {
 		for (qp_id = 0; qp_id < nb_qps; qp_id++) {
 			qp = &qps[qp_id];
@@ -431,6 +432,8 @@ run_regex(void *args)
 			update = false;
 			for (qp_id = 0; qp_id < nb_qps; qp_id++) {
 				qp = &qps[qp_id];
+				if (!qp->start)
+					qp->start = clock();
 				if (qp->total_dequeue < actual_jobs) {
 					struct rte_regex_ops **
 						cur_ops_to_enqueue = qp->ops +
@@ -461,22 +464,30 @@ run_regex(void *args)
 						qp->total_enqueue -
 						qp->total_dequeue);
 					update = true;
+				} else {
+					if (!qp->end)
+						qp->end = clock();
 				}
+
 			}
 		} while (update);
 	}
-	end = clock();
-	time = ((double)end - start) / CLOCKS_PER_SEC;
-	printf("Job len = %ld Bytes\n", job_len);
-	printf("Time = %lf sec\n", time);
-	printf("Perf = %lf Gbps\n",
-	       (((double)actual_jobs * job_len * nb_iterations * 8) / time) /
-	       1000000000.0);
+	for (qp_id = 0; qp_id < nb_qps; qp_id++) {
+		time = ((double)qp->end - qp->start) / CLOCKS_PER_SEC;
+		printf("Core=%u QP=%u\n", rte_lcore_id(), qp_id + qp_id_base);
+		printf("Job len = %ld Bytes\n", job_len);
+		printf("Time = %lf sec\n", time);
+		printf("Perf = %lf Gbps\n\n",
+		       (((double)actual_jobs * job_len *
+			nb_iterations * 8) / time) /
+		       1000000000.0);
+	}
 
 	if (rgxc->perf_mode)
 		goto end;
 	for (qp_id = 0; qp_id < nb_qps; qp_id++) {
-		printf("\n############ QP id=%u ############\n", qp_id);
+		printf("\n############ Core=%u QP=%u ############\n",
+		       rte_lcore_id(), qp_id + qp_id_base);
 		qp = &qps[qp_id];
 		/* Log results per job. */
 		for (d_ind = 0; d_ind < qp->total_dequeue; d_ind++) {
-- 
2.8.4
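
For readers skimming the diff, here is a minimal standalone sketch of the
per-QP measurement idea: the patch stamps qp->start on the first poll of a
QP and qp->end once that QP has dequeued all of its jobs, so each QP's Gbps
figure is computed over its own window. This sketch is not part of the
patch; the struct and helper below are illustrative only and simply mirror
the fields and formula added in the diff.

#include <stdio.h>
#include <time.h>

/* Illustrative stand-in for the timestamp fields added to struct qp_params. */
struct qp_perf {
	time_t start;  /* clock() taken the first time this QP is polled */
	time_t end;    /* clock() taken once this QP has dequeued all jobs */
};

/* Hypothetical helper mirroring the per-QP printf in the patch:
 * bits handled by this QP alone, divided by this QP's own elapsed time. */
static double
qp_perf_gbps(const struct qp_perf *qp, unsigned long actual_jobs,
	     long job_len, unsigned long nb_iterations)
{
	double time = ((double)qp->end - qp->start) / CLOCKS_PER_SEC;

	return (((double)actual_jobs * job_len * nb_iterations * 8) / time) /
	       1000000000.0;
}

int
main(void)
{
	/* Example values only: one "QP" that ran for exactly one second. */
	struct qp_perf qp = { .start = 0, .end = CLOCKS_PER_SEC };

	printf("Perf = %lf Gbps\n", qp_perf_gbps(&qp, 1000, 1024, 1));
	return 0;
}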