From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: dev@dpdk.org
CC: Pavan Nikhilesh
Date: Wed, 3 Jul 2019 11:21:35 +0530
Message-ID: <20190703055136.883-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH] app/test-eventdev: optimize producer routine
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

When using the synthetic and timer event producers, reduce the number of
calls made to the mempool library by fetching events with get_bulk()
instead of get().

Signed-off-by: Pavan Nikhilesh
---
 app/test-eventdev/test_perf_common.c | 61 +++++++++++++++-------------
 1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 01f782820..66bfc9173 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -28,6 +28,7 @@ perf_test_result(struct evt_test *test, struct evt_options *opt)
 static inline int
 perf_producer(void *arg)
 {
+	int i;
 	struct prod_data *p = arg;
 	struct test_perf *t = p->t;
 	struct evt_options *opt = t->opt;
@@ -38,7 +39,7 @@ perf_producer(void *arg)
 	const uint32_t nb_flows = t->nb_flows;
 	uint32_t flow_counter = 0;
 	uint64_t count = 0;
-	struct perf_elt *m;
+	struct perf_elt *m[BURST_SIZE + 1] = {NULL};
 	struct rte_event ev;
 
 	if (opt->verbose_level > 1)
@@ -54,19 +55,21 @@ perf_producer(void *arg)
 	ev.sub_event_type = 0; /* stage 0 */
 
 	while (count < nb_pkts && t->done == false) {
-		if (rte_mempool_get(pool, (void **)&m) < 0)
+		if (rte_mempool_get_bulk(pool, (void **)m, BURST_SIZE) < 0)
 			continue;
-
-		ev.flow_id = flow_counter++ % nb_flows;
-		ev.event_ptr = m;
-		m->timestamp = rte_get_timer_cycles();
-		while (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) {
-			if (t->done)
-				break;
-			rte_pause();
-			m->timestamp = rte_get_timer_cycles();
+		for (i = 0; i < BURST_SIZE; i++) {
+			ev.flow_id = flow_counter++ % nb_flows;
+			ev.event_ptr = m[i];
+			m[i]->timestamp = rte_get_timer_cycles();
+			while (rte_event_enqueue_burst(dev_id,
+						port, &ev, 1) != 1) {
+				if (t->done)
+					break;
+				rte_pause();
+				m[i]->timestamp = rte_get_timer_cycles();
+			}
 		}
-		count++;
+		count += BURST_SIZE;
 	}
 
 	return 0;
@@ -75,6 +78,7 @@ perf_producer(void *arg)
 static inline int
 perf_event_timer_producer(void *arg)
 {
+	int i;
 	struct prod_data *p = arg;
 	struct test_perf *t = p->t;
 	struct evt_options *opt = t->opt;
@@ -85,7 +89,7 @@ perf_event_timer_producer(void *arg)
 	const uint32_t nb_flows = t->nb_flows;
 	const uint64_t nb_timers = opt->nb_timers;
 	struct rte_mempool *pool = t->pool;
-	struct perf_elt *m;
+	struct perf_elt *m[BURST_SIZE + 1] = {NULL};
 	struct rte_event_timer_adapter **adptr = t->timer_adptr;
 	struct rte_event_timer tim;
 	uint64_t timeout_ticks = opt->expiry_nsec / opt->timer_tick_nsec;
@@ -107,23 +111,24 @@ perf_event_timer_producer(void *arg)
 	printf("%s(): lcore %d\n", __func__, rte_lcore_id());
 
 	while (count < nb_timers && t->done == false) {
-		if (rte_mempool_get(pool, (void **)&m) < 0)
+		if (rte_mempool_get_bulk(pool, (void **)m, BURST_SIZE) < 0)
 			continue;
-
-		m->tim = tim;
-		m->tim.ev.flow_id = flow_counter++ % nb_flows;
-		m->tim.ev.event_ptr = m;
-		m->timestamp = rte_get_timer_cycles();
-		while (rte_event_timer_arm_burst(
-				adptr[flow_counter % nb_timer_adptrs],
-				(struct rte_event_timer **)&m, 1) != 1) {
-			if (t->done)
-				break;
-			rte_pause();
-			m->timestamp = rte_get_timer_cycles();
+		for (i = 0; i < BURST_SIZE; i++) {
+			rte_prefetch0(m[i + 1]);
+			m[i]->tim = tim;
+			m[i]->tim.ev.flow_id = flow_counter++ % nb_flows;
+			m[i]->tim.ev.event_ptr = m[i];
+			m[i]->timestamp = rte_get_timer_cycles();
+			while (rte_event_timer_arm_burst(
+					adptr[flow_counter % nb_timer_adptrs],
+					(struct rte_event_timer **)&m[i], 1) != 1) {
+				if (t->done)
+					break;
+				m[i]->timestamp = rte_get_timer_cycles();
+			}
+			arm_latency += rte_get_timer_cycles() - m[i]->timestamp;
 		}
-		arm_latency += rte_get_timer_cycles() - m->timestamp;
-		count++;
+		count += BURST_SIZE;
 	}
 
 	fflush(stdout);
 	rte_delay_ms(1000);
-- 
2.17.1