From: 
To: 
CC: , Pavan Nikhilesh 
Subject: [PATCH v2 2/3] app/eventdev: use enqueue new event burst routine
Date: Wed, 26 Apr 2023 01:21:09 +0530
Message-ID: <20230425195110.4223-2-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425195110.4223-1-pbhagavatula@marvell.com>
References: <20230419200151.2474-1-pbhagavatula@marvell.com>
 <20230425195110.4223-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh 

Use the `rte_event_enqueue_new_burst` routine to enqueue events with
rte_event::op set to RTE_EVENT_OP_NEW. This allows PMDs to use their
optimized enqueue routines.
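For reference, the sketch below (illustrative only, not part of this patch;
the helper name and retry policy are hypothetical) shows the intent of the
change: rte_event_enqueue_new_burst() lets the application declare up front
that every event in the burst is RTE_EVENT_OP_NEW, so a PMD can take a
specialized enqueue path instead of inspecting rte_event::op per event as
the generic rte_event_enqueue_burst() must.

#include <rte_eventdev.h>

/* Illustrative helper: enqueue a burst of events that are all new work
 * (RTE_EVENT_OP_NEW), retrying until the whole burst is accepted.
 */
static inline void
enqueue_new_burst_blocking(uint8_t dev_id, uint8_t port,
			   struct rte_event *ev, uint16_t n)
{
	uint16_t i, enq = 0;

	for (i = 0; i < n; i++)
		ev[i].op = RTE_EVENT_OP_NEW;

	/* Every event is guaranteed to be OP_NEW, so the specialized
	 * enqueue routine can be used instead of the generic one.
	 */
	while (enq < n)
		enq += rte_event_enqueue_new_burst(dev_id, port, ev + enq,
						   n - enq);
}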
Signed-off-by: Pavan Nikhilesh 
---
 app/test-eventdev/evt_options.c      |  2 +-
 app/test-eventdev/test_perf_common.c | 58 +++++++++++++++++-----------
 2 files changed, 37 insertions(+), 23 deletions(-)

diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index b175c067cd..03fb3bfce0 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -27,7 +27,7 @@ evt_options_default(struct evt_options *opt)
 	opt->nb_flows = 1024;
 	opt->socket_id = SOCKET_ID_ANY;
 	opt->pool_sz = 16 * 1024;
-	opt->prod_enq_burst_sz = 1;
+	opt->prod_enq_burst_sz = 0;
 	opt->wkr_deq_dep = 16;
 	opt->nb_pkts = (1ULL << 26); /* do ~64M packets */
 	opt->nb_timers = 1E8;
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index fd434666cb..68af3cb346 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -131,8 +131,10 @@ perf_producer(void *arg)
 	uint32_t flow_counter = 0;
 	uint64_t count = 0;
 	struct perf_elt *m[BURST_SIZE + 1] = {NULL};
+	uint8_t enable_fwd_latency;
 	struct rte_event ev;
 
+	enable_fwd_latency = opt->fwd_latency;
 	if (opt->verbose_level > 1)
 		printf("%s(): lcore %d dev_id %d port=%d queue %d\n", __func__,
 		       rte_lcore_id(), dev_id, port, p->queue_id);
@@ -151,13 +153,16 @@
 		for (i = 0; i < BURST_SIZE; i++) {
 			ev.flow_id = flow_counter++ % nb_flows;
 			ev.event_ptr = m[i];
-			m[i]->timestamp = rte_get_timer_cycles();
-			while (rte_event_enqueue_burst(dev_id,
-						       port, &ev, 1) != 1) {
+			if (enable_fwd_latency)
+				m[i]->timestamp = rte_get_timer_cycles();
+			while (rte_event_enqueue_new_burst(dev_id, port, &ev,
+							   1) != 1) {
 				if (t->done)
 					break;
 				rte_pause();
-				m[i]->timestamp = rte_get_timer_cycles();
+				if (enable_fwd_latency)
+					m[i]->timestamp =
+						rte_get_timer_cycles();
 			}
 		}
 		count += BURST_SIZE;
@@ -171,7 +176,6 @@ perf_producer_burst(void *arg)
 {
 	uint32_t i;
 	uint64_t timestamp;
-	struct rte_event_dev_info dev_info;
 	struct prod_data *p = arg;
 	struct test_perf *t = p->t;
 	struct evt_options *opt = t->opt;
@@ -183,15 +187,13 @@
 	uint32_t flow_counter = 0;
 	uint16_t enq = 0;
 	uint64_t count = 0;
-	struct perf_elt *m[MAX_PROD_ENQ_BURST_SIZE + 1];
-	struct rte_event ev[MAX_PROD_ENQ_BURST_SIZE + 1];
+	struct perf_elt *m[opt->prod_enq_burst_sz + 1];
+	struct rte_event ev[opt->prod_enq_burst_sz + 1];
 	uint32_t burst_size = opt->prod_enq_burst_sz;
+	uint8_t enable_fwd_latency;
 
-	memset(m, 0, sizeof(*m) * (MAX_PROD_ENQ_BURST_SIZE + 1));
-	rte_event_dev_info_get(dev_id, &dev_info);
-	if (dev_info.max_event_port_enqueue_depth < burst_size)
-		burst_size = dev_info.max_event_port_enqueue_depth;
-
+	enable_fwd_latency = opt->fwd_latency;
+	memset(m, 0, sizeof(*m) * (opt->prod_enq_burst_sz + 1));
 	if (opt->verbose_level > 1)
 		printf("%s(): lcore %d dev_id %d port=%d queue %d\n", __func__,
 		       rte_lcore_id(), dev_id, port, p->queue_id);
@@ -212,19 +214,21 @@
 		for (i = 0; i < burst_size; i++) {
 			ev[i].flow_id = flow_counter++ % nb_flows;
 			ev[i].event_ptr = m[i];
-			m[i]->timestamp = timestamp;
+			if (enable_fwd_latency)
+				m[i]->timestamp = timestamp;
 		}
-		enq = rte_event_enqueue_burst(dev_id, port, ev, burst_size);
+		enq = rte_event_enqueue_new_burst(dev_id, port, ev, burst_size);
 		while (enq < burst_size) {
-			enq += rte_event_enqueue_burst(dev_id, port,
-							ev + enq,
-							burst_size - enq);
+			enq += rte_event_enqueue_new_burst(
+				dev_id, port, ev + enq, burst_size - enq);
 			if (t->done)
 				break;
 			rte_pause();
-			timestamp = rte_get_timer_cycles();
-			for (i = enq; i < burst_size; i++)
-				m[i]->timestamp = timestamp;
+			if (enable_fwd_latency) {
+				timestamp = rte_get_timer_cycles();
+				for (i = enq; i < burst_size; i++)
+					m[i]->timestamp = timestamp;
+			}
 		}
 		count += burst_size;
 	}
@@ -799,9 +803,19 @@ perf_event_crypto_producer_burst(void *arg)
 static int
 perf_producer_wrapper(void *arg)
 {
+	struct rte_event_dev_info dev_info;
 	struct prod_data *p = arg;
 	struct test_perf *t = p->t;
-	bool burst = evt_has_burst_mode(p->dev_id);
+
+	rte_event_dev_info_get(p->dev_id, &dev_info);
+	if (!t->opt->prod_enq_burst_sz) {
+		t->opt->prod_enq_burst_sz = MAX_PROD_ENQ_BURST_SIZE;
+		if (dev_info.max_event_port_enqueue_depth > 0 &&
+		    (uint32_t)dev_info.max_event_port_enqueue_depth <
+			    t->opt->prod_enq_burst_sz)
+			t->opt->prod_enq_burst_sz =
+				dev_info.max_event_port_enqueue_depth;
+	}
 
 	/* In case of synthetic producer, launch perf_producer or
 	 * perf_producer_burst depending on producer enqueue burst size
@@ -811,7 +825,7 @@ perf_producer_wrapper(void *arg)
 		return perf_producer(arg);
 	else if (t->opt->prod_type == EVT_PROD_TYPE_SYNT &&
 			t->opt->prod_enq_burst_sz > 1) {
-		if (!burst)
+		if (dev_info.max_event_port_enqueue_depth == 1)
 			evt_err("This event device does not support burst mode");
 		else
 			return perf_producer_burst(arg);
-- 
2.25.1