From: Volodymyr Fialko <vfialko@marvell.com>
To: Jerin Jacob, Akhil Goyal, Abhinandan Gujjar, Shijith Thotton
Cc: Volodymyr Fialko, stable@dpdk.org
Subject: [PATCH v2 3/3] app/testeventdev: fix timestamp with crypto producer
Date: Fri, 4 Nov 2022 13:25:56 +0100
Message-ID: <20221104122556.751286-4-vfialko@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221104122556.751286-1-vfialko@marvell.com>
References: <20221103175347.651579-1-vfialko@marvell.com>
 <20221104122556.751286-1-vfialko@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches

With the symmetric crypto producer and `--fwd_latency` enabled, the
rte_mbuf carried in the event was treated as a perf_elt, so the
timestamp overwrote part of the rte_mbuf header and corrupted it.
Store the timestamp in the rte_mbuf data area instead, and start the
crypto operation past it. For the asymmetric case, reserve extra space
after the modex result data to hold the timestamp.
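To make the mechanism concrete, the sketch below illustrates how the
symmetric case is expected to work after this patch: a perf_elt occupies
the start of the mbuf data area and the cipher operates at an offset past
it, so the timestamp survives the crypto operation and the rte_mbuf header
is never touched. This is a simplified illustration only, not part of the
patch: struct perf_elt is reduced here to its timestamp field, and
fill_sym_op()/mark_ts() are hypothetical helper names; the real changes
are in the diff below.

#include <rte_mbuf.h>
#include <rte_cycles.h>
#include <rte_cryptodev.h>

/* Simplified stand-in for the real struct perf_elt in test_perf_common.h. */
struct perf_elt {
	uint64_t timestamp;
};

/* Producer side: keep the first sizeof(struct perf_elt) bytes of the mbuf
 * data area out of the cipher's range so a worker can stamp and read it.
 * Assumes m is a freshly allocated mbuf and len already accounts for the
 * perf_elt header (len >= sizeof(struct perf_elt)).
 */
static void
fill_sym_op(struct rte_crypto_op *op, struct rte_mbuf *m, uint16_t len)
{
	uint16_t offset = sizeof(struct perf_elt);

	(void)rte_pktmbuf_append(m, len);          /* reserve len bytes of data */
	op->sym->m_src = m;
	op->sym->cipher.data.offset = offset;      /* cipher starts after perf_elt */
	op->sym->cipher.data.length = len - offset;
}

/* First pipeline stage: mark the forward-latency timestamp in the mbuf data. */
static void
mark_ts(struct rte_mbuf *m)
{
	struct perf_elt *pe = rte_pktmbuf_mtod(m, struct perf_elt *);

	pe->timestamp = rte_get_timer_cycles();
}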
Fixes: de2bc16e1bd1 ("app/eventdev: add crypto producer mode")
Cc: stable@dpdk.org

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test-eventdev/test_perf_atq.c    |  64 ++++-----------
 app/test-eventdev/test_perf_common.c |  47 +++++++++---
 app/test-eventdev/test_perf_common.h | 111 +++++++++++++++++++++------
 app/test-eventdev/test_perf_queue.c  |  71 +++++------------
 4 files changed, 160 insertions(+), 133 deletions(-)

diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 2b71f30b66..9d30081117 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -14,16 +14,6 @@ atq_nb_event_queues(struct evt_options *opt)
 		rte_eth_dev_count_avail() : evt_nr_active_lcores(opt->plcores);
 }
 
-static __rte_always_inline void
-atq_mark_fwd_latency(struct rte_event *const ev)
-{
-	if (unlikely(ev->sub_event_type == 0)) {
-		struct perf_elt *const m = ev->event_ptr;
-
-		m->timestamp = rte_get_timer_cycles();
-	}
-}
-
 static __rte_always_inline void
 atq_fwd_event(struct rte_event *const ev, uint8_t *const sched_type_list,
 		const uint8_t nb_stages)
@@ -37,9 +27,11 @@ atq_fwd_event(struct rte_event *const ev, uint8_t *const sched_type_list,
 static int
 perf_atq_worker(void *arg, const int enable_fwd_latency)
 {
+	struct perf_elt *pe = NULL;
 	uint16_t enq = 0, deq = 0;
 	struct rte_event ev;
 	PERF_WORKER_INIT;
+	uint8_t stage;
 
 	while (t->done == false) {
 		deq = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
@@ -49,30 +41,18 @@ perf_atq_worker(void *arg, const int enable_fwd_latency)
 			continue;
 		}
 
-		if (prod_crypto_type &&
-		    (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
-			struct rte_crypto_op *op = ev.event_ptr;
-
-			if (op->status == RTE_CRYPTO_OP_STATUS_SUCCESS) {
-				if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
-					if (op->sym->m_dst == NULL)
-						ev.event_ptr = op->sym->m_src;
-					else
-						ev.event_ptr = op->sym->m_dst;
-					rte_crypto_op_free(op);
-				}
-			} else {
-				rte_crypto_op_free(op);
+		if (prod_crypto_type && (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
+			if (perf_handle_crypto_ev(&ev, &pe, enable_fwd_latency))
 				continue;
-			}
 		}
 
-		if (enable_fwd_latency && !prod_timer_type)
+		stage = ev.sub_event_type % nb_stages;
+		if (enable_fwd_latency && !prod_timer_type && stage == 0)
 		/* first stage in pipeline, mark ts to compute fwd latency */
-			atq_mark_fwd_latency(&ev);
+			perf_mark_fwd_latency(ev.event_ptr);
 
 		/* last stage in pipeline */
-		if (unlikely((ev.sub_event_type % nb_stages) == laststage)) {
+		if (unlikely(stage == laststage)) {
 			if (enable_fwd_latency)
 				cnt = perf_process_last_stage_latency(pool, prod_crypto_type,
 					&ev, w, bufs, sz, cnt);
@@ -99,7 +79,9 @@ perf_atq_worker_burst(void *arg, const int enable_fwd_latency)
 	/* +1 to avoid prefetch out of array check */
 	struct rte_event ev[BURST_SIZE + 1];
 	uint16_t enq = 0, nb_rx = 0;
+	struct perf_elt *pe = NULL;
 	PERF_WORKER_INIT;
+	uint8_t stage;
 	uint16_t i;
 
 	while (t->done == false) {
@@ -111,35 +93,21 @@ perf_atq_worker_burst(void *arg, const int enable_fwd_latency)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
-			if (prod_crypto_type &&
-				(ev[i].event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
-				struct rte_crypto_op *op = ev[i].event_ptr;
-
-				if (op->status ==
-						RTE_CRYPTO_OP_STATUS_SUCCESS) {
-					if (op->sym->m_dst == NULL)
-						ev[i].event_ptr =
-							op->sym->m_src;
-					else
-						ev[i].event_ptr =
-							op->sym->m_dst;
-					rte_crypto_op_free(op);
-				} else {
-					rte_crypto_op_free(op);
+			if (prod_crypto_type && (ev[i].event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
+				if (perf_handle_crypto_ev(&ev[i], &pe, enable_fwd_latency))
 					continue;
-				}
 			}
 
-			if (enable_fwd_latency && !prod_timer_type) {
+			stage = ev[i].sub_event_type % nb_stages;
+			if (enable_fwd_latency && !prod_timer_type && stage == 0) {
 				rte_prefetch0(ev[i+1].event_ptr);
 				/* first stage in pipeline.
 				 * mark time stamp to compute fwd latency
 				 */
-				atq_mark_fwd_latency(&ev[i]);
+				perf_mark_fwd_latency(ev[i].event_ptr);
 			}
 			/* last stage in pipeline */
-			if (unlikely((ev[i].sub_event_type % nb_stages)
-					== laststage)) {
+			if (unlikely(stage == laststage)) {
 				if (enable_fwd_latency)
 					cnt = perf_process_last_stage_latency(pool,
 						prod_crypto_type, &ev[i], w, bufs, sz, cnt);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 6aae18fddb..140c0c2dc3 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -370,16 +370,17 @@ crypto_adapter_enq_op_new(struct prod_data *p)
 	uint64_t alloc_failures = 0;
 	uint32_t flow_counter = 0;
 	struct rte_crypto_op *op;
+	uint16_t len, offset;
 	struct rte_mbuf *m;
 	uint64_t count = 0;
-	uint16_t len;
 
 	if (opt->verbose_level > 1)
 		printf("%s(): lcore %d queue %d cdev_id %u cdev_qp_id %u\n",
 		       __func__, rte_lcore_id(), p->queue_id, p->ca.cdev_id,
 		       p->ca.cdev_qp_id);
 
-	len = opt->mbuf_sz ? opt->mbuf_sz : RTE_ETHER_MIN_LEN;
+	offset = sizeof(struct perf_elt);
+	len = RTE_MAX(RTE_ETHER_MIN_LEN + offset, opt->mbuf_sz);
 
 	while (count < nb_pkts && t->done == false) {
 		if (opt->crypto_op_type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
@@ -402,19 +403,24 @@ crypto_adapter_enq_op_new(struct prod_data *p)
 			rte_pktmbuf_append(m, len);
 			sym_op = op->sym;
 			sym_op->m_src = m;
-			sym_op->cipher.data.offset = 0;
-			sym_op->cipher.data.length = len;
+			sym_op->cipher.data.offset = offset;
+			sym_op->cipher.data.length = len - offset;
 			rte_crypto_op_attach_sym_session(
 				op, p->ca.crypto_sess[flow_counter++ % nb_flows]);
 		} else {
 			struct rte_crypto_asym_op *asym_op;
-			uint8_t *result = rte_zmalloc(NULL,
-					modex_test_case.result_len, 0);
+			uint8_t *result;
+
+			if (rte_mempool_get(pool, (void **)&result)) {
+				alloc_failures++;
+				continue;
+			}
 
 			op = rte_crypto_op_alloc(t->ca_op_pool,
 					RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
 			if (unlikely(op == NULL)) {
 				alloc_failures++;
+				rte_mempool_put(pool, result);
 				continue;
 			}
 
@@ -451,10 +457,10 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 	uint64_t alloc_failures = 0;
 	uint32_t flow_counter = 0;
 	struct rte_crypto_op *op;
+	uint16_t len, offset;
 	struct rte_event ev;
 	struct rte_mbuf *m;
 	uint64_t count = 0;
-	uint16_t len;
 
 	if (opt->verbose_level > 1)
 		printf("%s(): lcore %d port %d queue %d cdev_id %u cdev_qp_id %u\n",
@@ -466,7 +472,9 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 	ev.queue_id = p->queue_id;
 	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
 	ev.event_type = RTE_EVENT_TYPE_CPU;
-	len = opt->mbuf_sz ? opt->mbuf_sz : RTE_ETHER_MIN_LEN;
+
+	offset = sizeof(struct perf_elt);
+	len = RTE_MAX(RTE_ETHER_MIN_LEN + offset, opt->mbuf_sz);
 
 	while (count < nb_pkts && t->done == false) {
 		if (opt->crypto_op_type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
@@ -489,19 +497,24 @@ crypto_adapter_enq_op_fwd(struct prod_data *p)
 			rte_pktmbuf_append(m, len);
 			sym_op = op->sym;
 			sym_op->m_src = m;
-			sym_op->cipher.data.offset = 0;
-			sym_op->cipher.data.length = len;
+			sym_op->cipher.data.offset = offset;
+			sym_op->cipher.data.length = len - offset;
 			rte_crypto_op_attach_sym_session(
 				op, p->ca.crypto_sess[flow_counter++ % nb_flows]);
 		} else {
 			struct rte_crypto_asym_op *asym_op;
-			uint8_t *result = rte_zmalloc(NULL,
-					modex_test_case.result_len, 0);
+			uint8_t *result;
+
+			if (rte_mempool_get(pool, (void **)&result)) {
+				alloc_failures++;
+				continue;
+			}
 
 			op = rte_crypto_op_alloc(t->ca_op_pool,
 					RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
 			if (unlikely(op == NULL)) {
 				alloc_failures++;
+				rte_mempool_put(pool, result);
 				continue;
 			}
 
@@ -1510,6 +1523,16 @@ perf_mempool_setup(struct evt_test *test, struct evt_options *opt)
 				0, NULL, NULL,
 				perf_elt_init, /* obj constructor */
 				NULL, opt->socket_id, 0); /* flags */
+	} else if (opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR &&
+			opt->crypto_op_type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+		t->pool = rte_mempool_create(test->name, /* mempool name */
+				opt->pool_sz, /* number of elements*/
+				sizeof(struct perf_elt) + modex_test_case.result_len,
+				/* element size*/
+				512, /* cache size*/
+				0, NULL, NULL,
+				NULL, /* obj constructor */
+				NULL, opt->socket_id, 0); /* flags */
 	} else {
 		t->pool = rte_pktmbuf_pool_create(test->name, /* mempool name */
 			opt->pool_sz, /* number of elements*/
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index 5b075bfbc4..503b6aa1db 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -107,11 +107,50 @@ struct perf_elt {
 	printf("%s(): lcore %d dev_id %d port=%d\n", __func__,\
 		rte_lcore_id(), dev, port)
 
+static __rte_always_inline void
+perf_mark_fwd_latency(struct perf_elt *const pe)
+{
+	pe->timestamp = rte_get_timer_cycles();
+}
+
+static __rte_always_inline int
+perf_handle_crypto_ev(struct rte_event *ev, struct perf_elt **pe, int enable_fwd_latency)
+{
+	struct rte_crypto_op *op = ev->event_ptr;
+	struct rte_mbuf *m;
+
+
+	if (unlikely(op->status != RTE_CRYPTO_OP_STATUS_SUCCESS)) {
+		rte_crypto_op_free(op);
+		return op->status;
+	}
+
+	/* Forward latency not enabled - perf data will not be accessed */
+	if (!enable_fwd_latency)
+		return 0;
+
+	/* Get pointer to perf data */
+	if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+		if (op->sym->m_dst == NULL)
+			m = op->sym->m_src;
+		else
+			m = op->sym->m_dst;
+		*pe = rte_pktmbuf_mtod(m, struct perf_elt *);
+	} else {
+		*pe = RTE_PTR_ADD(op->asym->modex.result.data, op->asym->modex.result.length);
+	}
+
+	return 0;
+}
+
+
 static __rte_always_inline int
 perf_process_last_stage(struct rte_mempool *const pool, uint8_t prod_crypto_type,
 		struct rte_event *const ev, struct worker_data *const w,
 		void *bufs[], int const buf_sz, uint8_t count)
 {
+	void *to_free_in_bulk;
+
 	/* release fence here ensures event_prt is
 	 * stored before updating the number of
 	 * processed packets for worker lcores
@@ -119,20 +158,31 @@ perf_process_last_stage(struct rte_mempool *const pool, uint8_t prod_crypto_type
 	rte_atomic_thread_fence(__ATOMIC_RELEASE);
 	w->processed_pkts++;
 
-	if (prod_crypto_type &&
-			((struct rte_crypto_op *)ev->event_ptr)->type ==
-			RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+	if (prod_crypto_type) {
 		struct rte_crypto_op *op = ev->event_ptr;
+		struct rte_mbuf *m;
+
+		if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+			if (op->sym->m_dst == NULL)
+				m = op->sym->m_src;
+			else
+				m = op->sym->m_dst;
 
-		rte_free(op->asym->modex.result.data);
+			to_free_in_bulk = m;
+		} else {
+			to_free_in_bulk = op->asym->modex.result.data;
+		}
 		rte_crypto_op_free(op);
 	} else {
-		bufs[count++] = ev->event_ptr;
-		if (unlikely(count == buf_sz)) {
-			count = 0;
-			rte_mempool_put_bulk(pool, bufs, buf_sz);
-		}
+		to_free_in_bulk = ev->event_ptr;
 	}
+
+	bufs[count++] = to_free_in_bulk;
+	if (unlikely(count == buf_sz)) {
+		count = 0;
+		rte_mempool_put_bulk(pool, bufs, buf_sz);
+	}
+
 	return count;
 }
 
@@ -142,7 +192,8 @@ perf_process_last_stage_latency(struct rte_mempool *const pool, uint8_t prod_cry
 		void *bufs[], int const buf_sz, uint8_t count)
 {
 	uint64_t latency;
-	struct perf_elt *const m = ev->event_ptr;
+	struct perf_elt *pe;
+	void *to_free_in_bulk;
 
 	/* release fence here ensures event_prt is
 	 * stored before updating the number of
@@ -151,22 +202,38 @@ perf_process_last_stage_latency(struct rte_mempool *const pool, uint8_t prod_cry
 	rte_atomic_thread_fence(__ATOMIC_RELEASE);
 	w->processed_pkts++;
 
-	if (prod_crypto_type &&
-			((struct rte_crypto_op *)m)->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
-		rte_free(((struct rte_crypto_op *)m)->asym->modex.result.data);
-		rte_crypto_op_free((struct rte_crypto_op *)m);
-	} else {
-		bufs[count++] = ev->event_ptr;
-		if (unlikely(count == buf_sz)) {
-			count = 0;
-			latency = rte_get_timer_cycles() - m->timestamp;
-			rte_mempool_put_bulk(pool, bufs, buf_sz);
+	if (prod_crypto_type) {
+		struct rte_crypto_op *op = ev->event_ptr;
+		struct rte_mbuf *m;
+
+		if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+			if (op->sym->m_dst == NULL)
+				m = op->sym->m_src;
+			else
+				m = op->sym->m_dst;
+
+			to_free_in_bulk = m;
+			pe = rte_pktmbuf_mtod(m, struct perf_elt *);
 		} else {
-			latency = rte_get_timer_cycles() - m->timestamp;
+			pe = RTE_PTR_ADD(op->asym->modex.result.data,
+					op->asym->modex.result.length);
+			to_free_in_bulk = op->asym->modex.result.data;
 		}
+		rte_crypto_op_free(op);
+	} else {
+		pe = ev->event_ptr;
+		to_free_in_bulk = pe;
+	}
 
-		w->latency += latency;
+	latency = rte_get_timer_cycles() - pe->timestamp;
+	w->latency += latency;
+
+	bufs[count++] = to_free_in_bulk;
+	if (unlikely(count == buf_sz)) {
+		count = 0;
+		rte_mempool_put_bulk(pool, bufs, buf_sz);
 	}
+
 	return count;
 }
 
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 38509eddbb..69ef0ebbac 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -15,17 +15,6 @@ perf_queue_nb_event_queues(struct evt_options *opt)
 	return nb_prod * opt->nb_stages;
 }
 
-static __rte_always_inline void
-mark_fwd_latency(struct rte_event *const ev,
-		const uint8_t nb_stages)
-{
-	if (unlikely((ev->queue_id % nb_stages) == 0)) {
-		struct perf_elt *const m = ev->event_ptr;
-
-		m->timestamp = rte_get_timer_cycles();
-	}
-}
-
 static __rte_always_inline void
 fwd_event(struct rte_event *const ev, uint8_t *const sched_type_list,
 		const uint8_t nb_stages)
@@ -39,9 +28,12 @@ fwd_event(struct rte_event *const ev, uint8_t *const sched_type_list,
 static int
 perf_queue_worker(void *arg, const int enable_fwd_latency)
 {
+	struct perf_elt *pe = NULL;
 	uint16_t enq = 0, deq = 0;
 	struct rte_event ev;
 	PERF_WORKER_INIT;
+	uint8_t stage;
+
 
 	while (t->done == false) {
 		deq = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
@@ -51,30 +43,20 @@ perf_queue_worker(void *arg, const int enable_fwd_latency)
 			continue;
 		}
 
-		if (prod_crypto_type &&
-		    (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
-			struct rte_crypto_op *op = ev.event_ptr;
-
-			if (op->status == RTE_CRYPTO_OP_STATUS_SUCCESS) {
-				if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
-					if (op->sym->m_dst == NULL)
-						ev.event_ptr = op->sym->m_src;
-					else
-						ev.event_ptr = op->sym->m_dst;
-					rte_crypto_op_free(op);
-				}
-			} else {
-				rte_crypto_op_free(op);
+		if (prod_crypto_type && (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
+			if (perf_handle_crypto_ev(&ev, &pe, enable_fwd_latency))
 				continue;
-			}
+		} else {
+			pe = ev.event_ptr;
 		}
 
-		if (enable_fwd_latency && !prod_timer_type)
+		stage = ev.queue_id % nb_stages;
+		if (enable_fwd_latency && !prod_timer_type && stage == 0)
 		/* first q in pipeline, mark timestamp to compute fwd latency */
-			mark_fwd_latency(&ev, nb_stages);
+			perf_mark_fwd_latency(pe);
 
 		/* last stage in pipeline */
-		if (unlikely((ev.queue_id % nb_stages) == laststage)) {
+		if (unlikely(stage == laststage)) {
 			if (enable_fwd_latency)
 				cnt = perf_process_last_stage_latency(pool, prod_crypto_type,
 					&ev, w, bufs, sz, cnt);
@@ -84,8 +66,7 @@ perf_queue_worker(void *arg, const int enable_fwd_latency)
 		} else {
 			fwd_event(&ev, sched_type_list, nb_stages);
 			do {
-				enq = rte_event_enqueue_burst(dev, port, &ev,
-						1);
+				enq = rte_event_enqueue_burst(dev, port, &ev, 1);
 			} while (!enq && !t->done);
 		}
 	}
@@ -101,7 +82,9 @@ perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
 	/* +1 to avoid prefetch out of array check */
 	struct rte_event ev[BURST_SIZE + 1];
 	uint16_t enq = 0, nb_rx = 0;
+	struct perf_elt *pe = NULL;
 	PERF_WORKER_INIT;
+	uint8_t stage;
 	uint16_t i;
 
 	while (t->done == false) {
@@ -113,35 +96,21 @@ perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
 		}
 
 		for (i = 0; i < nb_rx; i++) {
-			if (prod_crypto_type &&
-				(ev[i].event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
-				struct rte_crypto_op *op = ev[i].event_ptr;
-
-				if (op->status ==
-						RTE_CRYPTO_OP_STATUS_SUCCESS) {
-					if (op->sym->m_dst == NULL)
-						ev[i].event_ptr =
-							op->sym->m_src;
-					else
-						ev[i].event_ptr =
-							op->sym->m_dst;
-					rte_crypto_op_free(op);
-				} else {
-					rte_crypto_op_free(op);
+			if (prod_crypto_type && (ev[i].event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
+				if (perf_handle_crypto_ev(&ev[i], &pe, enable_fwd_latency))
 					continue;
-				}
 			}
 
-			if (enable_fwd_latency && !prod_timer_type) {
+			stage = ev[i].queue_id % nb_stages;
+			if (enable_fwd_latency && !prod_timer_type && stage == 0) {
 				rte_prefetch0(ev[i+1].event_ptr);
 				/* first queue in pipeline.
 				 * mark time stamp to compute fwd latency
 				 */
-				mark_fwd_latency(&ev[i], nb_stages);
+				perf_mark_fwd_latency(ev[i].event_ptr);
 			}
 			/* last stage in pipeline */
-			if (unlikely((ev[i].queue_id % nb_stages)
-					== laststage)) {
+			if (unlikely(stage == laststage)) {
 				if (enable_fwd_latency)
 					cnt = perf_process_last_stage_latency(pool,
 						prod_crypto_type, &ev[i], w, bufs, sz, cnt);
-- 
2.25.1