From mboxrd@z Thu Jan 1 00:00:00 1970
From: Volodymyr Fialko <vfialko@marvell.com>
To: Jerin Jacob, Abhinandan Gujjar, Fan Zhang, Akhil Goyal, Anoob Joseph
CC: Volodymyr Fialko, stable@dpdk.org
Subject: [PATCH v2 2/3] app/testeventdev: fix asymmetric last stage handling
Date: Fri, 4 Nov 2022 13:25:55 +0100
Message-ID: <20221104122556.751286-3-vfialko@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221104122556.751286-1-vfialko@marvell.com>
References: <20221103175347.651579-1-vfialko@marvell.com> <20221104122556.751286-1-vfialko@marvell.com>

For the asymmetric crypto producer, the event type check in
`process_crypto_request` will not pass when the pipeline has multiple
stages, because the event type is overwritten when the event is
forwarded. Dispatch based on the producer type instead.
Fixes: 8f5b549502d1 ("app/eventdev: support asym ops for crypto adapter")
Cc: stable@dpdk.org

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test-eventdev/test_perf_atq.c    | 10 +++++-----
 app/test-eventdev/test_perf_common.h | 11 +++++------
 app/test-eventdev/test_perf_queue.c  | 10 +++++-----
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 8326f54045..2b71f30b66 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -74,10 +74,10 @@ perf_atq_worker(void *arg, const int enable_fwd_latency)
 		/* last stage in pipeline */
 		if (unlikely((ev.sub_event_type % nb_stages) == laststage)) {
 			if (enable_fwd_latency)
-				cnt = perf_process_last_stage_latency(pool,
+				cnt = perf_process_last_stage_latency(pool, prod_crypto_type,
 					&ev, w, bufs, sz, cnt);
 			else
-				cnt = perf_process_last_stage(pool, &ev, w,
+				cnt = perf_process_last_stage(pool, prod_crypto_type, &ev, w,
 					bufs, sz, cnt);
 		} else {
 			atq_fwd_event(&ev, sched_type_list, nb_stages);
@@ -141,10 +141,10 @@ perf_atq_worker_burst(void *arg, const int enable_fwd_latency)
 			if (unlikely((ev[i].sub_event_type % nb_stages) == laststage)) {
 				if (enable_fwd_latency)
-					cnt = perf_process_last_stage_latency(
-						pool, &ev[i], w, bufs, sz, cnt);
+					cnt = perf_process_last_stage_latency(pool,
+						prod_crypto_type, &ev[i], w, bufs, sz, cnt);
 				else
-					cnt = perf_process_last_stage(pool,
+					cnt = perf_process_last_stage(pool, prod_crypto_type,
 						&ev[i], w, bufs, sz, cnt);

 				ev[i].op = RTE_EVENT_OP_RELEASE;
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index d06d52cdf8..5b075bfbc4 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -108,7 +108,7 @@ struct perf_elt {
 		rte_lcore_id(), dev, port)

 static __rte_always_inline int
-perf_process_last_stage(struct rte_mempool *const pool,
+perf_process_last_stage(struct rte_mempool *const pool, uint8_t prod_crypto_type,
 		struct rte_event *const ev, struct worker_data *const w,
 		void *bufs[], int const buf_sz, uint8_t count)
 {
@@ -119,7 +119,7 @@ perf_process_last_stage(struct rte_mempool *const pool,
 	rte_atomic_thread_fence(__ATOMIC_RELEASE);
 	w->processed_pkts++;

-	if (ev->event_type == RTE_EVENT_TYPE_CRYPTODEV &&
+	if (prod_crypto_type &&
 	    ((struct rte_crypto_op *)ev->event_ptr)->type ==
 	    RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
 		struct rte_crypto_op *op = ev->event_ptr;
@@ -137,7 +137,7 @@ perf_process_last_stage(struct rte_mempool *const pool,
 }

 static __rte_always_inline uint8_t
-perf_process_last_stage_latency(struct rte_mempool *const pool,
+perf_process_last_stage_latency(struct rte_mempool *const pool, uint8_t prod_crypto_type,
 		struct rte_event *const ev, struct worker_data *const w,
 		void *bufs[], int const buf_sz, uint8_t count)
 {
@@ -151,9 +151,8 @@ perf_process_last_stage_latency(struct rte_mempool *const pool,
 	rte_atomic_thread_fence(__ATOMIC_RELEASE);
 	w->processed_pkts++;

-	if (ev->event_type == RTE_EVENT_TYPE_CRYPTODEV &&
-	    ((struct rte_crypto_op *)m)->type ==
-	    RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+	if (prod_crypto_type &&
+	    ((struct rte_crypto_op *)m)->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
 		rte_free(((struct rte_crypto_op *)m)->asym->modex.result.data);
 		rte_crypto_op_free((struct rte_crypto_op *)m);
 	} else {
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 814ab9f9bd..38509eddbb 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -76,10 +76,10 @@ perf_queue_worker(void *arg, const int enable_fwd_latency)
 		/* last stage in pipeline */
 		if (unlikely((ev.queue_id % nb_stages) == laststage)) {
 			if (enable_fwd_latency)
-				cnt = perf_process_last_stage_latency(pool,
+				cnt = perf_process_last_stage_latency(pool, prod_crypto_type,
 					&ev, w, bufs, sz, cnt);
 			else
-				cnt = perf_process_last_stage(pool,
+				cnt = perf_process_last_stage(pool, prod_crypto_type,
 					&ev, w, bufs, sz, cnt);
 		} else {
 			fwd_event(&ev, sched_type_list, nb_stages);
@@ -143,10 +143,10 @@ perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
 			if (unlikely((ev[i].queue_id % nb_stages) == laststage)) {
 				if (enable_fwd_latency)
-					cnt = perf_process_last_stage_latency(
-						pool, &ev[i], w, bufs, sz, cnt);
+					cnt = perf_process_last_stage_latency(pool,
+						prod_crypto_type, &ev[i], w, bufs, sz, cnt);
 				else
-					cnt = perf_process_last_stage(pool,
+					cnt = perf_process_last_stage(pool, prod_crypto_type,
 						&ev[i], w, bufs, sz, cnt);

 				ev[i].op = RTE_EVENT_OP_RELEASE;
--
2.25.1