From mboxrd@z Thu Jan 1 00:00:00 1970
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:37 +0530
Message-ID: <20210829125139.2173235-7-gakhil@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 6/8] crypto/scheduler: rename enq-deq functions

The scheduler PMD has four variants, which all use the same names for
their enqueue and dequeue functions. With the new framework of datapath
APIs this causes multiple definitions of the same functions. Hence the
function names are updated to specify the variant they belong to.
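
To illustrate the renaming scheme, here is a standalone sketch (not DPDK
code; the cryptodev types below are simplified, hypothetical stand-ins for
the real rte_cryptodev definitions): each scheduling mode owns a uniquely
named enqueue/dequeue pair, and its start routine installs the matching
pair into the device, so the variants can coexist under the new datapath
framework without two definitions of the same function name.

/*
 * Standalone illustration only; "toy_*" types and names are stand-ins,
 * not the DPDK API.
 */
#include <stdint.h>

typedef uint16_t (*burst_fn_t)(void *qp, void **ops, uint16_t nb_ops);

struct toy_cryptodev {		/* stand-in for struct rte_cryptodev */
	burst_fn_t enqueue_burst;
	burst_fn_t dequeue_burst;
};

/* Failover variant: pretend every op is accepted and none is ready. */
static uint16_t
schedule_fo_enqueue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops;
	return nb_ops;
}

static uint16_t
schedule_fo_dequeue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops; (void)nb_ops;
	return 0;
}

/* Round-robin variant: same behaviour, different (unique) names. */
static uint16_t
schedule_rr_enqueue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops;
	return nb_ops;
}

static uint16_t
schedule_rr_dequeue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops; (void)nb_ops;
	return 0;
}

enum toy_mode { TOY_MODE_FAILOVER, TOY_MODE_ROUNDROBIN };

/* Same pattern as each variant's scheduler_start() in the patch below:
 * install the burst handlers that belong to the selected mode.
 */
void
toy_scheduler_start(struct toy_cryptodev *dev, enum toy_mode mode)
{
	if (mode == TOY_MODE_FAILOVER) {
		dev->enqueue_burst = schedule_fo_enqueue;
		dev->dequeue_burst = schedule_fo_dequeue;
	} else {
		dev->enqueue_burst = schedule_rr_enqueue;
		dev->dequeue_burst = schedule_rr_dequeue;
	}
}

With one unique name per variant, the four scheduling modes keep their own
burst handlers without ever defining two functions of the same name.
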
Signed-off-by: Akhil Goyal
---
 drivers/crypto/scheduler/scheduler_failover.c | 20 ++++++++++----------
 .../crypto/scheduler/scheduler_multicore.c    | 18 +++++++++---------
 .../scheduler/scheduler_pkt_size_distr.c      | 20 ++++++++++----------
 .../crypto/scheduler/scheduler_roundrobin.c   | 20 ++++++++++----------
 4 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 844312dd1b..88cc8f05f7 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -37,7 +37,7 @@ failover_worker_enqueue(struct scheduler_worker *worker,
 }
 
 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_fo_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct fo_scheduler_qp_ctx *qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -60,14 +60,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 
 static uint16_t
-schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_fo_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_fo_enqueue(qp, ops,
 			nb_ops_to_enq);
 
 	scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -76,7 +76,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 }
 
 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_fo_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct fo_scheduler_qp_ctx *qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -108,13 +108,13 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_fo_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 
-	schedule_dequeue(qp, ops, nb_ops);
+	schedule_fo_dequeue(qp, ops, nb_ops);
 
 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }
@@ -145,11 +145,11 @@ scheduler_start(struct rte_cryptodev *dev)
 	}
 
 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = schedule_enqueue_ordering;
-		dev->dequeue_burst = schedule_dequeue_ordering;
+		dev->enqueue_burst = schedule_fo_enqueue_ordering;
+		dev->dequeue_burst = schedule_fo_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = schedule_enqueue;
-		dev->dequeue_burst = schedule_dequeue;
+		dev->enqueue_burst = schedule_fo_enqueue;
+		dev->dequeue_burst = schedule_fo_dequeue;
 	}
 
 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index 1e2e8dbf9f..bf97343e52 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -36,7 +36,7 @@ struct mc_scheduler_qp_ctx {
 };
 
 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_mc_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct mc_scheduler_qp_ctx *mc_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -64,14 +64,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_mc_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_mc_enqueue(qp, ops,
 			nb_ops_to_enq);
 
 	scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -81,7 +81,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 
 
 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_mc_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct mc_scheduler_qp_ctx *mc_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -107,7 +107,7 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_mc_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
@@ -253,11 +253,11 @@ scheduler_start(struct rte_cryptodev *dev)
 			sched_ctx->wc_pool[i]);
 
 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dequeue_ordering;
+		dev->enqueue_burst = &schedule_mc_enqueue_ordering;
+		dev->dequeue_burst = &schedule_mc_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = &schedule_enqueue;
-		dev->dequeue_burst = &schedule_dequeue;
+		dev->enqueue_burst = &schedule_mc_enqueue;
+		dev->dequeue_burst = &schedule_mc_dequeue;
 	}
 
 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index 57e330a744..b025ab9736 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -34,7 +34,7 @@ struct psd_schedule_op {
 };
 
 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_dist_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct scheduler_qp_ctx *qp_ctx = qp;
 	struct psd_scheduler_qp_ctx *psd_qp_ctx = qp_ctx->private_qp_ctx;
@@ -171,14 +171,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_dist_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_dist_enqueue(qp, ops,
 			nb_ops_to_enq);
 
 	scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -187,7 +187,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 }
 
 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_dist_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct psd_scheduler_qp_ctx *qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -224,13 +224,13 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_dist_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 
-	schedule_dequeue(qp, ops, nb_ops);
+	schedule_dist_dequeue(qp, ops, nb_ops);
 
 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }
@@ -281,11 +281,11 @@ scheduler_start(struct rte_cryptodev *dev)
 	}
 
 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dequeue_ordering;
+		dev->enqueue_burst = &schedule_dist_enqueue_ordering;
+		dev->dequeue_burst = &schedule_dist_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = &schedule_enqueue;
-		dev->dequeue_burst = &schedule_dequeue;
+		dev->enqueue_burst = &schedule_dist_enqueue;
+		dev->dequeue_burst = &schedule_dist_dequeue;
 	}
 
 	return 0;
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
index bc4a632106..95e34401ce 100644
--- a/drivers/crypto/scheduler/scheduler_roundrobin.c
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -17,7 +17,7 @@ struct rr_scheduler_qp_ctx {
 };
 
 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_rr_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rr_scheduler_qp_ctx *rr_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -43,14 +43,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_rr_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_rr_enqueue(qp, ops,
 			nb_ops_to_enq);
 
 	scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -60,7 +60,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 
 
 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_rr_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rr_scheduler_qp_ctx *rr_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -98,13 +98,13 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_rr_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 
-	schedule_dequeue(qp, ops, nb_ops);
+	schedule_rr_dequeue(qp, ops, nb_ops);
 
 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }
@@ -130,11 +130,11 @@ scheduler_start(struct rte_cryptodev *dev)
 	uint16_t i;
 
 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dequeue_ordering;
+		dev->enqueue_burst = &schedule_rr_enqueue_ordering;
+		dev->dequeue_burst = &schedule_rr_dequeue_ordering;
	} else {
-		dev->enqueue_burst = &schedule_enqueue;
-		dev->dequeue_burst = &schedule_dequeue;
+		dev->enqueue_burst = &schedule_rr_enqueue;
+		dev->dequeue_burst = &schedule_rr_dequeue;
 	}
 
 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-- 
2.25.1