From: Hernan Vargas
To: dev@dpdk.org, gakhil@marvell.com, trix@redhat.com
Cc: nicolas.chautru@intel.com, qi.z.zhang@intel.com, Hernan Vargas
Subject: [PATCH v2 03/37] baseband/acc100: add function to check AQ availability
Date: Fri, 19 Aug 2022 19:31:23 -0700
Message-Id: <20220820023157.189047-4-hernan.vargas@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220820023157.189047-1-hernan.vargas@intel.com>
References: <20220820023157.189047-1-hernan.vargas@intel.com>

It is possible in some corner cases to run more batch enqueues than
supported. A protection is required to avoid that corner case.

Enhance all ACC100 enqueue operations with a check to see if there is
room in the atomic queue (AQ) for enqueueing batches into the queue
manager (Qmgr).

Signed-off-by: Hernan Vargas
---
 drivers/baseband/acc100/rte_acc100_pmd.c | 30 ++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 0598d33582..7349bb5bad 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -3635,12 +3635,27 @@ acc100_enqueue_enc_tb(struct rte_bbdev_queue_data *q_data,
 	return i;
 }
 
+/* Check room in AQ for the enqueued batches into Qmgr */
+static int32_t
+acc100_aq_avail(struct rte_bbdev_queue_data *q_data, uint16_t num_ops)
+{
+	struct acc100_queue *q = q_data->queue_private;
+	int32_t aq_avail = q->aq_depth -
+			((q->aq_enqueued - q->aq_dequeued +
+			ACC100_MAX_QUEUE_DEPTH) % ACC100_MAX_QUEUE_DEPTH)
+			- (num_ops >> 7);
+	if (aq_avail <= 0)
+		acc100_enqueue_queue_full(q_data);
+	return aq_avail;
+}
+
 /* Enqueue encode operations for ACC100 device. */
 static uint16_t
 acc100_enqueue_enc(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
-	if (unlikely(num == 0))
+	int32_t aq_avail = acc100_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
 		return 0;
 	if (ops[0]->turbo_enc.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
 		return acc100_enqueue_enc_tb(q_data, ops, num);
@@ -3653,7 +3668,8 @@ static uint16_t
 acc100_enqueue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
-	if (unlikely(num == 0))
+	int32_t aq_avail = acc100_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
 		return 0;
 	if (ops[0]->ldpc_enc.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
 		return acc100_enqueue_enc_tb(q_data, ops, num);
@@ -3850,7 +3866,8 @@ static uint16_t
 acc100_enqueue_dec(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
-	if (unlikely(num == 0))
+	int32_t aq_avail = acc100_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
 		return 0;
 	if (ops[0]->turbo_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
 		return acc100_enqueue_dec_tb(q_data, ops, num);
@@ -3863,11 +3880,8 @@ static uint16_t
 acc100_enqueue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
-	struct acc100_queue *q = q_data->queue_private;
-	int32_t aq_avail = q->aq_depth +
-			(q->aq_dequeued - q->aq_enqueued) / 128;
-
-	if (unlikely((aq_avail == 0) || (num == 0)))
+	int32_t aq_avail = acc100_aq_avail(q_data, num);
+	if (unlikely((aq_avail <= 0) || (num == 0)))
 		return 0;
 
 	if (ops[0]->ldpc_dec.code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK)
-- 
2.37.1
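
[Editor's note] For readers following the arithmetic rather than the driver, below is a
minimal, self-contained sketch of the availability check that acc100_aq_avail()
introduces. It is not part of the patch: the ACC100_MAX_QUEUE_DEPTH value, the
aq_state struct, and the sample counter values are assumptions chosen only so the
example compiles and runs.

/*
 * Standalone sketch (not the driver code) of the AQ availability
 * arithmetic mirrored from the patch above. Placeholder values only.
 */
#include <stdint.h>
#include <stdio.h>

#define ACC100_MAX_QUEUE_DEPTH 1024	/* assumed placeholder value */

struct aq_state {			/* hypothetical stand-in for acc100_queue */
	uint32_t aq_depth;		/* configured atomic queue depth */
	uint32_t aq_enqueued;		/* batches enqueued so far */
	uint32_t aq_dequeued;		/* batches dequeued so far */
};

/* Depth minus in-flight batches, minus one extra slot per 128 requested
 * operations (the num_ops >> 7 term in the patch). */
static int32_t
aq_avail_sketch(const struct aq_state *q, uint16_t num_ops)
{
	int32_t in_flight = (q->aq_enqueued - q->aq_dequeued +
			ACC100_MAX_QUEUE_DEPTH) % ACC100_MAX_QUEUE_DEPTH;
	return (int32_t)q->aq_depth - in_flight - (num_ops >> 7);
}

int main(void)
{
	struct aq_state q = { .aq_depth = 32, .aq_enqueued = 40, .aq_dequeued = 10 };

	/* 30 batches in flight against a depth of 32: 2 slots left. */
	printf("aq_avail = %d\n", (int)aq_avail_sketch(&q, 16));	/* prints 2 */

	q.aq_enqueued = 42;	/* AQ now full */
	printf("aq_avail = %d\n", (int)aq_avail_sketch(&q, 16));	/* prints 0 */
	return 0;
}

With this helper folded into every enqueue path, a caller that would overflow the AQ
gets 0 back (and the queue-full condition is reported via acc100_enqueue_queue_full())
instead of over-committing the queue manager.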