From: Nicolas Chautru <nicolas.chautru@intel.com>
To: dev@dpdk.org, maxime.coquelin@redhat.com
Cc: hernan.vargas@intel.com, Nicolas Chautru <nicolas.chautru@intel.com>, stable@dpdk.org
Subject: [PATCH v1 5/9] baseband/acc: prevent dequeueing more than requested
Date: Thu, 9 Feb 2023 22:19:25 +0000
Message-Id: <20230209221929.265059-6-nicolas.chautru@intel.com>
In-Reply-To: <20230209221929.265059-1-nicolas.chautru@intel.com>
References: <20230209221929.265059-1-nicolas.chautru@intel.com>

Handle the corner case where more operations could be dequeued than the
caller requested, which can happen when encoder operations are muxed
onto a single descriptor: the requested limit is now checked before each
descriptor is consumed rather than after.

Fixes: e640f6cdfa84 ("baseband/acc200: add LDPC processing")
Cc: stable@dpdk.org

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
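Illustration only, not part of the commit: the sketch below is a minimal,
self-contained model of the bounded dequeue loop. Every name in it
(desc_cbs, NB_DESCS, bounded_dequeue) is made up for this note and does not
come from the driver. It shows why the cap has to be checked before a
descriptor is consumed: a CB-mode descriptor can mux several code blocks,
so a check performed only after processing could hand the caller more
operations than it asked for.

/* Hypothetical model of the bounded dequeue: each "descriptor" muxes
 * desc_cbs[i] operations, mirroring how a CB-mode descriptor can carry
 * several code blocks. Names are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

static const uint16_t desc_cbs[] = { 4, 4, 3 };	/* ops muxed per descriptor */
#define NB_DESCS (sizeof(desc_cbs) / sizeof(desc_cbs[0]))

/* Dequeue descriptors until the ring is drained or accepting the next
 * descriptor would exceed the number of operations the caller requested.
 */
static uint16_t
bounded_dequeue(uint16_t requested)
{
	uint16_t dequeued = 0;
	unsigned int i;

	for (i = 0; i < NB_DESCS; i++) {
		/* Same idea as the patch: reject the descriptor up front if it
		 * would push the op count past what the caller asked for.
		 */
		if (dequeued + desc_cbs[i] > requested)
			break;
		dequeued += desc_cbs[i];
	}
	return dequeued;
}

int
main(void)
{
	/* A caller asking for 6 ops gets 4 (one whole descriptor); checking
	 * only after consuming descriptors would have returned 8.
	 */
	printf("dequeued %u of 6 requested\n", (unsigned int)bounded_dequeue(6));
	return 0;
}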
 drivers/baseband/acc/rte_vrb_pmd.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 8540e3d31c..b251ad25c6 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -2641,7 +2641,8 @@ vrb_enqueue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
 /* Dequeue one encode operations from device in CB mode. */
 static inline int
 vrb_dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
-		uint16_t *dequeued_ops, uint32_t *aq_dequeued, uint16_t *dequeued_descs)
+		uint16_t *dequeued_ops, uint32_t *aq_dequeued, uint16_t *dequeued_descs,
+		uint16_t max_requested_ops)
 {
 	union acc_dma_desc *desc, atom_desc;
 	union acc_dma_rsp_desc rsp;
@@ -2654,6 +2655,9 @@ vrb_dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 	desc = q->ring_addr + desc_idx;
 	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
 
+	if (*dequeued_ops + desc->req.numCBs > max_requested_ops)
+		return -1;
+
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
 		return -1;
@@ -2695,7 +2699,7 @@ vrb_dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 static inline int
 vrb_dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 		uint16_t *dequeued_ops, uint32_t *aq_dequeued,
-		uint16_t *dequeued_descs)
+		uint16_t *dequeued_descs, uint16_t max_requested_ops)
 {
 	union acc_dma_desc *desc, *last_desc, atom_desc;
 	union acc_dma_rsp_desc rsp;
@@ -2706,6 +2710,9 @@ vrb_dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 	desc = acc_desc_tail(q, *dequeued_descs);
 	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
 
+	if (*dequeued_ops + 1 > max_requested_ops)
+		return -1;
+
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
 		return -1;
@@ -2966,25 +2973,23 @@ vrb_dequeue_enc(struct rte_bbdev_queue_data *q_data,
 
 	cbm = op->turbo_enc.code_block_mode;
 
-	for (i = 0; i < num; i++) {
+	for (i = 0; i < avail; i++) {
 		if (cbm == RTE_BBDEV_TRANSPORT_BLOCK)
 			ret = vrb_dequeue_enc_one_op_tb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		else
 			ret = vrb_dequeue_enc_one_op_cb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		if (ret < 0)
 			break;
-		if (dequeued_ops >= num)
-			break;
 	}
 
 	q->aq_dequeued += aq_dequeued;
 	q->sw_ring_tail += dequeued_descs;
 
-	/* Update enqueue stats */
+	/* Update enqueue stats. */
 	q_data->queue_stats.dequeued_count += dequeued_ops;
 
 	return dequeued_ops;
@@ -3010,15 +3015,13 @@ vrb_dequeue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
 		if (cbm == RTE_BBDEV_TRANSPORT_BLOCK)
 			ret = vrb_dequeue_enc_one_op_tb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		else
 			ret = vrb_dequeue_enc_one_op_cb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		if (ret < 0)
 			break;
-		if (dequeued_ops >= num)
-			break;
 	}
 
 	q->aq_dequeued += aq_dequeued;
-- 
2.34.1