From: Hernan Vargas <hernan.vargas@intel.com>
To: dev@dpdk.org, gakhil@marvell.com, trix@redhat.com, maxime.coquelin@redhat.com
Cc: nicolas.chautru@intel.com, qi.z.zhang@intel.com, Hernan Vargas <hernan.vargas@intel.com>, stable@dpdk.org
Subject: [PATCH v2 3/9] test/bbdev: fix interrupt tests
Date: Mon, 24 Jun 2024 08:02:31 -0700
Message-Id: <20240624150237.47169-4-hernan.vargas@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20240624150237.47169-1-hernan.vargas@intel.com>
References: <20240624150237.47169-1-hernan.vargas@intel.com>

Fix a possible error when setting the burst size from the enqueue
thread: write tp->burst_sz before enqueueing the operations, so that
the dequeue callback triggered by the completion interrupt reads the
burst size of the current batch rather than a stale value.
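To make the ordering concrete, below is a minimal, self-contained C
sketch of the pattern the patch enforces. It uses C11 atomics in place
of DPDK's rte_atomic_store_explicit(), and thread_params and
dequeue_event_callback here are simplified stand-ins for the test
app's structures, not the actual bbdev test code:

	#include <inttypes.h>
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Simplified stand-in for the test app's per-thread state. */
	struct thread_params {
		_Atomic uint16_t burst_sz; /* read by the interrupt callback */
		uint64_t processed;
	};

	/* Stand-in for the dequeue callback fired by the completion
	 * interrupt once the enqueued batch has been processed.
	 */
	static void
	dequeue_event_callback(struct thread_params *tp)
	{
		/* The callback sizes its dequeue burst from burst_sz, so
		 * the store below must happen before the enqueue that
		 * eventually triggers this callback.
		 */
		uint16_t burst = atomic_load_explicit(&tp->burst_sz,
				memory_order_relaxed);
		tp->processed += burst;
	}

	int
	main(void)
	{
		struct thread_params tp = { .burst_sz = 0, .processed = 0 };
		uint16_t num_to_enq = 32;

		/* Corrected order: publish the burst size first... */
		atomic_store_explicit(&tp.burst_sz, num_to_enq,
				memory_order_relaxed);

		/* ...then enqueue. In the real test this is the
		 * rte_bbdev_enqueue_*_ops() retry loop, and the device,
		 * not the caller, invokes the callback.
		 */
		dequeue_event_callback(&tp);

		printf("processed %" PRIu64 " ops\n", tp.processed);
		return 0;
	}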
Fixes: b2e2aec3239e ("app/bbdev: enhance interrupt test")
Cc: stable@dpdk.org

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/test_bbdev_perf.c | 98 ++++++++++++++++----------------
 1 file changed, 49 insertions(+), 49 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 9841464922ac..20cd8df19be7 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -3419,15 +3419,6 @@ throughput_intr_lcore_ldpc_dec(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_ldpc_dec_ops(
-					tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(num_to_enq != enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3438,6 +3429,15 @@ throughput_intr_lcore_ldpc_dec(void *arg)
 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
 				rte_memory_order_relaxed);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_ldpc_dec_ops(
+					tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(num_to_enq != enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3514,14 +3514,6 @@ throughput_intr_lcore_dec(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_dec_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(num_to_enq != enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3532,6 +3524,14 @@ throughput_intr_lcore_dec(void *arg)
 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
 				rte_memory_order_relaxed);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_dec_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(num_to_enq != enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3603,14 +3603,6 @@ throughput_intr_lcore_enc(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_enc_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3621,6 +3613,14 @@ throughput_intr_lcore_enc(void *arg)
 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
 				rte_memory_order_relaxed);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_enc_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3694,15 +3694,6 @@ throughput_intr_lcore_ldpc_enc(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_ldpc_enc_ops(
-					tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3713,6 +3704,15 @@ throughput_intr_lcore_ldpc_enc(void *arg)
 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
 				rte_memory_order_relaxed);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_ldpc_enc_ops(
+					tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3786,14 +3786,6 @@ throughput_intr_lcore_fft(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_fft_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3804,6 +3796,14 @@ throughput_intr_lcore_fft(void *arg)
 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
 				rte_memory_order_relaxed);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_fft_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3872,13 +3872,6 @@ throughput_intr_lcore_mldts(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
-					queue_id, &ops[enqueued], num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3889,6 +3882,13 @@ throughput_intr_lcore_mldts(void *arg)
 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
 				rte_memory_order_relaxed);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
+					queue_id, &ops[enqueued], num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
-- 
2.37.1