Subject: Re: [PATCH v2 13/16] test/bbdev: remove iteration count check
From: Maxime Coquelin
To: Hernan Vargas, dev@dpdk.org, gakhil@marvell.com, trix@redhat.com
Cc: nicolas.chautru@intel.com, qi.z.zhang@intel.com
Date: Wed, 22 Feb 2023 11:55:38 +0100
Message-ID: <18b9df0b-29e2-90d3-e233-5b540a0d765a@redhat.com>
In-Reply-To: <20230215170949.60569-14-hernan.vargas@intel.com>
References: <20230215170949.60569-1-hernan.vargas@intel.com> <20230215170949.60569-14-hernan.vargas@intel.com>
List-Id: DPDK patches and discussions

On 2/15/23 18:09, Hernan Vargas wrote:
> To make the test compatible with devices that do not support early
> termination, the iteration count assert can be removed.
>
> Signed-off-by: Hernan Vargas
> ---
>  app/test-bbdev/test_bbdev_perf.c | 24 ++++++++----------------
>  1 file changed, 8 insertions(+), 16 deletions(-)
>
> diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
> index 2ce1c7e7d3..5259404ff6 100644
> --- a/app/test-bbdev/test_bbdev_perf.c
> +++ b/app/test-bbdev/test_bbdev_perf.c
> @@ -2288,7 +2288,7 @@ validate_op_so_chain(struct rte_bbdev_op_data *op,
>  
>  static int
>  validate_dec_op(struct rte_bbdev_dec_op **ops, const uint16_t n,
> -		struct rte_bbdev_dec_op *ref_op, const int vector_mask)
> +		struct rte_bbdev_dec_op *ref_op)
>  {
>  	unsigned int i;
>  	int ret;
> @@ -2299,17 +2299,12 @@ validate_dec_op(struct rte_bbdev_dec_op **ops, const uint16_t n,
>  	struct rte_bbdev_op_turbo_dec *ops_td;
>  	struct rte_bbdev_op_data *hard_output;
>  	struct rte_bbdev_op_data *soft_output;
> -	struct rte_bbdev_op_turbo_dec *ref_td = &ref_op->turbo_dec;
>  
>  	for (i = 0; i < n; ++i) {
>  		ops_td = &ops[i]->turbo_dec;
>  		hard_output = &ops_td->hard_output;
>  		soft_output = &ops_td->soft_output;
>  
> -		if (vector_mask & TEST_BBDEV_VF_EXPECTED_ITER_COUNT)
> -			TEST_ASSERT(ops_td->iter_count <= ref_td->iter_count,
> -					"Returned iter_count (%d) > expected iter_count (%d)",
> -					ops_td->iter_count, ref_td->iter_count);
>  		ret = check_dec_status_and_ordering(ops[i], i, ref_op->status);
>  		TEST_ASSERT_SUCCESS(ret,
> 				"Checking status and ordering for decoder failed");
> @@ -3058,8 +3053,7 @@ dequeue_event_callback(uint16_t dev_id,
>  
>  	if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC) {
>  		struct rte_bbdev_dec_op *ref_op = tp->op_params->ref_dec_op;
> -		ret = validate_dec_op(tp->dec_ops, num_ops, ref_op,
> -				tp->op_params->vector_mask);
> +		ret = validate_dec_op(tp->dec_ops, num_ops, ref_op);
>  		/* get the max of iter_count for all dequeued ops */
>  		for (i = 0; i < num_ops; ++i)
>  			tp->iter_count = RTE_MAX(
> @@ -3660,8 +3654,7 @@ throughput_pmd_lcore_dec(void *arg)
>  	}
>  
>  	if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
> -		ret = validate_dec_op(ops_deq, num_ops, ref_op,
> -				tp->op_params->vector_mask);
> +		ret = validate_dec_op(ops_deq, num_ops, ref_op);
>  		TEST_ASSERT_SUCCESS(ret, "Validation failed!");
>  	}
>  
> @@ -4649,7 +4642,7 @@ throughput_test(struct active_device *ad,
>  static int
>  latency_test_dec(struct rte_mempool *mempool,
>  		struct test_buffers *bufs, struct rte_bbdev_dec_op *ref_op,
> -		int vector_mask, uint16_t dev_id, uint16_t queue_id,
> +		uint16_t dev_id, uint16_t queue_id,
>  		const uint16_t num_to_process, uint16_t burst_sz,
>  		uint64_t *total_time, uint64_t *min_time, uint64_t *max_time, bool disable_et)
>  {
> @@ -4709,8 +4702,7 @@ latency_test_dec(struct rte_mempool *mempool,
>  	*total_time += last_time;
>  
>  	if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
> -		ret = validate_dec_op(ops_deq, burst_sz, ref_op,
> -				vector_mask);
> +		ret = validate_dec_op(ops_deq, burst_sz, ref_op);
>  		TEST_ASSERT_SUCCESS(ret, "Validation failed!");
>  	}
>  
> @@ -5065,9 +5057,9 @@ validation_latency_test(struct active_device *ad,
>  
>  	if (op_type == RTE_BBDEV_OP_TURBO_DEC)
>  		iter = latency_test_dec(op_params->mp, bufs,
> -				op_params->ref_dec_op, op_params->vector_mask,
> -				ad->dev_id, queue_id, num_to_process,
> -				burst_sz, &total_time, &min_time, &max_time, latency_flag);
> +				op_params->ref_dec_op, ad->dev_id, queue_id,
> +				num_to_process, burst_sz, &total_time,
> +				&min_time, &max_time, latency_flag);
>  	else if (op_type == RTE_BBDEV_OP_LDPC_ENC)
>  		iter = latency_test_ldpc_enc(op_params->mp, bufs,
>  			op_params->ref_enc_op, ad->dev_id, queue_id,

Reviewed-by: Maxime Coquelin

Thanks,
Maxime