From mboxrd@z Thu Jan 1 00:00:00 1970
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
To: declan.doherty@intel.com
Cc: dev@dpdk.org, Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Date: Thu, 16 Feb 2017 16:51:08 +0100
Message-Id: <1487260268-19846-1-git-send-email-slawomirx.mrozowicz@intel.com>
X-Mailer: git-send-email 1.9.1
Subject: [dpdk-dev] [PATCH] app/crypto-perf: fix invalid latency for QAT PMD
List-Id: DPDK patches and discussions

Fix an invalid latency result reported when the performance application
is used with the hardware QAT PMD. It occurred when the number of
processed packets was higher than the size of the internal QAT PMD ring
buffer and the ring overflowed.

Fix it by recording start timestamps only for the operations that were
actually enqueued, and by freeing the crypto operations that could not
be enqueued.

Fixes: f8be1786b1b8 ("app/crypto-perf: introduce performance test application")

Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
---
 app/test-crypto-perf/cperf_test_latency.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 5ec1b2c..239a8e5 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -424,7 +424,6 @@ cperf_latency_test_runner(void *arg)
 	struct rte_crypto_op *ops[ctx->options->burst_sz];
 	struct rte_crypto_op *ops_processed[ctx->options->burst_sz];
 	uint64_t ops_enqd = 0, ops_deqd = 0;
-	uint16_t ops_unused = 0;
 	uint64_t m_idx = 0, b_idx = 0, i;
 
 	uint64_t tsc_val, tsc_end, tsc_start;
@@ -460,19 +459,18 @@ cperf_latency_test_runner(void *arg)
 					ctx->options->burst_sz :
 					ctx->options->total_ops -
 					enqd_tot;
-			uint16_t ops_needed = burst_size - ops_unused;
 
 			/* Allocate crypto ops from pool */
-			if (ops_needed != rte_crypto_op_bulk_alloc(
+			if (burst_size != rte_crypto_op_bulk_alloc(
 					ctx->crypto_op_pool,
 					RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-					ops, ops_needed))
+					ops, burst_size))
 				return -1;
 
 			/* Setup crypto op, attach mbuf etc */
 			(ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
 					&ctx->mbufs_out[m_idx],
-					ops_needed, ctx->sess, ctx->options,
+					burst_size, ctx->sess, ctx->options,
 					ctx->test_vector);
 
 			tsc_start = rte_rdtsc_precise();
@@ -498,17 +496,15 @@ cperf_latency_test_runner(void *arg)
 
 			tsc_end = rte_rdtsc_precise();
 
-			for (i = 0; i < ops_needed; i++) {
+			for (i = 0; i < ops_enqd; i++) {
 				ctx->res[tsc_idx].tsc_start = tsc_start;
 				ops[i]->opaque_data = (void *)&ctx->res[tsc_idx];
 				tsc_idx++;
 			}
 
-			/*
-			 * Calculate number of ops not enqueued (mainly for hw
-			 * accelerators whose ingress queue can fill up).
-			 */
-			ops_unused = burst_size - ops_enqd;
+			/* Free memory for not enqueued operations */
+			for (i = ops_enqd; i < burst_size; i++)
+				rte_crypto_op_free(ops[i]);
 
 			if (likely(ops_deqd)) {
 				/*
@@ -535,7 +531,7 @@ cperf_latency_test_runner(void *arg)
 			enqd_max = max(ops_enqd, enqd_max);
 			enqd_min = min(ops_enqd, enqd_min);
 
-			m_idx += ops_needed;
+			m_idx += ops_enqd;
 			m_idx = m_idx + ctx->options->burst_sz >
 					ctx->options->pool_sz ? 0 : m_idx;
 			b_idx++;
-- 
2.5.0
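
Note: the following standalone sketch restates the corrected pattern
outside the test harness. It is an illustration, not code from the
application: enqueue_burst_sketch() and its pool/dev_id/qp_id
parameters are hypothetical stand-ins for the test context, and op
setup (session, mbufs, xforms) is elided. Only DPDK calls already
used by the patch above appear here.

#include <stdint.h>

#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_cycles.h>
#include <rte_mempool.h>

/*
 * Minimal sketch: allocate a full burst, offer it to the PMD, keep
 * records only for the ops the PMD accepted, and return the rest to
 * the mempool. Returns the number of enqueued ops, or -1 on
 * allocation failure.
 */
static int
enqueue_burst_sketch(struct rte_mempool *pool, uint8_t dev_id,
		uint16_t qp_id, uint16_t burst_size)
{
	struct rte_crypto_op *ops[burst_size];
	uint64_t tsc_start;
	uint16_t ops_enqd, i;

	/* Always allocate the full burst from the op mempool. */
	if (rte_crypto_op_bulk_alloc(pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC,
			ops, burst_size) != burst_size)
		return -1;

	/* ... attach mbufs, session and xforms to each op here ... */

	tsc_start = rte_rdtsc_precise();

	/*
	 * A hardware PMD such as QAT may accept fewer ops than offered
	 * once its ingress ring fills up, so ops_enqd <= burst_size.
	 */
	ops_enqd = rte_cryptodev_enqueue_burst(dev_id, qp_id,
			ops, burst_size);

	/*
	 * Tag only the ops actually in flight. Storing the timestamp
	 * directly in opaque_data is a simplification (and truncates
	 * on 32-bit); the patch stores a pointer to a per-op result
	 * slot instead.
	 */
	for (i = 0; i < ops_enqd; i++)
		ops[i]->opaque_data = (void *)(uintptr_t)tsc_start;

	/* Free the rejected ops so the mempool does not drain. */
	for (i = ops_enqd; i < burst_size; i++)
		rte_crypto_op_free(ops[i]);

	return ops_enqd;
}

With this shape the latency records describe only operations that
actually reached the device, which is the behaviour the patch restores
when the number of processed packets exceeds the QAT ring size.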