DPDK patches and discussions
From: Pablo de Lara <pablo.de.lara.guarch@intel.com>
To: dev@dpdk.org
Cc: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Subject: [dpdk-dev] [PATCH] app/crypto-perf: fix incorrect IV allocation
Date: Tue,  1 Aug 2017 01:33:36 +0100
Message-ID: <20170801003336.15796-1-pablo.de.lara.guarch@intel.com>

Memory is reserved after each crypto operation for the
IV(s) it needs, which may belong to a cipher, authentication
or AEAD algorithm. However, the IV length of AEAD algorithms
(such as AES-GCM) was not being included in this reservation,
so writing the AEAD IV could overflow the operation's
private data area.
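
For context (not part of the fix itself), a minimal sketch of the
mechanism involved, assuming placeholder IV lengths and pool
parameters: the IVs live in the private area that trails each
rte_crypto_op, so the pool's priv_size must cover every IV the
tests may write there.

#include <stdint.h>
#include <rte_crypto.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_op_pool(uint16_t cipher_iv_len, uint16_t auth_iv_len,
		uint16_t aead_iv_len)
{
	/* Every IV that may be copied into the op's private area
	 * must be counted; omitting aead_iv_len (the bug fixed
	 * here) lets the AEAD IV write run past the element. */
	uint16_t priv_size = cipher_iv_len + auth_iv_len +
			aead_iv_len;

	/* 8192 elements / 512-entry cache are placeholder values. */
	return rte_crypto_op_pool_create("op_pool",
			RTE_CRYPTO_OP_TYPE_SYMMETRIC, 8192, 512,
			priv_size, rte_socket_id());
}

/* The private area starts right after the symmetric op header;
 * this is where the tests write the IV(s). */
static uint8_t *
op_iv_ptr(struct rte_crypto_op *op)
{
	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
			sizeof(struct rte_crypto_sym_op);

	return rte_crypto_op_ctod_offset(op, uint8_t *, iv_offset);
}

The latency test additionally stores a struct priv_op_data in
front of the IVs, which is why its hunk below also keeps
sizeof(struct priv_op_data) in the sum.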

Fixes: 8a5b494a7f99 ("app/test-crypto-perf: add AEAD parameters")

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 app/test-crypto-perf/cperf_test_latency.c    | 3 ++-
 app/test-crypto-perf/cperf_test_throughput.c | 2 +-
 app/test-crypto-perf/cperf_test_verify.c     | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 20e9686..7e610d9 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -291,7 +291,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
 
 	uint16_t priv_size = sizeof(struct priv_op_data) +
 			test_vector->cipher_iv.length +
-			test_vector->auth_iv.length;
+			test_vector->auth_iv.length +
+			test_vector->aead_iv.length;
 	ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
 			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
 			512, priv_size, rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 85ed21a..3bb1cb0 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -271,7 +271,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
 			dev_id);
 
 	uint16_t priv_size = test_vector->cipher_iv.length +
-		test_vector->auth_iv.length;
+		test_vector->auth_iv.length + test_vector->aead_iv.length;
 
 	ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
 			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 1827e6f..a314646 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -275,7 +275,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
 			dev_id);
 
 	uint16_t priv_size = test_vector->cipher_iv.length +
-		test_vector->auth_iv.length;
+		test_vector->auth_iv.length + test_vector->aead_iv.length;
 	ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
 			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
 			512, priv_size, rte_socket_id());
-- 
2.9.4

Thread overview: 2+ messages
2017-08-01  0:33 Pablo de Lara [this message]
2017-08-03 21:48 ` Thomas Monjalon
