* [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements
@ 2017-08-18 8:05 Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 1/6] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
` (8 more replies)
0 siblings, 9 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
This patchset includes some improvements in the
Crypto performance application.
The last patch, in particular, introduces performance improvements.
Currently, crypto operations are allocated in one mempool and mbufs
in a different one.
Since crypto operations and mbufs are mapped 1:1, they can share
the same mempool object (similar to having the mbuf in the private
data of the crypto operation).
This improves performance, as only a single mempool needs to be
handled, improving cache usage.
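The single-mempool idea in the last patch can be sketched as follows. This is an illustrative layout only, with hypothetical stand-in types rather than the real DPDK structures: one mempool element holds both the crypto operation and its mbuf, so each operation costs one allocation from one mempool cache instead of two.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the DPDK structures, for illustration only. */
struct crypto_op { uint8_t opaque[64]; };
struct mbuf      { uint8_t opaque[128]; };

/* One mempool element holding both objects, so a single allocation
 * (and a single mempool's per-core cache) serves each operation. */
struct op_and_mbuf {
	struct crypto_op op; /* crypto operation first ...              */
	struct mbuf      m;  /* ... with its mbuf in the same element   */
};

/* Given the element, the mbuf is reachable with plain pointer
 * arithmetic instead of a second mempool get. */
static struct mbuf *
mbuf_of(struct op_and_mbuf *elt)
{
	return &elt->m;
}
```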
Pablo de Lara (6):
app/crypto-perf: set AAD after the crypto operation
app/crypto-perf: parse AEAD data from vectors
app/crypto-perf: parse segment size
app/crypto-perf: overwrite mbuf when verifying
app/crypto-perf: do not populate the mbufs at init
app/crypto-perf: use single mempool
app/test-crypto-perf/cperf_ops.c | 136 ++++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_options.h | 4 +-
app/test-crypto-perf/cperf_options_parsing.c | 45 +--
app/test-crypto-perf/cperf_test_latency.c | 365 ++++++++++------------
app/test-crypto-perf/cperf_test_throughput.c | 361 ++++++++++-----------
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++
app/test-crypto-perf/cperf_test_verify.c | 382 +++++++++++------------
doc/guides/tools/cryptoperf.rst | 6 +-
9 files changed, 717 insertions(+), 639 deletions(-)
--
2.9.4
* [dpdk-dev] [PATCH 1/6] app/crypto-perf: set AAD after the crypto operation
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
@ 2017-08-18 8:05 ` Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 2/6] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
` (7 subsequent siblings)
8 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Instead of prepending the AAD (Additional Authenticated Data)
to the mbuf, it is easier to set it after the crypto operation,
as it is a read-only value, like the IV, and it is then not
restricted by the size of the mbuf headroom.
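The resulting layout places the AAD in the crypto operation's private data, right after the IV region padded to a 16-byte boundary. A minimal model of the offset arithmetic in plain C (align_ceil mirrors RTE_ALIGN_CEIL; the numbers in the test are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Round v up to the next multiple of a (a power of two),
 * mirroring RTE_ALIGN_CEIL from the patch. */
static uint32_t
align_ceil(uint32_t v, uint32_t a)
{
	return (v + a - 1) & ~(a - 1);
}

/* The AAD lives after the IV in the op's private data, with the
 * IV region padded to a 16-byte boundary, as in cperf_set_ops_aead(). */
static uint32_t
aad_offset(uint32_t iv_offset, uint32_t iv_length)
{
	return iv_offset + align_ceil(iv_length, 16);
}
```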
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 16 ++++++++++++----
app/test-crypto-perf/cperf_test_latency.c | 16 ++++------------
app/test-crypto-perf/cperf_test_throughput.c | 15 +++------------
app/test-crypto-perf/cperf_test_verify.c | 20 ++++++--------------
4 files changed, 25 insertions(+), 42 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 88fb972..5be20d9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -307,6 +307,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint16_t iv_offset)
{
uint16_t i;
+ uint16_t aad_offset = iv_offset +
+ RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16);
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
@@ -318,11 +320,12 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
- sym_op->aead.data.offset =
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ sym_op->aead.data.offset = 0;
- sym_op->aead.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
- sym_op->aead.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
+ sym_op->aead.aad.data = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, aad_offset);
+ sym_op->aead.aad.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+ aad_offset);
if (options->aead_op == RTE_CRYPTO_AEAD_OP_DECRYPT) {
sym_op->aead.digest.data = test_vector->digest.data;
@@ -360,6 +363,11 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
memcpy(iv_ptr, test_vector->aead_iv.data,
test_vector->aead_iv.length);
+
+ /* Copy AAD after the IV */
+ memcpy(ops[i]->sym->aead.aad.data,
+ test_vector->aad.data,
+ test_vector->aad.length);
}
}
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 58b21ab..2a46af9 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -174,16 +174,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -289,10 +279,12 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = sizeof(struct priv_op_data) +
+ uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
test_vector->cipher_iv.length +
test_vector->auth_iv.length +
- test_vector->aead_iv.length;
+ test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 3bb1cb0..07aea6a 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -158,16 +158,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -270,8 +260,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index a314646..bc07eb6 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -162,16 +162,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -274,8 +264,10 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
@@ -362,9 +354,9 @@ cperf_verify_op(struct rte_crypto_op *op,
break;
case CPERF_AEAD:
cipher = 1;
- cipher_offset = vector->aad.length;
+ cipher_offset = 0;
auth = 1;
- auth_offset = vector->aad.length + options->test_buffer_size;
+ auth_offset = options->test_buffer_size;
break;
}
--
2.9.4
* [dpdk-dev] [PATCH 2/6] app/crypto-perf: parse AEAD data from vectors
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 1/6] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
@ 2017-08-18 8:05 ` Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 3/6] app/crypto-perf: parse segment size Pablo de Lara
` (6 subsequent siblings)
8 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara, stable
Since DPDK 17.08, there are specific parameters
for AEAD algorithms, such as AES-GCM. When verifying
crypto operations with test vectors, the parser
was not reading the AEAD data (such as the IV or key).
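The vector files use ``key =`` entries followed by comma-separated byte values; after this fix, AEAD entries such as the following are parsed as well (the byte values here are made up, for illustration only):

```
aead_key =
0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6,
0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6
aead_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b
```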
Fixes: 8a5b494a7f99 ("app/test-crypto-perf: add AEAD parameters")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 148a604..3952632 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -116,6 +116,20 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_key.data) {
+ printf("\naead_key =\n");
+ for (i = 0; i < test_vector->aead_key.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_key.length - 1))
+ printf("0x%02x", test_vector->aead_key.data[i]);
+ else
+ printf("0x%02x, ",
+ test_vector->aead_key.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->cipher_iv.data) {
printf("\ncipher_iv =\n");
for (i = 0; i < test_vector->cipher_iv.length; ++i) {
@@ -142,6 +156,19 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_iv.data) {
+ printf("\naead_iv =\n");
+ for (i = 0; i < test_vector->aead_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_iv.length - 1))
+ printf("0x%02x", test_vector->aead_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->aead_iv.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->ciphertext.data) {
printf("\nciphertext =\n");
for (i = 0; i < test_vector->ciphertext.length; ++i) {
@@ -345,6 +372,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
+ } else if (strstr(key_token, "aead_key")) {
+ rte_free(vector->aead_key.data);
+ vector->aead_key.data = data;
+ if (tc_found)
+ vector->aead_key.length = data_length;
+ else {
+ if (opts->aead_key_sz > data_length) {
+ printf("Global aead_key shorter than "
+ "aead_key_sz\n");
+ return -1;
+ }
+ vector->aead_key.length = opts->aead_key_sz;
+ }
+
} else if (strstr(key_token, "cipher_iv")) {
rte_free(vector->cipher_iv.data);
vector->cipher_iv.data = data;
@@ -373,6 +414,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_iv.length = opts->auth_iv_sz;
}
+ } else if (strstr(key_token, "aead_iv")) {
+ rte_free(vector->aead_iv.data);
+ vector->aead_iv.data = data;
+ if (tc_found)
+ vector->aead_iv.length = data_length;
+ else {
+ if (opts->aead_iv_sz > data_length) {
+ printf("Global aead iv shorter than "
+ "aead_iv_sz\n");
+ return -1;
+ }
+ vector->aead_iv.length = opts->aead_iv_sz;
+ }
+
} else if (strstr(key_token, "ciphertext")) {
rte_free(vector->ciphertext.data);
vector->ciphertext.data = data;
--
2.9.4
* [dpdk-dev] [PATCH 3/6] app/crypto-perf: parse segment size
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 1/6] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 2/6] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
@ 2017-08-18 8:05 ` Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 4/6] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
` (5 subsequent siblings)
8 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Instead of parsing the number of segments from the command line,
parse the segment size, as it is more usual to have
a fixed segment size, with different packet sizes then
requiring different numbers of segments.
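With a fixed segment size, the segment count follows from the total size to fit (largest buffer plus digest). A minimal model of the computation used in the test constructors, in plain C outside DPDK:

```c
#include <assert.h>
#include <stdint.h>

/* Number of fixed-size segments needed to hold the largest buffer
 * plus the digest, i.e. ceil(max_size / segment_sz), matching the
 * segment_nb computation in the test constructors of this patch. */
static uint32_t
segments_needed(uint32_t max_buffer_size, uint32_t digest_sz,
		uint32_t segment_sz)
{
	uint32_t max_size = max_buffer_size + digest_sz;

	return (max_size % segment_sz) ?
			(max_size / segment_sz) + 1 :
			max_size / segment_sz;
}
```

For example, a 2048-byte buffer with a 16-byte digest and 512-byte segments needs five segments, since 2064 bytes do not fit evenly into four.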
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 24 ++++++++
app/test-crypto-perf/cperf_options.h | 4 +-
app/test-crypto-perf/cperf_options_parsing.c | 38 +++++++++----
app/test-crypto-perf/cperf_test_latency.c | 82 +++++++++++++++++-----------
app/test-crypto-perf/cperf_test_throughput.c | 82 +++++++++++++++++-----------
app/test-crypto-perf/cperf_test_verify.c | 82 +++++++++++++++++-----------
doc/guides/tools/cryptoperf.rst | 6 +-
7 files changed, 207 insertions(+), 111 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 5be20d9..ad32065 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -175,6 +175,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -256,6 +264,14 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -346,6 +362,14 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 10cd2d8..5f2b28b 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -11,7 +11,7 @@
#define CPERF_TOTAL_OPS ("total-ops")
#define CPERF_BURST_SIZE ("burst-sz")
#define CPERF_BUFFER_SIZE ("buffer-sz")
-#define CPERF_SEGMENTS_NB ("segments-nb")
+#define CPERF_SEGMENT_SIZE ("segment-sz")
#define CPERF_DEVTYPE ("devtype")
#define CPERF_OPTYPE ("optype")
@@ -66,7 +66,7 @@ struct cperf_options {
uint32_t pool_sz;
uint32_t total_ops;
- uint32_t segments_nb;
+ uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t sessionless:1;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 085aa8f..dbe87df 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -322,17 +322,17 @@ parse_buffer_sz(struct cperf_options *opts, const char *arg)
}
static int
-parse_segments_nb(struct cperf_options *opts, const char *arg)
+parse_segment_sz(struct cperf_options *opts, const char *arg)
{
- int ret = parse_uint32_t(&opts->segments_nb, arg);
+ int ret = parse_uint32_t(&opts->segment_sz, arg);
if (ret) {
- RTE_LOG(ERR, USER1, "failed to parse segments number\n");
+ RTE_LOG(ERR, USER1, "failed to parse segment size\n");
return -1;
}
- if ((opts->segments_nb == 0) || (opts->segments_nb > 255)) {
- RTE_LOG(ERR, USER1, "invalid segments number specified\n");
+ if (opts->segment_sz == 0) {
+ RTE_LOG(ERR, USER1, "Segment size has to be bigger than 0\n");
return -1;
}
@@ -640,7 +640,7 @@ static struct option lgopts[] = {
{ CPERF_TOTAL_OPS, required_argument, 0, 0 },
{ CPERF_BURST_SIZE, required_argument, 0, 0 },
{ CPERF_BUFFER_SIZE, required_argument, 0, 0 },
- { CPERF_SEGMENTS_NB, required_argument, 0, 0 },
+ { CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
{ CPERF_OPTYPE, required_argument, 0, 0 },
@@ -697,7 +697,11 @@ cperf_options_default(struct cperf_options *opts)
opts->min_burst_size = 32;
opts->inc_burst_size = 0;
- opts->segments_nb = 1;
+ /*
+ * Will be parsed from command line or set to
+ * maximum buffer size + digest, later
+ */
+ opts->segment_sz = 0;
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
@@ -739,7 +743,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_TOTAL_OPS, parse_total_ops },
{ CPERF_BURST_SIZE, parse_burst_sz },
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
- { CPERF_SEGMENTS_NB, parse_segments_nb },
+ { CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DEVTYPE, parse_device_type },
{ CPERF_OPTYPE, parse_op_type },
{ CPERF_SESSIONLESS, parse_sessionless },
@@ -847,9 +851,21 @@ check_cipher_buffer_length(struct cperf_options *options)
int
cperf_options_check(struct cperf_options *options)
{
- if (options->segments_nb > options->min_buffer_size) {
+ if (options->op_type == CPERF_CIPHER_ONLY)
+ options->digest_sz = 0;
+
+ /*
+ * If segment size is not set, assume only one segment,
+ * big enough to contain the largest buffer and the digest
+ */
+ if (options->segment_sz == 0)
+ options->segment_sz = options->max_buffer_size +
+ options->digest_sz;
+
+ if (options->segment_sz < options->digest_sz) {
RTE_LOG(ERR, USER1,
- "Segments number greater than buffer size.\n");
+ "Segment size should be at least "
+ "the size of the digest\n");
return -EINVAL;
}
@@ -965,7 +981,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("%u ", opts->burst_size_list[size_idx]);
printf("\n");
}
- printf("\n# segments per buffer: %u\n", opts->segments_nb);
+ printf("\n# segment size: %u\n", opts->segment_sz);
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 2a46af9..b272bb1 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -116,18 +116,18 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -137,11 +137,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -154,22 +161,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -217,13 +234,14 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -236,7 +254,9 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -251,9 +271,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -267,8 +285,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -339,7 +357,7 @@ cperf_latency_test_runner(void *arg)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 07aea6a..d5e93f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -100,18 +100,18 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -121,11 +121,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -138,22 +145,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -200,13 +217,14 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -218,7 +236,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -232,9 +252,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -248,8 +266,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -297,7 +315,7 @@ cperf_throughput_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index bc07eb6..6f790ce 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -104,18 +104,18 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -125,11 +125,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -142,22 +149,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ if (m == NULL)
+ goto error;
+
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -204,13 +221,14 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -222,7 +240,9 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -236,9 +256,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -252,8 +270,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -405,7 +423,7 @@ cperf_verify_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 457f817..23b2b98 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -170,9 +170,11 @@ The following are the application command-line options:
* List of values, up to 32 values, separated by commas (e.g. ``--buffer-sz 32,64,128``)
-* ``--segments-nb <n>``
+* ``--segment-sz <n>``
- Set the number of segments per packet.
+ Set the size of each segment, for Scatter-Gather List testing.
+ By default, it is set to the maximum buffer size plus the digest size,
+ so a single segment is created.
* ``--devtype <name>``
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH 4/6] app/crypto-perf: overwrite mbuf when verifying
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
` (2 preceding siblings ...)
2017-08-18 8:05 ` [dpdk-dev] [PATCH 3/6] app/crypto-perf: parse segment size Pablo de Lara
@ 2017-08-18 8:05 ` Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 5/6] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
` (4 subsequent siblings)
8 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
When running the verify test, the mbufs in the pool were
populated with the test vector loaded from a file.
To avoid limiting the number of operations to the pool size,
the mbufs are now rewritten with the test vector before
being linked to the crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options_parsing.c | 7 ------
app/test-crypto-perf/cperf_test_verify.c | 35 ++++++++++++++++++++++++++++
2 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index dbe87df..25a66c9 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -898,13 +898,6 @@ cperf_options_check(struct cperf_options *options)
}
if (options->test == CPERF_TEST_TYPE_VERIFY &&
- options->total_ops > options->pool_sz) {
- RTE_LOG(ERR, USER1, "Total number of ops must be less than or"
- " equal to the pool size.\n");
- return -EINVAL;
- }
-
- if (options->test == CPERF_TEST_TYPE_VERIFY &&
(options->inc_buffer_size != 0 ||
options->buffer_size_count > 1)) {
RTE_LOG(ERR, USER1, "Only one buffer size is allowed when "
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 6f790ce..03474cb 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -187,6 +187,34 @@ cperf_mbuf_create(struct rte_mempool *mempool,
return NULL;
}
+static void
+cperf_mbuf_set(struct rte_mbuf *mbuf,
+ const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector)
+{
+ uint32_t segment_sz = options->segment_sz;
+ uint8_t *mbuf_data;
+ uint8_t *test_data =
+ (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ test_vector->plaintext.data :
+ test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
+
+ while (remaining_bytes) {
+ mbuf_data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ return;
+ }
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ mbuf = mbuf->next;
+ }
+}
+
void *
cperf_verify_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -469,6 +497,13 @@ cperf_verify_test_runner(void *test_ctx)
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
+
+ /* Populate the mbuf with the test vector, for verification */
+ for (i = 0; i < ops_needed; i++)
+ cperf_mbuf_set(ops[i]->sym->m_src,
+ ctx->options,
+ ctx->test_vector);
+
#ifdef CPERF_LINEARIZATION_ENABLE
if (linearize) {
/* PMD doesn't support scatter-gather and source buffer
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH 5/6] app/crypto-perf: do not populate the mbufs at init
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
` (3 preceding siblings ...)
2017-08-18 8:05 ` [dpdk-dev] [PATCH 4/6] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
@ 2017-08-18 8:05 ` Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool Pablo de Lara
` (3 subsequent siblings)
8 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
For the throughput and latency tests, it is not necessary
to populate the mbufs with any test vector.
For the verify test, there is already a function that rewrites
the mbufs each time before they are used with
crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_latency.c | 31 ++++++++--------------------
app/test-crypto-perf/cperf_test_throughput.c | 31 ++++++++--------------------
app/test-crypto-perf/cperf_test_verify.c | 31 ++++++++--------------------
3 files changed, 27 insertions(+), 66 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index b272bb1..997844a 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -118,15 +118,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -137,15 +132,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -161,15 +152,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -257,7 +244,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -286,7 +273,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index d5e93f7..121ceb1 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -102,15 +102,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -121,15 +116,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -145,15 +136,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -239,7 +226,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -267,7 +254,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 03474cb..b18426c 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -106,15 +106,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -125,15 +120,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -149,15 +140,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -271,7 +258,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -299,7 +286,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
` (4 preceding siblings ...)
2017-08-18 8:05 ` [dpdk-dev] [PATCH 5/6] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
@ 2017-08-18 8:05 ` Pablo de Lara
2017-08-30 8:30 ` Akhil Goyal
2017-09-04 13:08 ` [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Zhang, Roy Fan
` (2 subsequent siblings)
8 siblings, 1 reply; 49+ messages in thread
From: Pablo de Lara @ 2017-08-18 8:05 UTC (permalink / raw)
To: declan.doherty, fiona.trahe, deepak.k.jain, john.griffin,
jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
In order to improve memory utilization, a single mempool
is created, with each object containing the crypto operation
and its mbufs (one if the operation is in-place, two if out-of-place).
This way, a single object is allocated and freed
per operation, reducing the cache footprint and
improving scalability.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 96 ++++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_test_latency.c | 350 ++++++++++++--------------
app/test-crypto-perf/cperf_test_throughput.c | 347 ++++++++++++--------------
app/test-crypto-perf/cperf_test_verify.c | 356 ++++++++++++---------------
5 files changed, 553 insertions(+), 598 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index ad32065..f76dbdd 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -37,7 +37,7 @@
static int
cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -48,10 +48,18 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
sym_op->cipher.data.length = options->test_buffer_size;
@@ -63,7 +71,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_null_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -74,10 +82,18 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* auth parameters */
sym_op->auth.data.length = options->test_buffer_size;
@@ -89,7 +105,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -100,10 +116,18 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -132,7 +156,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -143,10 +167,18 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
if (test_vector->auth_iv.length) {
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
@@ -167,9 +199,9 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -219,7 +251,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -230,10 +262,18 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -256,9 +296,9 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -316,7 +356,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_aead(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -329,10 +369,18 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
@@ -354,9 +402,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 1f8fa93..94951cc 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -47,7 +47,7 @@ typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
uint16_t iv_offset);
typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 997844a..2415d77 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -50,17 +50,15 @@ struct cperf_latency_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
struct cperf_op_result *res;
@@ -74,116 +72,128 @@ struct priv_op_data {
#define min(a, b) (a < b ? (uint64_t)a : (uint64_t)b)
static void
-cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
+cperf_latency_test_free(struct cperf_latency_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx->res);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segment_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr = (phys_addr_t)((uint8_t *)next_seg_phys_addr +
+ mbuf_hdr_size + segment_sz);
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segment_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -194,7 +204,6 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_latency_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
ctx = rte_malloc(NULL, sizeof(struct cperf_latency_ctx), 0);
@@ -218,83 +227,52 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = sizeof(struct priv_op_data) +
+ test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
- uint32_t segment_nb = (max_size % options->segment_sz) ?
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segment_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segment_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name),
- "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
-
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
- }
-
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
+ snprintf(pool_name, sizeof(pool_name), "pool_in_cdev_%d",
dev_id);
- uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
- test_vector->cipher_iv.length +
- test_vector->auth_iv.length +
- test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
+ }
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
+ ctx->pool = rte_mempool_create(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ NULL, NULL, mempool_obj_init,
+ (void *)&params,
+ rte_socket_id(), 0);
- if (ctx->crypto_op_pool == NULL)
+ if (ctx->pool == NULL)
goto err;
ctx->res = rte_malloc(NULL, sizeof(struct cperf_op_result) *
@@ -305,7 +283,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
return ctx;
err:
- cperf_latency_test_free(ctx, mbuf_idx);
+ cperf_latency_test_free(ctx);
return NULL;
}
@@ -370,7 +348,7 @@ cperf_latency_test_runner(void *arg)
while (test_burst_size <= ctx->options->max_burst_size) {
uint64_t ops_enqd = 0, ops_deqd = 0;
- uint64_t m_idx = 0, b_idx = 0;
+ uint64_t b_idx = 0;
uint64_t tsc_val, tsc_end, tsc_start;
uint64_t tsc_max = 0, tsc_min = ~0UL, tsc_tot = 0, tsc_idx = 0;
@@ -385,11 +363,9 @@ cperf_latency_test_runner(void *arg)
ctx->options->total_ops -
enqd_tot;
- /* Allocate crypto ops from pool */
- if (burst_size != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ burst_size) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -399,8 +375,8 @@ cperf_latency_test_runner(void *arg)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
burst_size, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -429,7 +405,7 @@ cperf_latency_test_runner(void *arg)
/* Free memory for not enqueued operations */
if (ops_enqd != burst_size)
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)&ops[ops_enqd],
burst_size - ops_enqd);
@@ -445,16 +421,11 @@ cperf_latency_test_runner(void *arg)
}
if (likely(ops_deqd)) {
- /*
- * free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
+ /* free crypto ops so they can be reused. */
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -466,9 +437,6 @@ cperf_latency_test_runner(void *arg)
enqd_max = max(ops_enqd, enqd_max);
enqd_min = min(ops_enqd, enqd_min);
- m_idx += ops_enqd;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
b_idx++;
}
@@ -487,7 +455,7 @@ cperf_latency_test_runner(void *arg)
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -585,5 +553,5 @@ cperf_latency_test_destructor(void *arg)
rte_cryptodev_stop(ctx->dev_id);
- cperf_latency_test_free(ctx, ctx->options->pool_sz);
+ cperf_latency_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 121ceb1..46889c4 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -43,131 +43,141 @@ struct cperf_throughput_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
static void
-cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
+cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segment_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr = (phys_addr_t)((uint8_t *)next_seg_phys_addr +
+ mbuf_hdr_size + segment_sz);
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ remaining_segments--;
+ if (remaining_segments > 0) {
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ } else
+ m->next = NULL;
+
+ } while (remaining_segments > 0);
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segment_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -178,7 +188,6 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_throughput_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
ctx = rte_malloc(NULL, sizeof(struct cperf_throughput_ctx), 0);
@@ -201,83 +210,56 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
- uint32_t segment_nb = (max_size % options->segment_sz) ?
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segment_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segment_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
+ snprintf(pool_name, sizeof(pool_name), "pool_in_cdev_%d",
+ dev_id);
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ ctx->pool = rte_mempool_create(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ NULL, NULL, mempool_obj_init,
+ (void *)&params,
+ rte_socket_id(), 0);
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ if (ctx->pool == NULL)
goto err;
return ctx;
err:
- cperf_throughput_test_free(ctx, mbuf_idx);
+ cperf_throughput_test_free(ctx);
return NULL;
}
@@ -329,7 +311,7 @@ cperf_throughput_test_runner(void *test_ctx)
uint64_t ops_enqd = 0, ops_enqd_total = 0, ops_enqd_failed = 0;
uint64_t ops_deqd = 0, ops_deqd_total = 0, ops_deqd_failed = 0;
- uint64_t m_idx = 0, tsc_start, tsc_end, tsc_duration;
+ uint64_t tsc_start, tsc_end, tsc_duration;
uint16_t ops_unused = 0;
@@ -345,11 +327,9 @@ cperf_throughput_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -359,10 +339,11 @@ cperf_throughput_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
- ops_needed, ctx->sess, ctx->options,
- ctx->test_vector, iv_offset);
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
+ ops_needed, ctx->sess,
+ ctx->options, ctx->test_vector,
+ iv_offset);
/**
* When ops_needed is smaller than ops_enqd, the
@@ -407,12 +388,8 @@ cperf_throughput_test_runner(void *test_ctx)
ops_processed, test_burst_size);
if (likely(ops_deqd)) {
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
@@ -425,9 +402,6 @@ cperf_throughput_test_runner(void *test_ctx)
ops_deqd_failed++;
}
- m_idx += ops_needed;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
}
/* Dequeue any operations still in the crypto device */
@@ -442,9 +416,8 @@ cperf_throughput_test_runner(void *test_ctx)
if (ops_deqd == 0)
ops_deqd_failed++;
else {
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
-
ops_deqd_total += ops_deqd;
}
}
@@ -532,5 +505,5 @@ cperf_throughput_test_destructor(void *arg)
rte_cryptodev_stop(ctx->dev_id);
- cperf_throughput_test_free(ctx, ctx->options->pool_sz);
+ cperf_throughput_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index b18426c..aa065ed 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -43,135 +43,141 @@ struct cperf_verify_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
-struct cperf_op_result {
- enum rte_crypto_op_status status;
-};
-
static void
-cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
+cperf_verify_test_free(struct cperf_verify_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segment_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr = (phys_addr_t)((uint8_t *)next_seg_phys_addr +
+ mbuf_hdr_size + segment_sz);
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ remaining_segments--;
+ if (remaining_segments > 0) {
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ } else
+ m->next = NULL;
+
+ } while (remaining_segments > 0);
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segment_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
static void
@@ -210,7 +216,6 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_verify_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
ctx = rte_malloc(NULL, sizeof(struct cperf_verify_ctx), 0);
@@ -224,7 +229,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->options = options;
ctx->test_vector = test_vector;
- /* IV goes at the end of the cryptop operation */
+ /* IV goes at the end of the crypto operation */
uint16_t iv_offset = sizeof(struct rte_crypto_op) +
sizeof(struct rte_crypto_sym_op);
@@ -233,83 +238,56 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
- uint32_t segment_nb = (max_size % options->segment_sz) ?
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segment_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segment_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
+ snprintf(pool_name, sizeof(pool_name), "pool_in_cdev_%d",
+ dev_id);
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ ctx->pool = rte_mempool_create(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ NULL, NULL, mempool_obj_init,
+ (void *)&params,
+ rte_socket_id(), 0);
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ if (ctx->pool == NULL)
goto err;
return ctx;
err:
- cperf_verify_test_free(ctx, mbuf_idx);
+ cperf_verify_test_free(ctx);
return NULL;
}
@@ -425,7 +403,7 @@ cperf_verify_test_runner(void *test_ctx)
static int only_once;
- uint64_t i, m_idx = 0;
+ uint64_t i;
uint16_t ops_unused = 0;
struct rte_crypto_op *ops[ctx->options->max_burst_size];
@@ -465,11 +443,9 @@ cperf_verify_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -479,8 +455,8 @@ cperf_verify_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -520,10 +496,6 @@ cperf_verify_test_runner(void *test_ctx)
ops_deqd = rte_cryptodev_dequeue_burst(ctx->dev_id, ctx->qp_id,
ops_processed, ctx->options->max_burst_size);
- m_idx += ops_needed;
- if (m_idx + ctx->options->max_burst_size > ctx->options->pool_sz)
- m_idx = 0;
-
if (ops_deqd == 0) {
/**
* Count dequeue polls which didn't return any
@@ -538,13 +510,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -566,13 +535,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -628,5 +594,5 @@ cperf_verify_test_destructor(void *arg)
rte_cryptodev_stop(ctx->dev_id);
- cperf_verify_test_free(ctx, ctx->options->pool_sz);
+ cperf_verify_test_free(ctx);
}
--
2.9.4
* Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool
2017-08-18 8:05 ` [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool Pablo de Lara
@ 2017-08-30 8:30 ` Akhil Goyal
[not found] ` <9F7182E3F746AB4EA17801C148F3C60433039119@IRSMSX101.ger.corp.intel.com>
0 siblings, 1 reply; 49+ messages in thread
From: Akhil Goyal @ 2017-08-30 8:30 UTC (permalink / raw)
To: Pablo de Lara, declan.doherty, fiona.trahe, deepak.k.jain,
john.griffin, jerin.jacob, hemant.agrawal
Cc: dev
Hi Pablo,
On 8/18/2017 1:35 PM, Pablo de Lara wrote:
> In order to improve memory utilization, a single mempool
> is created, containing the crypto operation and mbufs
> (one if operation is in-place, two if out-of-place).
> This way, a single object is allocated and freed
> per operation, reducing the amount of memory in cache,
> which improves scalability.
>
> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> ---
> app/test-crypto-perf/cperf_ops.c | 96 ++++++--
> app/test-crypto-perf/cperf_ops.h | 2 +-
> app/test-crypto-perf/cperf_test_latency.c | 350 ++++++++++++--------------
> app/test-crypto-perf/cperf_test_throughput.c | 347 ++++++++++++--------------
> app/test-crypto-perf/cperf_test_verify.c | 356 ++++++++++++---------------
> 5 files changed, 553 insertions(+), 598 deletions(-)
>
NACK.
This patch replaces rte_pktmbuf_pool_create with rte_mempool_create
for mbufs, which is not the preferred way to allocate memory for pktmbufs.
No example/test application in DPDK should use it, as such usage
will not be compatible with all DPDK drivers in general.
In particular, rte_mempool_create will not work for devices using
hw-offloaded memory pools for pktmbufs;
one such example is dpaa2.
-Akhil
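[Editor's note: the helper Akhil refers to is the dedicated pktmbuf pool constructor. A rough sketch of its use against the DPDK 17.08 API follows; it is not code from this thread, requires the DPDK headers to build, and the parameter values are illustrative only.]

```c
/* rte_pktmbuf_pool_create() selects the mempool handler registered
 * for mbufs (a ring, or a hardware-backed pool such as dpaa2's),
 * so the resulting pool works across all drivers. */
struct rte_mempool *mp = rte_pktmbuf_pool_create(
		"cperf_pool",                       /* pool name */
		pool_sz,                            /* number of mbufs */
		512,                                /* per-lcore cache size */
		0,                                  /* app private area size */
		RTE_PKTMBUF_HEADROOM + segment_sz,  /* data room per mbuf */
		rte_socket_id());
```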
* Re: [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
` (5 preceding siblings ...)
2017-08-18 8:05 ` [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool Pablo de Lara
@ 2017-09-04 13:08 ` Zhang, Roy Fan
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
8 siblings, 0 replies; 49+ messages in thread
From: Zhang, Roy Fan @ 2017-09-04 13:08 UTC (permalink / raw)
To: De Lara Guarch, Pablo; +Cc: dev, Zhang, Roy Fan
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pablo de Lara
> Sent: Friday, August 18, 2017 9:05 AM
> To: Doherty, Declan <declan.doherty@intel.com>; Trahe, Fiona
> <fiona.trahe@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>; Griffin,
> John <john.griffin@intel.com>; jerin.jacob@caviumnetworks.com;
> akhil.goyal@nxp.com; hemant.agrawal@nxp.com
> Cc: dev@dpdk.org; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Subject: [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements
>
> This patchset includes some improvements in the Crypto performance
> application.
>
> The last patch, in particular, introduces performance improvements.
> Currently, crypto operations are allocated in a mempool and mbufs in a
> different one.
> Since crypto operations and mbufs are mapped 1:1, they can share the same
> mempool object (similar to having the mbuf in the private data of the crypto
> operation).
> This improves performance, as it is only required to handle a single mempool,
> improving cache usage.
>
> Pablo de Lara (6):
> app/crypto-perf: set AAD after the crypto operation
> app/crypto-perf: parse AEAD data from vectors
> app/crypto-perf: parse segment size
> app/crypto-perf: overwrite mbuf when verifying
> app/crypto-perf: do not populate the mbufs at init
> app/crypto-perf: use single mempool
>
> app/test-crypto-perf/cperf_ops.c | 136 ++++++--
> app/test-crypto-perf/cperf_ops.h | 2 +-
> app/test-crypto-perf/cperf_options.h | 4 +-
> app/test-crypto-perf/cperf_options_parsing.c | 45 +--
> app/test-crypto-perf/cperf_test_latency.c | 365 ++++++++++------------
> app/test-crypto-perf/cperf_test_throughput.c | 361 ++++++++++----------
> -
> app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++
> app/test-crypto-perf/cperf_test_verify.c | 382 +++++++++++------------
> doc/guides/tools/cryptoperf.rst | 6 +-
> 9 files changed, 717 insertions(+), 639 deletions(-)
>
> --
> 2.9.4
Series-Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool
[not found] ` <9F7182E3F746AB4EA17801C148F3C60433039119@IRSMSX101.ger.corp.intel.com>
@ 2017-09-11 11:08 ` De Lara Guarch, Pablo
2017-09-11 13:10 ` Shreyansh Jain
0 siblings, 1 reply; 49+ messages in thread
From: De Lara Guarch, Pablo @ 2017-09-11 11:08 UTC (permalink / raw)
To: Akhil Goyal, Doherty, Declan, Trahe, Fiona, Jain, Deepak K,
Griffin, John, jerin.jacob, hemant.agrawal
Cc: dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
> Sent: Wednesday, August 30, 2017 9:31 AM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>; Trahe, Fiona
> <fiona.trahe@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>;
> Griffin, John <john.griffin@intel.com>;
> jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single
> mempool
>
> Hi Pablo,
> On 8/18/2017 1:35 PM, Pablo de Lara wrote:
> > In order to improve memory utilization, a single mempool is created,
> > containing the crypto operation and mbufs (one if operation is
> > in-place, two if out-of-place).
> > This way, a single object is allocated and freed per operation,
> > reducing the amount of memory in cache, which improves scalability.
> >
> > Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> > ---
> > app/test-crypto-perf/cperf_ops.c | 96 ++++++--
> > app/test-crypto-perf/cperf_ops.h | 2 +-
> > app/test-crypto-perf/cperf_test_latency.c | 350 ++++++++++++-------
---
> ----
> > app/test-crypto-perf/cperf_test_throughput.c | 347
> > ++++++++++++------
> --------
> > app/test-crypto-perf/cperf_test_verify.c | 356 ++++++++++++--------
---
> ----
> > 5 files changed, 553 insertions(+), 598 deletions(-)
> >
> NACK.
> This patch replaces rte_pktmbuf_pool_create with the
> rte_mempool_create for mbufs, which is not a preferred way to allocate
memory for pktmbuf.
>
> Any example/test application in DPDK should not be using this, as this
> kind of usages will not be compatible for all dpdk drivers in general.
>
> This kind of usages of rte_mempool_create will not work for any
> devices using hw offloaded memory pools for pktmbuf.
> one such example is dpaa2.
Hi Akhil,
Sorry for the delay on this reply and thanks for the review.
I think, since we are not getting the buffers from the NIC, but are allocating
them ourselves, it is not strictly required to call rte_pktmbuf_pool_create.
In the end, we only need them as memory for the crypto PMDs and we are not touching
anything in them, so I think calling rte_mempool_create should work OK.
Having a single mempool would be way more performant and would avoid the scalability
issues that we are having in this application now, and knowing that this application
was created to test crypto PMD performance, I think it is worth trying this out.
What exactly is needed for dpaa2? Is it the mempool handler?
Would it work for you if I create the mempool in a similar way as what
rte_pktmbuf_pool_create is doing? Calling rte_mempool_set_ops_byname?
Thanks!
Pablo
>
> -Akhil
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool
2017-09-11 11:08 ` De Lara Guarch, Pablo
@ 2017-09-11 13:10 ` Shreyansh Jain
2017-09-11 13:56 ` De Lara Guarch, Pablo
0 siblings, 1 reply; 49+ messages in thread
From: Shreyansh Jain @ 2017-09-11 13:10 UTC (permalink / raw)
To: De Lara Guarch, Pablo, Akhil Goyal
Cc: Doherty, Declan, Trahe, Fiona, Jain, Deepak K, Griffin, John,
jerin.jacob, hemant.agrawal, dev
Hello Pablo,
I have a comment inline:
On Monday 11 September 2017 04:38 PM, De Lara Guarch, Pablo wrote:
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
>> Sent: Wednesday, August 30, 2017 9:31 AM
>> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Doherty,
>> Declan <declan.doherty@intel.com>; Trahe, Fiona
>> <fiona.trahe@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>;
>> Griffin, John <john.griffin@intel.com>;
>> jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single
>> mempool
>>
>> Hi Pablo,
>> On 8/18/2017 1:35 PM, Pablo de Lara wrote:
>>> In order to improve memory utilization, a single mempool is created,
>>> containing the crypto operation and mbufs (one if operation is
>>> in-place, two if out-of-place).
>>> This way, a single object is allocated and freed per operation,
>>> reducing the amount of memory in cache, which improves scalability.
>>>
>>> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
>>> ---
>>> app/test-crypto-perf/cperf_ops.c | 96 ++++++--
>>> app/test-crypto-perf/cperf_ops.h | 2 +-
>>> app/test-crypto-perf/cperf_test_latency.c | 350 ++++++++++++-------
> ---
>> ----
>>> app/test-crypto-perf/cperf_test_throughput.c | 347
>>> ++++++++++++------
>> --------
>>> app/test-crypto-perf/cperf_test_verify.c | 356 ++++++++++++--------
> ---
>> ----
>>> 5 files changed, 553 insertions(+), 598 deletions(-)
>>>
>> NACK.
>> This patch replaces rte_pktmbuf_pool_create with the
>> rte_mempool_create for mbufs, which is not a preferred way to allocate
> memory for pktmbuf.
>>
>> Any example/test application in DPDK should not be using this, as this
>> kind of usages will not be compatible for all dpdk drivers in general.
>>
>> This kind of usages of rte_mempool_create will not work for any
>> devices using hw offloaded memory pools for pktmbuf.
>> one such example is dpaa2.
>
> Hi Akhil,
>
> Sorry for the delay on this reply and thanks for the review.
>
> I think, since we are not getting the buffers from the NIC, but we are allocating
> them ourselves, it is not strictly required to call rte_pktmbuf_pool_create.
> In the end, we only need them for memory for the crypto PMDs and we are not touching
> anything in them, so I think using calling rte_mempool_create should work ok.
> Having a single mempool would be way more performant and would avoid the scalability
> issues that we are having in this application now, and knowing that this application
> was created to test crypto PMD performance, I think it is worth trying this out.
>
> What is it exactly needed for dpaa2? Is the mempool handler?
If I recall correctly:
This is the call flow when rte_pktmbuf_pool_create is called:
- rte_pktmbuf_pool_create
`-> rte_mempool_create_empty
`-> allocate and fill mempool object with defaults
`-> rte_mempool_set_ops_byname
`-> sets mempool handler to RTE_MBUF_DEFAULT_MEMPOOL_OPS
`-> rte_mempool_populate_default
`-> calls pool handler specific enqueue/dequeue
but that of rte_mempool_create is:
- rte_mempool_create
`-> rte_mempool_create_empty
`-> allocate and fill mempool object with defaults
`-> rte_mempool_set_ops_byname
`-> set to one of ring_*_*
No check/logic for configuration defined handler
like RTE_MBUF_DEFAULT_MEMPOOL_OPS
`-> rte_mempool_populate_default
`-> calls ring* handler specific enqueue/dequeue
Calling rte_mempool_create bypasses the check for any mempool handler
configured through the build system.
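The manual sequence Shreyansh describes (and which would preserve the build-time handler) can be sketched roughly as below. This is an illustrative fragment, not code from the patch: the helper name is hypothetical, it targets the 17.x mempool API, and it needs an initialized EAL to actually run.

```c
#include <rte_mempool.h>
#include <rte_mbuf.h>

/* Hypothetical sketch: create a mempool the way rte_pktmbuf_pool_create()
 * does internally, so the mempool handler configured at build time is
 * honored (e.g. a hw-backed handler on dpaa2) instead of the ring_*
 * default that rte_mempool_create() hardcodes. */
static struct rte_mempool *
cperf_pool_create(const char *name, unsigned int n, uint32_t obj_size,
		  int socket_id)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty(name, n, obj_size,
			512 /* cache size */, 0, socket_id, 0);
	if (mp == NULL)
		return NULL;

	/* Set the ops *before* populating; switching the handler after
	 * the pool has been populated would be incorrect. */
	if (rte_mempool_set_ops_byname(mp,
			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL) != 0 ||
			rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}
```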
> Would it work for you if I create the mempool in a similar way as what
> rte_pktmbuf_pool_create is doing? Calling rte_mempool_set_ops_byname?
Yes, but that would mean using the combination of
rte_mempool_create_empty and rte_mempool_set_ops_byname which,
eventually, would be equal to using rte_pktmbuf_pool_create.
rte_mempool_set_ops_byname over a mempool created by rte_mempool_create
would mean changing the enqueue/dequeue operations *after* the mempool
has been populated. That would be incorrect.
I am not sure what the intent is - whether these buffers should be
allowed to be offloaded to hardware. If yes, then rte_mempool_create
wouldn't help.
>
> Thanks!
> Pablo
>
>
>>
>> -Akhil
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool
2017-09-11 13:10 ` Shreyansh Jain
@ 2017-09-11 13:56 ` De Lara Guarch, Pablo
0 siblings, 0 replies; 49+ messages in thread
From: De Lara Guarch, Pablo @ 2017-09-11 13:56 UTC (permalink / raw)
To: Shreyansh Jain, Akhil Goyal
Cc: Doherty, Declan, Trahe, Fiona, Jain, Deepak K, Griffin, John,
jerin.jacob, hemant.agrawal, dev
> -----Original Message-----
> From: Shreyansh Jain [mailto:shreyansh.jain@nxp.com]
> Sent: Monday, September 11, 2017 2:11 PM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Akhil Goyal
> <akhil.goyal@nxp.com>
> Cc: Doherty, Declan <declan.doherty@intel.com>; Trahe, Fiona
> <fiona.trahe@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>;
> Griffin, John <john.griffin@intel.com>; jerin.jacob@caviumnetworks.com;
> hemant.agrawal@nxp.com; dev@dpdk.org
> Subject: Re: [PATCH 6/6] app/crypto-perf: use single mempool
>
> Hello Pablo,
>
> I have a comment inline:
>
> On Monday 11 September 2017 04:38 PM, De Lara Guarch, Pablo wrote:
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
> >> Sent: Wednesday, August 30, 2017 9:31 AM
> >> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Doherty,
> >> Declan <declan.doherty@intel.com>; Trahe, Fiona
> >> <fiona.trahe@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>;
> >> Griffin, John <john.griffin@intel.com>;
> >> jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com
> >> Cc: dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single
> >> mempool
> >>
> >> Hi Pablo,
> >> On 8/18/2017 1:35 PM, Pablo de Lara wrote:
> >>> In order to improve memory utilization, a single mempool is created,
> >>> containing the crypto operation and mbufs (one if operation is
> >>> in-place, two if out-of-place).
> >>> This way, a single object is allocated and freed per operation,
> >>> reducing the amount of memory in cache, which improves scalability.
> >>>
> >>> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> >>> ---
> >>> app/test-crypto-perf/cperf_ops.c | 96 ++++++--
> >>> app/test-crypto-perf/cperf_ops.h | 2 +-
> >>> app/test-crypto-perf/cperf_test_latency.c | 350 ++++++++++++-----
> --
> > ---
> >> ----
> >>> app/test-crypto-perf/cperf_test_throughput.c | 347
> >>> ++++++++++++------
> >> --------
> >>> app/test-crypto-perf/cperf_test_verify.c | 356 ++++++++++++-------
> -
> > ---
> >> ----
> >>> 5 files changed, 553 insertions(+), 598 deletions(-)
> >>>
> >> NACK.
> >> This patch replaces rte_pktmbuf_pool_create with the
> >> rte_mempool_create for mbufs, which is not a preferred way to
> >> allocate
> > memory for pktmbuf.
> >>
> >> Any example/test application in DPDK should not be using this, as
> >> this kind of usages will not be compatible for all dpdk drivers in general.
> >>
> >> This kind of usages of rte_mempool_create will not work for any
> >> devices using hw offloaded memory pools for pktmbuf.
> >> one such example is dpaa2.
> >
> > Hi Akhil,
> >
> > Sorry for the delay on this reply and thanks for the review.
> >
> > I think, since we are not getting the buffers from the NIC, but we are
> > allocating them ourselves, it is not strictly required to call
> rte_pktmbuf_pool_create.
> > In the end, we only need them for memory for the crypto PMDs and we
> > are not touching anything in them, so I think using calling
> rte_mempool_create should work ok.
> > Having a single mempool would be way more performant and would
> avoid
> > the scalability issues that we are having in this application now, and
> > knowing that this application was created to test crypto PMD
> performance, I think it is worth trying this out.
> >
> > What is it exactly needed for dpaa2? Is the mempool handler?
>
> If I recall correctly:
> This is the call flow when rte_pktmbuf_pool_create is called:
> - rte_pktmbuf_pool_create
> `-> rte_mempool_create_empty
> `-> allocate and fill mempool object with defaults
> `-> rte_mempool_set_ops_byname
> `-> sets mempool handler to RTE_MBUF_DEFAULT_MEMPOOL_OPS
> `-> rte_mempool_populate_default
> `-> calls pool handler specific enqueue/dequeue
>
> but that of rte_mempool_create is:
> - rte_mempool_create
> `-> rte_mempool_create_empty
> `-> allocate and fill mempool object with defaults
> `-> rte_mempool_set_ops_byname
> `-> set to one of ring_*_*
> No check/logic for configuration defined handler
> like RTE_MBUF_DEFAULT_MEMPOOL_OPS
> `-> rte_mempool_populate_default
> `-> calls ring* handler specific enqueue/dequeue
>
> Calling rte_mempool_create bypasses the check for any mempool handler
> configured through the build system.
>
> > Would it work for you if I create the mempool in a similar way as what
> > rte_pktmbuf_pool_create is doing? Calling
> rte_mempool_set_ops_byname?
>
> Yes, but that would mean using the combination of
> rte_mempool_create_empty and rte_mempool_set_ops_byname which,
> eventually, would be equal to using rte_pktmbuf_pool_create.
>
> rte_mempool_set_ops_byname over a mempool created by
> rte_mempool_create would mean changing the enqueue/dequeue
> operations *after* the mempool has been populated. That would be
> incorrect.
>
> I am not sure of what the intent it - whether these buffers should be
> allowed to be offloaded to hardware. If yes, then rte_mempool_create
> wouldn't help.
Ok, got it. I think I would go for the option of imitating what rte_pktmbuf_pool_create does,
but adding the flexibility of having a crypto operation and an mbuf, instead of just the mbuf.
Thanks for the input.
Pablo
>
> >
> > Thanks!
> > Pablo
> >
> >
> >>
> >> -Akhil
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 0/7] Crypto-perf app improvements
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
` (6 preceding siblings ...)
2017-09-04 13:08 ` [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Zhang, Roy Fan
@ 2017-09-13 7:20 ` Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
` (5 more replies)
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
8 siblings, 6 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:20 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
This patchset includes some improvements in the
Crypto performance application, including app fixes
and new parameter additions.
The last patch, in particular, introduces performance improvements.
Currently, crypto operations are allocated in one mempool and mbufs
in a different one. The mbufs are then extracted to an array,
which is looped through for all the crypto operations, greatly
impacting performance, as much more memory is used.
Since crypto operations and mbufs are mapped 1:1, they can share
the same mempool object (similar to having the mbuf in the private
data of the crypto operation).
This improves performance, as only a single mempool needs to be
handled and the mbufs are obtained from the cache of the mempool,
and not from a static array.
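The single-object idea from the cover letter can be modeled in plain C, independent of DPDK. This is an illustrative layout only, with hypothetical stand-in struct names; the real patch builds the equivalent layout inside each mempool element.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model (NOT DPDK code): one pool object holding the
 * crypto operation, its mbuf, and the packet data, so a single
 * allocation and a single free cover the whole 1:1-mapped set. */
struct fake_crypto_op { uint8_t opaque[64]; };  /* stand-in for rte_crypto_op */
struct fake_mbuf      { uint8_t opaque[128]; }; /* stand-in for rte_mbuf */

struct pool_obj {
	struct fake_crypto_op op;   /* op and mbuf live in ...   */
	struct fake_mbuf mbuf;      /* ... the same pool object  */
	uint8_t data[2048];         /* packet data follows the mbuf */
};

/* The mbuf sits at a fixed offset from the op, so it can be located
 * without a second mempool lookup, and freeing the op frees it too. */
static size_t mbuf_offset(void)
{
	return offsetof(struct pool_obj, mbuf);
}
```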
Changes in v2:
- Added support for multiple queue pairs
- Mempool for crypto operations and mbufs is now created
using rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
rte_mempool_populate_default() and rte_mempool_obj_iter(),
so the mempool handler is set, as per Akhil's request.
Pablo de Lara (7):
app/crypto-perf: set AAD after the crypto operation
app/crypto-perf: parse AEAD data from vectors
app/crypto-perf: parse segment size
app/crypto-perf: overwrite mbuf when verifying
app/crypto-perf: do not populate the mbufs at init
app/crypto-perf: support multiple queue pairs
app/crypto-perf: use single mempool
app/test-crypto-perf/cperf_ops.c | 136 ++++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_options.h | 6 +-
app/test-crypto-perf/cperf_options_parsing.c | 67 +++-
app/test-crypto-perf/cperf_test_latency.c | 380 +++++++++++----------
app/test-crypto-perf/cperf_test_throughput.c | 378 +++++++++++----------
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++
app/test-crypto-perf/cperf_test_verify.c | 399 ++++++++++++-----------
app/test-crypto-perf/main.c | 56 ++--
doc/guides/tools/cryptoperf.rst | 11 +-
10 files changed, 833 insertions(+), 657 deletions(-)
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
@ 2017-09-13 7:20 ` Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
` (4 subsequent siblings)
5 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:20 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
Instead of prepending the AAD (Additional Authenticated Data)
in the mbuf, it is easier to set it after the crypto operation,
as it is a read-only value, like the IV, and then it is not
restricted to the size of the mbuf headroom.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
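The offset arithmetic the patch introduces can be checked in isolation. The sketch below uses a local helper assumed to match DPDK's `RTE_ALIGN_CEIL(v, 16)` for the power-of-two case, and mirrors the `aad_offset` computation in cperf_set_ops_aead and the `priv_size` computation in the throughput/verify constructors.

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for RTE_ALIGN_CEIL(v, 16): round v up to the next
 * multiple of 16 (valid because 16 is a power of two). */
static uint16_t align_ceil_16(uint16_t v)
{
	return (uint16_t)((v + 15u) & ~15u);
}

/* Private-area layout from the patch: the IVs come first, then the
 * AAD starts at the next 16-byte boundary after the AEAD IV. */
static uint16_t aad_offset(uint16_t iv_offset, uint16_t aead_iv_len)
{
	return iv_offset + align_ceil_16(aead_iv_len);
}

/* priv_size as computed in the throughput/verify constructors:
 * all IVs (16-byte aligned) plus the AAD (16-byte aligned). */
static uint16_t priv_size(uint16_t cipher_iv_len, uint16_t auth_iv_len,
			  uint16_t aead_iv_len, uint16_t aad_sz)
{
	return align_ceil_16(cipher_iv_len + auth_iv_len + aead_iv_len)
			+ align_ceil_16(aad_sz);
}
```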
---
app/test-crypto-perf/cperf_ops.c | 16 ++++++++++++----
app/test-crypto-perf/cperf_test_latency.c | 16 ++++------------
app/test-crypto-perf/cperf_test_throughput.c | 15 +++------------
app/test-crypto-perf/cperf_test_verify.c | 20 ++++++--------------
4 files changed, 25 insertions(+), 42 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 88fb972..5be20d9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -307,6 +307,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint16_t iv_offset)
{
uint16_t i;
+ uint16_t aad_offset = iv_offset +
+ RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16);
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
@@ -318,11 +320,12 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
- sym_op->aead.data.offset =
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ sym_op->aead.data.offset = 0;
- sym_op->aead.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
- sym_op->aead.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
+ sym_op->aead.aad.data = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, aad_offset);
+ sym_op->aead.aad.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+ aad_offset);
if (options->aead_op == RTE_CRYPTO_AEAD_OP_DECRYPT) {
sym_op->aead.digest.data = test_vector->digest.data;
@@ -360,6 +363,11 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
memcpy(iv_ptr, test_vector->aead_iv.data,
test_vector->aead_iv.length);
+
+ /* Copy AAD after the IV */
+ memcpy(ops[i]->sym->aead.aad.data,
+ test_vector->aad.data,
+ test_vector->aad.length);
}
}
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 58b21ab..2a46af9 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -174,16 +174,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -289,10 +279,12 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = sizeof(struct priv_op_data) +
+ uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
test_vector->cipher_iv.length +
test_vector->auth_iv.length +
- test_vector->aead_iv.length;
+ test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 3bb1cb0..07aea6a 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -158,16 +158,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -270,8 +260,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index a314646..bc07eb6 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -162,16 +162,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -274,8 +264,10 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
@@ -362,9 +354,9 @@ cperf_verify_op(struct rte_crypto_op *op,
break;
case CPERF_AEAD:
cipher = 1;
- cipher_offset = vector->aad.length;
+ cipher_offset = 0;
auth = 1;
- auth_offset = vector->aad.length + options->test_buffer_size;
+ auth_offset = options->test_buffer_size;
break;
}
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
@ 2017-09-13 7:20 ` Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size Pablo de Lara
` (3 subsequent siblings)
5 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:20 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara, stable
Since DPDK 17.08, there are specific parameters
for AEAD algorithms, like AES-GCM. When verifying
crypto operations against test vectors, the parser
was not reading the AEAD data (such as the IV or key).
Fixes: 8a5b494a7f99 ("app/test-crypto-perf: add AEAD parameters")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 148a604..3952632 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -116,6 +116,20 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_key.data) {
+ printf("\naead_key =\n");
+ for (i = 0; i < test_vector->aead_key.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_key.length - 1))
+ printf("0x%02x", test_vector->aead_key.data[i]);
+ else
+ printf("0x%02x, ",
+ test_vector->aead_key.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->cipher_iv.data) {
printf("\ncipher_iv =\n");
for (i = 0; i < test_vector->cipher_iv.length; ++i) {
@@ -142,6 +156,19 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_iv.data) {
+ printf("\naead_iv =\n");
+ for (i = 0; i < test_vector->aead_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_iv.length - 1))
+ printf("0x%02x", test_vector->aead_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->aead_iv.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->ciphertext.data) {
printf("\nciphertext =\n");
for (i = 0; i < test_vector->ciphertext.length; ++i) {
@@ -345,6 +372,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
+ } else if (strstr(key_token, "aead_key")) {
+ rte_free(vector->aead_key.data);
+ vector->aead_key.data = data;
+ if (tc_found)
+ vector->aead_key.length = data_length;
+ else {
+ if (opts->aead_key_sz > data_length) {
+ printf("Global aead_key shorter than "
+ "aead_key_sz\n");
+ return -1;
+ }
+ vector->aead_key.length = opts->aead_key_sz;
+ }
+
} else if (strstr(key_token, "cipher_iv")) {
rte_free(vector->cipher_iv.data);
vector->cipher_iv.data = data;
@@ -373,6 +414,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_iv.length = opts->auth_iv_sz;
}
+ } else if (strstr(key_token, "aead_iv")) {
+ rte_free(vector->aead_iv.data);
+ vector->aead_iv.data = data;
+ if (tc_found)
+ vector->aead_iv.length = data_length;
+ else {
+ if (opts->aead_iv_sz > data_length) {
+ printf("Global aead iv shorter than "
+ "aead_iv_sz\n");
+ return -1;
+ }
+ vector->aead_iv.length = opts->aead_iv_sz;
+ }
+
} else if (strstr(key_token, "ciphertext")) {
rte_free(vector->ciphertext.data);
vector->ciphertext.data = data;
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
@ 2017-09-13 7:20 ` Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
` (2 subsequent siblings)
5 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:20 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
Instead of parsing the number of segments from the command line,
parse the segment size, as it is more usual to have
a fixed segment size, with different packet sizes then
requiring different numbers of segments.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
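The reasoning in the commit message amounts to a ceiling division. The helper below is illustrative, not code from the patch; it assumes the digest is appended after the payload, as the segment-size check in this patch requires.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: with a fixed --segment-sz, the number of
 * segments a buffer needs is the ceiling of
 * (payload + digest) / segment size. */
static uint32_t segments_needed(uint32_t buffer_sz, uint32_t digest_sz,
				uint32_t segment_sz)
{
	uint32_t total = buffer_sz + digest_sz;
	return (total + segment_sz - 1) / segment_sz; /* ceiling division */
}
```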
---
app/test-crypto-perf/cperf_ops.c | 24 ++++++++
app/test-crypto-perf/cperf_options.h | 4 +-
app/test-crypto-perf/cperf_options_parsing.c | 38 +++++++++----
app/test-crypto-perf/cperf_test_latency.c | 82 +++++++++++++++++-----------
app/test-crypto-perf/cperf_test_throughput.c | 82 +++++++++++++++++-----------
app/test-crypto-perf/cperf_test_verify.c | 82 +++++++++++++++++-----------
doc/guides/tools/cryptoperf.rst | 6 +-
7 files changed, 207 insertions(+), 111 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 5be20d9..ad32065 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -175,6 +175,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -256,6 +264,14 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -346,6 +362,14 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 10cd2d8..5f2b28b 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -11,7 +11,7 @@
#define CPERF_TOTAL_OPS ("total-ops")
#define CPERF_BURST_SIZE ("burst-sz")
#define CPERF_BUFFER_SIZE ("buffer-sz")
-#define CPERF_SEGMENTS_NB ("segments-nb")
+#define CPERF_SEGMENT_SIZE ("segment-sz")
#define CPERF_DEVTYPE ("devtype")
#define CPERF_OPTYPE ("optype")
@@ -66,7 +66,7 @@ struct cperf_options {
uint32_t pool_sz;
uint32_t total_ops;
- uint32_t segments_nb;
+ uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t sessionless:1;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 663f53f..9e5f486 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -324,17 +324,17 @@ parse_buffer_sz(struct cperf_options *opts, const char *arg)
}
static int
-parse_segments_nb(struct cperf_options *opts, const char *arg)
+parse_segment_sz(struct cperf_options *opts, const char *arg)
{
- int ret = parse_uint32_t(&opts->segments_nb, arg);
+ int ret = parse_uint32_t(&opts->segment_sz, arg);
if (ret) {
- RTE_LOG(ERR, USER1, "failed to parse segments number\n");
+ RTE_LOG(ERR, USER1, "failed to parse segment size\n");
return -1;
}
- if ((opts->segments_nb == 0) || (opts->segments_nb > 255)) {
- RTE_LOG(ERR, USER1, "invalid segments number specified\n");
+ if (opts->segment_sz == 0) {
+ RTE_LOG(ERR, USER1, "Segment size has to be bigger than 0\n");
return -1;
}
@@ -642,7 +642,7 @@ static struct option lgopts[] = {
{ CPERF_TOTAL_OPS, required_argument, 0, 0 },
{ CPERF_BURST_SIZE, required_argument, 0, 0 },
{ CPERF_BUFFER_SIZE, required_argument, 0, 0 },
- { CPERF_SEGMENTS_NB, required_argument, 0, 0 },
+ { CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
{ CPERF_OPTYPE, required_argument, 0, 0 },
@@ -699,7 +699,11 @@ cperf_options_default(struct cperf_options *opts)
opts->min_burst_size = 32;
opts->inc_burst_size = 0;
- opts->segments_nb = 1;
+ /*
+ * Will be parsed from command line or set to
+ * maximum buffer size + digest, later
+ */
+ opts->segment_sz = 0;
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
@@ -741,7 +745,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_TOTAL_OPS, parse_total_ops },
{ CPERF_BURST_SIZE, parse_burst_sz },
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
- { CPERF_SEGMENTS_NB, parse_segments_nb },
+ { CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DEVTYPE, parse_device_type },
{ CPERF_OPTYPE, parse_op_type },
{ CPERF_SESSIONLESS, parse_sessionless },
@@ -849,9 +853,21 @@ check_cipher_buffer_length(struct cperf_options *options)
int
cperf_options_check(struct cperf_options *options)
{
- if (options->segments_nb > options->min_buffer_size) {
+ if (options->op_type == CPERF_CIPHER_ONLY)
+ options->digest_sz = 0;
+
+ /*
+ * If segment size is not set, assume only one segment,
+ * big enough to contain the largest buffer and the digest
+ */
+ if (options->segment_sz == 0)
+ options->segment_sz = options->max_buffer_size +
+ options->digest_sz;
+
+ if (options->segment_sz < options->digest_sz) {
RTE_LOG(ERR, USER1,
- "Segments number greater than buffer size.\n");
+ "Segment size should be at least "
+ "the size of the digest\n");
return -EINVAL;
}
@@ -967,7 +983,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("%u ", opts->burst_size_list[size_idx]);
printf("\n");
}
- printf("\n# segments per buffer: %u\n", opts->segments_nb);
+ printf("\n# segment size: %u\n", opts->segment_sz);
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 2a46af9..b272bb1 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -116,18 +116,18 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -137,11 +137,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -154,22 +161,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -217,13 +234,14 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -236,7 +254,9 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -251,9 +271,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -267,8 +285,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -339,7 +357,7 @@ cperf_latency_test_runner(void *arg)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 07aea6a..d5e93f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -100,18 +100,18 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -121,11 +121,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -138,22 +145,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -200,13 +217,14 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -218,7 +236,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -232,9 +252,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -248,8 +266,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -297,7 +315,7 @@ cperf_throughput_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index bc07eb6..6f790ce 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -104,18 +104,18 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -125,11 +125,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -142,22 +149,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ if (m == NULL)
+ goto error;
+
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -204,13 +221,14 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -222,7 +240,9 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -236,9 +256,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -252,8 +270,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -405,7 +423,7 @@ cperf_verify_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 457f817..23b2b98 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -170,9 +170,11 @@ The following are the appication command-line options:
* List of values, up to 32 values, separated in commas (i.e. ``--buffer-sz 32,64,128``)
-* ``--segments-nb <n>``
+* ``--segment-sz <n>``
- Set the number of segments per packet.
+ Set the size of the segment to use, for Scatter Gather List testing.
+ By default, it is set to the size of the maximum buffer size, including the digest size,
+ so a single segment is created.
* ``--devtype <name>``
--
2.9.4
* [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (2 preceding siblings ...)
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size Pablo de Lara
@ 2017-09-13 7:20 ` Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
5 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:20 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
When running the verify test, mbufs in the pool were
populated with the test vector loaded from a file.
To avoid limiting the number of operations to the pool size,
mbufs will be rewritten with the test vector before
linking them to the crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options_parsing.c | 7 ------
app/test-crypto-perf/cperf_test_verify.c | 35 ++++++++++++++++++++++++++++
2 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 9e5f486..52b884f 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -900,13 +900,6 @@ cperf_options_check(struct cperf_options *options)
}
if (options->test == CPERF_TEST_TYPE_VERIFY &&
- options->total_ops > options->pool_sz) {
- RTE_LOG(ERR, USER1, "Total number of ops must be less than or"
- " equal to the pool size.\n");
- return -EINVAL;
- }
-
- if (options->test == CPERF_TEST_TYPE_VERIFY &&
(options->inc_buffer_size != 0 ||
options->buffer_size_count > 1)) {
RTE_LOG(ERR, USER1, "Only one buffer size is allowed when "
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 6f790ce..03474cb 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -187,6 +187,34 @@ cperf_mbuf_create(struct rte_mempool *mempool,
return NULL;
}
+static void
+cperf_mbuf_set(struct rte_mbuf *mbuf,
+ const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector)
+{
+ uint32_t segment_sz = options->segment_sz;
+ uint8_t *mbuf_data;
+ uint8_t *test_data =
+ (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ test_vector->plaintext.data :
+ test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
+
+ while (remaining_bytes) {
+ mbuf_data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ return;
+ }
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ mbuf = mbuf->next;
+ }
+}
+
void *
cperf_verify_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -469,6 +497,13 @@ cperf_verify_test_runner(void *test_ctx)
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
+
+ /* Populate the mbuf with the test vector, for verification */
+ for (i = 0; i < ops_needed; i++)
+ cperf_mbuf_set(ops[i]->sym->m_src,
+ ctx->options,
+ ctx->test_vector);
+
#ifdef CPERF_LINEARIZATION_ENABLE
if (linearize) {
/* PMD doesn't support scatter-gather and source buffer
--
2.9.4
* [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (3 preceding siblings ...)
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
@ 2017-09-13 7:20 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
5 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:20 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
For throughput and latency tests, it is not required
to populate the mbufs with any test vector.
For the verify test, there is already a function that rewrites
the mbufs every time they are going to be used with
crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_latency.c | 31 ++++++++--------------------
app/test-crypto-perf/cperf_test_throughput.c | 31 ++++++++--------------------
app/test-crypto-perf/cperf_test_verify.c | 31 ++++++++--------------------
3 files changed, 27 insertions(+), 66 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index b272bb1..997844a 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -118,15 +118,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -137,15 +132,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -161,15 +152,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -257,7 +244,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -286,7 +273,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index d5e93f7..121ceb1 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -102,15 +102,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -121,15 +116,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -145,15 +136,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -239,7 +226,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -267,7 +254,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 03474cb..b18426c 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -106,15 +106,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -125,15 +120,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -149,15 +140,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -271,7 +258,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -299,7 +286,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
--
2.9.4
* [dpdk-dev] [PATCH v2 0/7] Crypto-perf app improvements
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
` (7 preceding siblings ...)
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
` (6 more replies)
8 siblings, 7 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
This patchset includes some improvements in the
Crypto performance application, including app fixes
and new parameter additions.
The last patch, in particular, introduces performance improvements.
Currently, crypto operations are allocated in a mempool and mbufs
in a different one. Then mbufs are extracted to an array,
which is looped through for all the crypto operations, greatly
impacting performance, as much more memory is used.
Since crypto operations and mbufs are mapped 1:1, they can share
the same mempool object (similar to having the mbuf in the private
data of the crypto operation).
This improves performance, as only a single mempool needs to be
handled, and the mbufs are obtained from the mempool cache
rather than from a static array.
Changes in v2:
- Added support for multiple queue pairs
- Mempool for crypto operations and mbufs is now created
using rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
rte_mempool_populate_default() and rte_mempool_obj_iter(),
so mempool handler is set, as per Akhil's request.
Pablo de Lara (7):
app/crypto-perf: set AAD after the crypto operation
app/crypto-perf: parse AEAD data from vectors
app/crypto-perf: parse segment size
app/crypto-perf: overwrite mbuf when verifying
app/crypto-perf: do not populate the mbufs at init
app/crypto-perf: support multiple queue pairs
app/crypto-perf: use single mempool
app/test-crypto-perf/cperf_ops.c | 136 ++++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_options.h | 6 +-
app/test-crypto-perf/cperf_options_parsing.c | 67 +++-
app/test-crypto-perf/cperf_test_latency.c | 380 +++++++++++----------
app/test-crypto-perf/cperf_test_throughput.c | 378 +++++++++++----------
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++
app/test-crypto-perf/cperf_test_verify.c | 399 ++++++++++++-----------
app/test-crypto-perf/main.c | 56 ++--
doc/guides/tools/cryptoperf.rst | 11 +-
10 files changed, 833 insertions(+), 657 deletions(-)
--
2.9.4
* [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
` (5 subsequent siblings)
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
Instead of prepending the AAD (Additional Authenticated Data)
to the mbuf, it is easier to set it after the crypto operation,
as it is a read-only value, like the IV, and then it is not
restricted by the size of the mbuf headroom.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 16 ++++++++++++----
app/test-crypto-perf/cperf_test_latency.c | 16 ++++------------
app/test-crypto-perf/cperf_test_throughput.c | 15 +++------------
app/test-crypto-perf/cperf_test_verify.c | 20 ++++++--------------
4 files changed, 25 insertions(+), 42 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 88fb972..5be20d9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -307,6 +307,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint16_t iv_offset)
{
uint16_t i;
+ uint16_t aad_offset = iv_offset +
+ RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16);
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
@@ -318,11 +320,12 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
- sym_op->aead.data.offset =
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ sym_op->aead.data.offset = 0;
- sym_op->aead.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
- sym_op->aead.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
+ sym_op->aead.aad.data = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, aad_offset);
+ sym_op->aead.aad.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+ aad_offset);
if (options->aead_op == RTE_CRYPTO_AEAD_OP_DECRYPT) {
sym_op->aead.digest.data = test_vector->digest.data;
@@ -360,6 +363,11 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
memcpy(iv_ptr, test_vector->aead_iv.data,
test_vector->aead_iv.length);
+
+ /* Copy AAD after the IV */
+ memcpy(ops[i]->sym->aead.aad.data,
+ test_vector->aad.data,
+ test_vector->aad.length);
}
}
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 58b21ab..2a46af9 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -174,16 +174,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -289,10 +279,12 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = sizeof(struct priv_op_data) +
+ uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
test_vector->cipher_iv.length +
test_vector->auth_iv.length +
- test_vector->aead_iv.length;
+ test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 3bb1cb0..07aea6a 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -158,16 +158,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -270,8 +260,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index a314646..bc07eb6 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -162,16 +162,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -274,8 +264,10 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
@@ -362,9 +354,9 @@ cperf_verify_op(struct rte_crypto_op *op,
break;
case CPERF_AEAD:
cipher = 1;
- cipher_offset = vector->aad.length;
+ cipher_offset = 0;
auth = 1;
- auth_offset = vector->aad.length + options->test_buffer_size;
+ auth_offset = options->test_buffer_size;
break;
}
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size Pablo de Lara
` (4 subsequent siblings)
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara, stable
Since DPDK 17.08, there are specific parameters
for AEAD algorithms, like AES-GCM. When verifying
crypto operations with test vectors, the parser
was not reading the AEAD data (such as the IV or key).
Fixes: 8a5b494a7f99 ("app/test-crypto-perf: add AEAD parameters")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
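For reference, a hypothetical fragment of a test vector file exercising the new
entries (the key names match the parser tokens below; the byte values are made up):

```
aead_key =
0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
aead_iv =
0xca, 0xfe, 0xba, 0xbe, 0xfa, 0xce, 0xdb, 0xad
```

As with the existing cipher_key/cipher_iv entries, the global values are truncated
to aead_key_sz/aead_iv_sz, and an error is reported if they are shorter.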
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 148a604..3952632 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -116,6 +116,20 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_key.data) {
+ printf("\naead_key =\n");
+ for (i = 0; i < test_vector->aead_key.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_key.length - 1))
+ printf("0x%02x", test_vector->aead_key.data[i]);
+ else
+ printf("0x%02x, ",
+ test_vector->aead_key.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->cipher_iv.data) {
printf("\ncipher_iv =\n");
for (i = 0; i < test_vector->cipher_iv.length; ++i) {
@@ -142,6 +156,19 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_iv.data) {
+ printf("\naead_iv =\n");
+ for (i = 0; i < test_vector->aead_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_iv.length - 1))
+ printf("0x%02x", test_vector->aead_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->aead_iv.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->ciphertext.data) {
printf("\nciphertext =\n");
for (i = 0; i < test_vector->ciphertext.length; ++i) {
@@ -345,6 +372,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
+ } else if (strstr(key_token, "aead_key")) {
+ rte_free(vector->aead_key.data);
+ vector->aead_key.data = data;
+ if (tc_found)
+ vector->aead_key.length = data_length;
+ else {
+ if (opts->aead_key_sz > data_length) {
+ printf("Global aead_key shorter than "
+ "aead_key_sz\n");
+ return -1;
+ }
+ vector->aead_key.length = opts->aead_key_sz;
+ }
+
} else if (strstr(key_token, "cipher_iv")) {
rte_free(vector->cipher_iv.data);
vector->cipher_iv.data = data;
@@ -373,6 +414,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_iv.length = opts->auth_iv_sz;
}
+ } else if (strstr(key_token, "aead_iv")) {
+ rte_free(vector->aead_iv.data);
+ vector->aead_iv.data = data;
+ if (tc_found)
+ vector->aead_iv.length = data_length;
+ else {
+ if (opts->aead_iv_sz > data_length) {
+ printf("Global aead iv shorter than "
+ "aead_iv_sz\n");
+ return -1;
+ }
+ vector->aead_iv.length = opts->aead_iv_sz;
+ }
+
} else if (strstr(key_token, "ciphertext")) {
rte_free(vector->ciphertext.data);
vector->ciphertext.data = data;
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
` (3 subsequent siblings)
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
Instead of parsing the number of segments from the command line,
parse the segment size, as it is the more usual case to have
a fixed segment size, with different packet sizes then
requiring different numbers of segments.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 24 ++++++++
app/test-crypto-perf/cperf_options.h | 4 +-
app/test-crypto-perf/cperf_options_parsing.c | 38 +++++++++----
app/test-crypto-perf/cperf_test_latency.c | 82 +++++++++++++++++-----------
app/test-crypto-perf/cperf_test_throughput.c | 82 +++++++++++++++++-----------
app/test-crypto-perf/cperf_test_verify.c | 82 +++++++++++++++++-----------
doc/guides/tools/cryptoperf.rst | 6 +-
7 files changed, 207 insertions(+), 111 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 5be20d9..ad32065 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -175,6 +175,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -256,6 +264,14 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -346,6 +362,14 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 10cd2d8..5f2b28b 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -11,7 +11,7 @@
#define CPERF_TOTAL_OPS ("total-ops")
#define CPERF_BURST_SIZE ("burst-sz")
#define CPERF_BUFFER_SIZE ("buffer-sz")
-#define CPERF_SEGMENTS_NB ("segments-nb")
+#define CPERF_SEGMENT_SIZE ("segment-sz")
#define CPERF_DEVTYPE ("devtype")
#define CPERF_OPTYPE ("optype")
@@ -66,7 +66,7 @@ struct cperf_options {
uint32_t pool_sz;
uint32_t total_ops;
- uint32_t segments_nb;
+ uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t sessionless:1;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 663f53f..9e5f486 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -324,17 +324,17 @@ parse_buffer_sz(struct cperf_options *opts, const char *arg)
}
static int
-parse_segments_nb(struct cperf_options *opts, const char *arg)
+parse_segment_sz(struct cperf_options *opts, const char *arg)
{
- int ret = parse_uint32_t(&opts->segments_nb, arg);
+ int ret = parse_uint32_t(&opts->segment_sz, arg);
if (ret) {
- RTE_LOG(ERR, USER1, "failed to parse segments number\n");
+ RTE_LOG(ERR, USER1, "failed to parse segment size\n");
return -1;
}
- if ((opts->segments_nb == 0) || (opts->segments_nb > 255)) {
- RTE_LOG(ERR, USER1, "invalid segments number specified\n");
+ if (opts->segment_sz == 0) {
+ RTE_LOG(ERR, USER1, "Segment size has to be bigger than 0\n");
return -1;
}
@@ -642,7 +642,7 @@ static struct option lgopts[] = {
{ CPERF_TOTAL_OPS, required_argument, 0, 0 },
{ CPERF_BURST_SIZE, required_argument, 0, 0 },
{ CPERF_BUFFER_SIZE, required_argument, 0, 0 },
- { CPERF_SEGMENTS_NB, required_argument, 0, 0 },
+ { CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
{ CPERF_OPTYPE, required_argument, 0, 0 },
@@ -699,7 +699,11 @@ cperf_options_default(struct cperf_options *opts)
opts->min_burst_size = 32;
opts->inc_burst_size = 0;
- opts->segments_nb = 1;
+ /*
+ * Will be parsed from command line or set to
+ * maximum buffer size + digest, later
+ */
+ opts->segment_sz = 0;
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
@@ -741,7 +745,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_TOTAL_OPS, parse_total_ops },
{ CPERF_BURST_SIZE, parse_burst_sz },
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
- { CPERF_SEGMENTS_NB, parse_segments_nb },
+ { CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DEVTYPE, parse_device_type },
{ CPERF_OPTYPE, parse_op_type },
{ CPERF_SESSIONLESS, parse_sessionless },
@@ -849,9 +853,21 @@ check_cipher_buffer_length(struct cperf_options *options)
int
cperf_options_check(struct cperf_options *options)
{
- if (options->segments_nb > options->min_buffer_size) {
+ if (options->op_type == CPERF_CIPHER_ONLY)
+ options->digest_sz = 0;
+
+ /*
+ * If segment size is not set, assume only one segment,
+ * big enough to contain the largest buffer and the digest
+ */
+ if (options->segment_sz == 0)
+ options->segment_sz = options->max_buffer_size +
+ options->digest_sz;
+
+ if (options->segment_sz < options->digest_sz) {
RTE_LOG(ERR, USER1,
- "Segments number greater than buffer size.\n");
+ "Segment size should be at least "
+ "the size of the digest\n");
return -EINVAL;
}
@@ -967,7 +983,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("%u ", opts->burst_size_list[size_idx]);
printf("\n");
}
- printf("\n# segments per buffer: %u\n", opts->segments_nb);
+ printf("\n# segment size: %u\n", opts->segment_sz);
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 2a46af9..b272bb1 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -116,18 +116,18 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -137,11 +137,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -154,22 +161,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -217,13 +234,14 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -236,7 +254,9 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -251,9 +271,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -267,8 +285,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -339,7 +357,7 @@ cperf_latency_test_runner(void *arg)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 07aea6a..d5e93f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -100,18 +100,18 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -121,11 +121,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -138,22 +145,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -200,13 +217,14 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -218,7 +236,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -232,9 +252,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -248,8 +266,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -297,7 +315,7 @@ cperf_throughput_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index bc07eb6..6f790ce 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -104,18 +104,18 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint32_t segment_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -125,11 +125,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
+ segment_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -142,22 +149,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
+ segment_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segment_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ if (m == NULL)
+ goto error;
+
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -204,13 +221,14 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint32_t segment_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segment_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -222,7 +240,9 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segment_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -236,9 +256,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -252,8 +270,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -405,7 +423,7 @@ cperf_verify_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 457f817..23b2b98 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -170,9 +170,11 @@ The following are the application command-line options:
* List of values, up to 32 values, separated in commas (i.e. ``--buffer-sz 32,64,128``)
-* ``--segments-nb <n>``
+* ``--segment-sz <n>``
- Set the number of segments per packet.
+ Set the size of the segment to use, for Scatter Gather List testing.
+ By default, it is set to the maximum buffer size, including the digest size,
+ so a single segment is created.
* ``--devtype <name>``
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (2 preceding siblings ...)
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
` (2 subsequent siblings)
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
When running the verify test, mbufs in the pool were
populated with the test vector loaded from a file.
To avoid limiting the number of operations to the pool size,
mbufs are now rewritten with the test vector before
being linked to the crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options_parsing.c | 7 ------
app/test-crypto-perf/cperf_test_verify.c | 35 ++++++++++++++++++++++++++++
2 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 9e5f486..52b884f 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -900,13 +900,6 @@ cperf_options_check(struct cperf_options *options)
}
if (options->test == CPERF_TEST_TYPE_VERIFY &&
- options->total_ops > options->pool_sz) {
- RTE_LOG(ERR, USER1, "Total number of ops must be less than or"
- " equal to the pool size.\n");
- return -EINVAL;
- }
-
- if (options->test == CPERF_TEST_TYPE_VERIFY &&
(options->inc_buffer_size != 0 ||
options->buffer_size_count > 1)) {
RTE_LOG(ERR, USER1, "Only one buffer size is allowed when "
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 6f790ce..03474cb 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -187,6 +187,34 @@ cperf_mbuf_create(struct rte_mempool *mempool,
return NULL;
}
+static void
+cperf_mbuf_set(struct rte_mbuf *mbuf,
+ const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector)
+{
+ uint32_t segment_sz = options->segment_sz;
+ uint8_t *mbuf_data;
+ uint8_t *test_data =
+ (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ test_vector->plaintext.data :
+ test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
+
+ while (remaining_bytes) {
+ mbuf_data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ return;
+ }
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ mbuf = mbuf->next;
+ }
+}
+
void *
cperf_verify_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -469,6 +497,13 @@ cperf_verify_test_runner(void *test_ctx)
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
+
+ /* Populate the mbuf with the test vector, for verification */
+ for (i = 0; i < ops_needed; i++)
+ cperf_mbuf_set(ops[i]->sym->m_src,
+ ctx->options,
+ ctx->test_vector);
+
#ifdef CPERF_LINEARIZATION_ENABLE
if (linearize) {
/* PMD doesn't support scatter-gather and source buffer
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (3 preceding siblings ...)
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 7/7] app/crypto-perf: use single mempool Pablo de Lara
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
For the throughput and latency tests, there is no need
to populate the mbufs with any test vector.
For the verify test, a function already rewrites
the mbufs every time they are about to be used with
crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_latency.c | 31 ++++++++--------------------
app/test-crypto-perf/cperf_test_throughput.c | 31 ++++++++--------------------
app/test-crypto-perf/cperf_test_verify.c | 31 ++++++++--------------------
3 files changed, 27 insertions(+), 66 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index b272bb1..997844a 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -118,15 +118,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -137,15 +132,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -161,15 +152,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -257,7 +244,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -286,7 +273,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index d5e93f7..121ceb1 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -102,15 +102,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -121,15 +116,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -145,15 +136,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -239,7 +226,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -267,7 +254,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 03474cb..b18426c 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -106,15 +106,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segment_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -125,15 +120,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segment_nb--;
while (remaining_bytes) {
@@ -149,15 +140,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segment_nb--;
}
@@ -271,7 +258,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segment_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -299,7 +286,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 6/7] app/crypto-perf: support multiple queue pairs
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (4 preceding siblings ...)
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 7/7] app/crypto-perf: use single mempool Pablo de Lara
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
Add a "qps" parameter to the crypto performance app,
to create multiple queue pairs per device.
This new parameter makes it possible for multiple logical
cores to use a single crypto device, without needing
to initialize a crypto device per core.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options.h | 2 +
app/test-crypto-perf/cperf_options_parsing.c | 22 +++++++++++
app/test-crypto-perf/cperf_test_latency.c | 14 +++----
app/test-crypto-perf/cperf_test_throughput.c | 14 +++----
app/test-crypto-perf/cperf_test_verify.c | 14 +++----
app/test-crypto-perf/main.c | 56 +++++++++++++++++-----------
doc/guides/tools/cryptoperf.rst | 5 +++
7 files changed, 81 insertions(+), 46 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 5f2b28b..362f762 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -14,6 +14,7 @@
#define CPERF_SEGMENT_SIZE ("segment-sz")
#define CPERF_DEVTYPE ("devtype")
+#define CPERF_NUM_QPS ("qps")
#define CPERF_OPTYPE ("optype")
#define CPERF_SESSIONLESS ("sessionless")
#define CPERF_OUT_OF_PLACE ("out-of-place")
@@ -67,6 +68,7 @@ struct cperf_options {
uint32_t pool_sz;
uint32_t total_ops;
uint32_t segment_sz;
+ uint32_t num_qps;
uint32_t test_buffer_size;
uint32_t sessionless:1;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 52b884f..d4f2c31 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -342,6 +342,24 @@ parse_segment_sz(struct cperf_options *opts, const char *arg)
}
static int
+parse_num_qps(struct cperf_options *opts, const char *arg)
+{
+ int ret = parse_uint32_t(&opts->num_qps, arg);
+
+ if (ret) {
+ RTE_LOG(ERR, USER1, "failed to parse number of queue pairs\n");
+ return -1;
+ }
+
+ if ((opts->num_qps == 0) || (opts->num_qps > 256)) {
+ RTE_LOG(ERR, USER1, "invalid number of queue pairs specified\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
parse_device_type(struct cperf_options *opts, const char *arg)
{
if (strlen(arg) > (sizeof(opts->device_type) - 1))
@@ -645,6 +663,7 @@ static struct option lgopts[] = {
{ CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
+ { CPERF_NUM_QPS, required_argument, 0, 0 },
{ CPERF_OPTYPE, required_argument, 0, 0 },
{ CPERF_SILENT, no_argument, 0, 0 },
@@ -707,6 +726,7 @@ cperf_options_default(struct cperf_options *opts)
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
+ opts->num_qps = 1;
opts->op_type = CPERF_CIPHER_THEN_AUTH;
@@ -747,6 +767,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
{ CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DEVTYPE, parse_device_type },
+ { CPERF_NUM_QPS, parse_num_qps },
{ CPERF_OPTYPE, parse_op_type },
{ CPERF_SESSIONLESS, parse_sessionless },
{ CPERF_OUT_OF_PLACE, parse_out_of_place },
@@ -980,6 +1001,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
+ printf("# number of queue pairs per device: %u\n", opts->num_qps);
printf("# crypto operation: %s\n", cperf_op_type_strs[opts->op_type]);
printf("# sessionless: %s\n", opts->sessionless ? "yes" : "no");
printf("# out of place: %s\n", opts->out_of_place ? "yes" : "no");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 997844a..ae016a6 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -218,8 +218,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint32_t segment_nb = (max_size % options->segment_sz) ?
@@ -252,8 +252,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
snprintf(pool_name, sizeof(pool_name),
- "cperf_pool_out_cdev_%d",
- dev_id);
+ "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -281,8 +281,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
test_vector->cipher_iv.length +
@@ -583,7 +583,5 @@ cperf_latency_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_latency_test_free(ctx, ctx->options->pool_sz);
}
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 121ceb1..57d9245 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -201,8 +201,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint32_t segment_nb = (max_size % options->segment_sz) ?
@@ -233,8 +233,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -262,8 +262,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
@@ -530,7 +530,5 @@ cperf_throughput_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_throughput_test_free(ctx, ctx->options->pool_sz);
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index b18426c..c7c59d4 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -233,8 +233,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint32_t segment_nb = (max_size % options->segment_sz) ?
@@ -265,8 +265,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -294,8 +294,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
@@ -626,7 +626,5 @@ cperf_verify_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_verify_test_free(ctx, ctx->options->pool_sz);
}
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 99f5d3e..4641a22 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -83,7 +83,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
struct rte_mempool *session_pool_socket[])
{
uint8_t enabled_cdev_count = 0, nb_lcores, cdev_id;
- unsigned int i;
+ unsigned int i, j;
int ret;
enabled_cdev_count = rte_cryptodev_devices_get(opts->device_type,
@@ -118,8 +118,8 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
struct rte_cryptodev_config conf = {
- .nb_queue_pairs = 1,
- .socket_id = socket_id
+ .nb_queue_pairs = opts->num_qps,
+ .socket_id = socket_id
};
struct rte_cryptodev_qp_conf qp_conf = {
@@ -158,14 +158,16 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
return -EINVAL;
}
- ret = rte_cryptodev_queue_pair_setup(cdev_id, 0,
+ for (j = 0; j < opts->num_qps; j++) {
+ ret = rte_cryptodev_queue_pair_setup(cdev_id, j,
&qp_conf, socket_id,
session_pool_socket[socket_id]);
if (ret < 0) {
printf("Failed to setup queue pair %u on "
- "cryptodev %u", 0, cdev_id);
+ "cryptodev %u", j, cdev_id);
return -EINVAL;
}
+ }
ret = rte_cryptodev_start(cdev_id);
if (ret < 0) {
@@ -464,23 +466,29 @@ main(int argc, char **argv)
if (!opts.silent)
show_test_vector(t_vec);
+ uint16_t total_num_qps = nb_cryptodevs * opts.num_qps;
+
i = 0;
+ uint8_t qp_id = 0, cdev_index = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_num_qps)
break;
- cdev_id = enabled_cdevs[i];
+ cdev_id = enabled_cdevs[cdev_index];
uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
- ctx[cdev_id] = cperf_testmap[opts.test].constructor(
- session_pool_socket[socket_id], cdev_id, 0,
+ ctx[i] = cperf_testmap[opts.test].constructor(
+ session_pool_socket[socket_id], cdev_id, qp_id,
&opts, t_vec, &op_fns);
- if (ctx[cdev_id] == NULL) {
+ if (ctx[i] == NULL) {
RTE_LOG(ERR, USER1, "Test run constructor failed\n");
goto err;
}
+ qp_id = (qp_id + 1) % opts.num_qps;
+ if (qp_id == 0)
+ cdev_index++;
i++;
}
@@ -494,19 +502,17 @@ main(int argc, char **argv)
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_num_qps)
break;
- cdev_id = enabled_cdevs[i];
-
rte_eal_remote_launch(cperf_testmap[opts.test].runner,
- ctx[cdev_id], lcore_id);
+ ctx[i], lcore_id);
i++;
}
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_num_qps)
break;
rte_eal_wait_lcore(lcore_id);
i++;
@@ -525,15 +531,17 @@ main(int argc, char **argv)
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_num_qps)
break;
- cdev_id = enabled_cdevs[i];
-
- cperf_testmap[opts.test].destructor(ctx[cdev_id]);
+ cperf_testmap[opts.test].destructor(ctx[i]);
i++;
}
+ for (i = 0; i < nb_cryptodevs &&
+ i < RTE_CRYPTO_MAX_DEVS; i++)
+ rte_cryptodev_stop(enabled_cdevs[i]);
+
free_test_vector(t_vec, &opts);
printf("\n");
@@ -542,16 +550,20 @@ main(int argc, char **argv)
err:
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_num_qps)
break;
cdev_id = enabled_cdevs[i];
- if (ctx[cdev_id] && cperf_testmap[opts.test].destructor)
- cperf_testmap[opts.test].destructor(ctx[cdev_id]);
+ if (ctx[i] && cperf_testmap[opts.test].destructor)
+ cperf_testmap[opts.test].destructor(ctx[i]);
i++;
}
+ for (i = 0; i < nb_cryptodevs &&
+ i < RTE_CRYPTO_MAX_DEVS; i++)
+ rte_cryptodev_stop(enabled_cdevs[i]);
+
free_test_vector(t_vec, &opts);
printf("\n");
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 23b2b98..2fb0c66 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -192,6 +192,11 @@ The following are the appication command-line options:
crypto_armv8
crypto_scheduler
+* ``--qps <n>``
+
+ Set the number of queue pairs per device (1 by default).
+
+
* ``--optype <name>``
Set operation type, where ``name`` is one of the following::
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v2 7/7] app/crypto-perf: use single mempool
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (5 preceding siblings ...)
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
@ 2017-09-13 7:22 ` Pablo de Lara
6 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-13 7:22 UTC (permalink / raw)
To: declan.doherty, akhil.goyal, hemant.agrawal, jerin.jacob,
fiona.trahe, deepak.k.jain, john.griffin
Cc: dev, Pablo de Lara
In order to improve memory utilization, a single mempool
is created, with each object containing the crypto operation
and its mbufs (one if the operation is in-place, two if
out-of-place).
This way, a single object is allocated and freed
per operation, reducing the memory footprint in cache
and improving scalability.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 96 +++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_test_latency.c | 363 +++++++++++++-------------
app/test-crypto-perf/cperf_test_throughput.c | 360 +++++++++++++-------------
app/test-crypto-perf/cperf_test_verify.c | 369 +++++++++++++--------------
5 files changed, 604 insertions(+), 586 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index ad32065..f76dbdd 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -37,7 +37,7 @@
static int
cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -48,10 +48,18 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
sym_op->cipher.data.length = options->test_buffer_size;
@@ -63,7 +71,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_null_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -74,10 +82,18 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* auth parameters */
sym_op->auth.data.length = options->test_buffer_size;
@@ -89,7 +105,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -100,10 +116,18 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -132,7 +156,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -143,10 +167,18 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
if (test_vector->auth_iv.length) {
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
@@ -167,9 +199,9 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -219,7 +251,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -230,10 +262,18 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -256,9 +296,9 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -316,7 +356,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_aead(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -329,10 +369,18 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
@@ -354,9 +402,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
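The hunks above replace the per-op mbuf pointer arrays with fixed byte offsets into the shared mempool object: the source mbuf always sits at `src_buf_offset` from the start of the crypto op, and a zero `dst_buf_offset` signals an in-place operation. The pointer arithmetic can be modeled in plain C with mock structs (the names `mock_op`/`mock_mbuf` are illustrative stand-ins, not the DPDK types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for rte_crypto_op and rte_mbuf */
struct mock_op   { int type; };
struct mock_mbuf { int dummy; };

/* The source mbuf lives at a fixed offset from the start of the op */
static struct mock_mbuf *
op_src_mbuf(struct mock_op *op, uint32_t src_buf_offset)
{
	return (struct mock_mbuf *)((uint8_t *)op + src_buf_offset);
}

/* A zero dst offset means in-place: no separate destination mbuf */
static struct mock_mbuf *
op_dst_mbuf(struct mock_op *op, uint32_t dst_buf_offset)
{
	if (dst_buf_offset == 0)
		return NULL;
	return (struct mock_mbuf *)((uint8_t *)op + dst_buf_offset);
}
```

Because every object in the pool has an identical layout, the populate callbacks only need the two offsets instead of two arrays of pointers.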
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 1f8fa93..94951cc 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -47,7 +47,7 @@ typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
uint16_t iv_offset);
typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index ae016a6..8c46e9d 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -50,17 +50,15 @@ struct cperf_latency_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
struct cperf_op_result *res;
@@ -74,116 +72,127 @@ struct priv_op_data {
#define min(a, b) (a < b ? (uint64_t)a : (uint64_t)b)
static void
-cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
+cperf_latency_test_free(struct cperf_latency_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx->res);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segment_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segment_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -194,8 +203,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_latency_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
+ int ret;
ctx = rte_malloc(NULL, sizeof(struct cperf_latency_ctx), 0);
if (ctx == NULL)
@@ -218,84 +227,74 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = sizeof(struct priv_op_data) +
+ test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
- uint32_t segment_nb = (max_size % options->segment_sz) ?
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segment_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segment_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment inline */
+ obj_size += max_size;
}
- if (options->out_of_place == 1) {
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- snprintf(pool_name, sizeof(pool_name),
- "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
- test_vector->cipher_iv.length +
- test_vector->auth_iv.length +
- test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
-
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
ctx->res = rte_malloc(NULL, sizeof(struct cperf_op_result) *
ctx->options->total_ops, 0);
@@ -305,7 +304,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
return ctx;
err:
- cperf_latency_test_free(ctx, mbuf_idx);
+ cperf_latency_test_free(ctx);
return NULL;
}
@@ -370,7 +369,7 @@ cperf_latency_test_runner(void *arg)
while (test_burst_size <= ctx->options->max_burst_size) {
uint64_t ops_enqd = 0, ops_deqd = 0;
- uint64_t m_idx = 0, b_idx = 0;
+ uint64_t b_idx = 0;
uint64_t tsc_val, tsc_end, tsc_start;
uint64_t tsc_max = 0, tsc_min = ~0UL, tsc_tot = 0, tsc_idx = 0;
@@ -385,11 +384,9 @@ cperf_latency_test_runner(void *arg)
ctx->options->total_ops -
enqd_tot;
- /* Allocate crypto ops from pool */
- if (burst_size != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ burst_size) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the the crypto operation pool.\n"
@@ -399,8 +396,8 @@ cperf_latency_test_runner(void *arg)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
burst_size, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -429,7 +426,7 @@ cperf_latency_test_runner(void *arg)
/* Free memory for not enqueued operations */
if (ops_enqd != burst_size)
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)&ops[ops_enqd],
burst_size - ops_enqd);
@@ -445,16 +442,11 @@ cperf_latency_test_runner(void *arg)
}
if (likely(ops_deqd)) {
- /*
- * free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
+ /* free crypto ops so they can be reused. */
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -466,9 +458,6 @@ cperf_latency_test_runner(void *arg)
enqd_max = max(ops_enqd, enqd_max);
enqd_min = min(ops_enqd, enqd_min);
- m_idx += ops_enqd;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
b_idx++;
}
@@ -487,7 +476,7 @@ cperf_latency_test_runner(void *arg)
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -583,5 +572,5 @@ cperf_latency_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_latency_test_free(ctx, ctx->options->pool_sz);
+ cperf_latency_test_free(ctx);
}
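The constructor above sizes each mempool object as the cache-line-padded crypto op (op header, sym op, private data) followed by `segments_nb` source segments, plus one flat destination buffer when running out-of-place. That arithmetic can be checked in isolation; the sizes passed below are illustrative placeholders (and a 64-byte cache line is assumed), not the real DPDK struct sizes:

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_LINE 64
/* Round up to the next cache-line multiple, like RTE_CACHE_LINE_ROUNDUP */
#define CACHE_LINE_ROUNDUP(x) (((x) + CACHE_LINE - 1) / CACHE_LINE * CACHE_LINE)

/* Mirrors the constructor's obj_size computation */
static uint32_t
calc_obj_size(uint16_t op_size, uint16_t priv_size, uint16_t mbuf_hdr_size,
	      uint16_t segment_sz, uint32_t max_size, int out_of_place,
	      uint32_t *segments_nb_out)
{
	/* Ceiling division: enough segments to hold buffer + digest */
	uint16_t segments_nb = (max_size % segment_sz) ?
			(max_size / segment_sz) + 1 :
			max_size / segment_sz;
	uint32_t mbuf_size = mbuf_hdr_size + segment_sz;
	uint32_t obj_size = CACHE_LINE_ROUNDUP(op_size + priv_size) +
			mbuf_size * segments_nb;

	if (out_of_place)
		/* Destination buffer is one flat segment of max_size */
		obj_size += max_size;

	if (segments_nb_out)
		*segments_nb_out = segments_nb;
	return obj_size;
}
```

For example, a 1040-byte maximum buffer with 512-byte segments needs three segments, with the final segment absorbing the digest overflow.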
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 57d9245..7c11fd2 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -43,131 +43,140 @@ struct cperf_throughput_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
static void
-cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
+cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segment_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segment_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -178,8 +187,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_throughput_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
+ int ret;
ctx = rte_malloc(NULL, sizeof(struct cperf_throughput_ctx), 0);
if (ctx == NULL)
@@ -201,83 +210,77 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
- uint32_t segment_nb = (max_size % options->segment_sz) ?
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segment_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segment_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment inline */
+ obj_size += max_size;
}
- if (options->out_of_place == 1) {
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
return ctx;
err:
- cperf_throughput_test_free(ctx, mbuf_idx);
+ cperf_throughput_test_free(ctx);
return NULL;
}
@@ -329,7 +332,7 @@ cperf_throughput_test_runner(void *test_ctx)
uint64_t ops_enqd = 0, ops_enqd_total = 0, ops_enqd_failed = 0;
uint64_t ops_deqd = 0, ops_deqd_total = 0, ops_deqd_failed = 0;
- uint64_t m_idx = 0, tsc_start, tsc_end, tsc_duration;
+ uint64_t tsc_start, tsc_end, tsc_duration;
uint16_t ops_unused = 0;
@@ -345,11 +348,9 @@ cperf_throughput_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the the crypto operation pool.\n"
@@ -359,10 +360,11 @@ cperf_throughput_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
- ops_needed, ctx->sess, ctx->options,
- ctx->test_vector, iv_offset);
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
+ ops_needed, ctx->sess,
+ ctx->options, ctx->test_vector,
+ iv_offset);
/**
* When ops_needed is smaller than ops_enqd, the
@@ -407,12 +409,8 @@ cperf_throughput_test_runner(void *test_ctx)
ops_processed, test_burst_size);
if (likely(ops_deqd)) {
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
@@ -425,9 +423,6 @@ cperf_throughput_test_runner(void *test_ctx)
ops_deqd_failed++;
}
- m_idx += ops_needed;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
}
/* Dequeue any operations still in the crypto device */
@@ -442,9 +437,8 @@ cperf_throughput_test_runner(void *test_ctx)
if (ops_deqd == 0)
ops_deqd_failed++;
else {
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
-
ops_deqd_total += ops_deqd;
}
}
@@ -530,5 +524,5 @@ cperf_throughput_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_throughput_test_free(ctx, ctx->options->pool_sz);
+ cperf_throughput_test_free(ctx);
}
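fill_multi_seg_mbuf() lays chained segments out back-to-back inside the same mempool object: each segment is an mbuf header followed immediately by its data buffer, so the next pointer is just the current header plus the header size plus the segment size. A minimal stand-alone model of that chaining follows (mock struct, not the real rte_mbuf, and simplified so the final segment terminates the chain directly):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct mock_seg {
	struct mock_seg *next;
	uint16_t data_len;
};

/* Chain segments_nb headers laid out every (header + segment_sz) bytes,
 * mirroring the next-pointer arithmetic in fill_multi_seg_mbuf() */
static void
chain_segments(struct mock_seg *m, uint16_t segment_sz, uint16_t segments_nb)
{
	uint16_t hdr = sizeof(struct mock_seg);
	uint16_t i;

	for (i = 0; i < segments_nb; i++) {
		m->data_len = segment_sz;
		if (i + 1 < segments_nb) {
			/* Next header starts right after this segment's data */
			m->next = (struct mock_seg *)((uint8_t *)m +
					hdr + segment_sz);
			m = m->next;
		} else {
			m->next = NULL;
		}
	}
}
```

Since every mbuf header and buffer comes from the one object, no per-segment allocation is needed at runtime, which is the source of the cache-usage win the cover letter describes.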
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index c7c59d4..b3415f8 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -43,135 +43,140 @@ struct cperf_verify_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
-struct cperf_op_result {
- enum rte_crypto_op_status status;
-};
-
static void
-cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
+cperf_verify_test_free(struct cperf_verify_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segment_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segment_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segment_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
static void
@@ -210,8 +215,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_verify_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
+ int ret;
ctx = rte_malloc(NULL, sizeof(struct cperf_verify_ctx), 0);
if (ctx == NULL)
@@ -224,7 +229,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->options = options;
ctx->test_vector = test_vector;
- /* IV goes at the end of the cryptop operation */
+ /* IV goes at the end of the crypto operation */
uint16_t iv_offset = sizeof(struct rte_crypto_op) +
sizeof(struct rte_crypto_sym_op);
@@ -233,83 +238,77 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
- uint32_t segment_nb = (max_size % options->segment_sz) ?
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segment_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segment_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
}
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
return ctx;
err:
- cperf_verify_test_free(ctx, mbuf_idx);
+ cperf_verify_test_free(ctx);
return NULL;
}
@@ -425,7 +424,7 @@ cperf_verify_test_runner(void *test_ctx)
static int only_once;
- uint64_t i, m_idx = 0;
+ uint64_t i;
uint16_t ops_unused = 0;
struct rte_crypto_op *ops[ctx->options->max_burst_size];
@@ -465,11 +464,9 @@ cperf_verify_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the the crypto operation pool.\n"
@@ -479,8 +476,8 @@ cperf_verify_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -520,10 +517,6 @@ cperf_verify_test_runner(void *test_ctx)
ops_deqd = rte_cryptodev_dequeue_burst(ctx->dev_id, ctx->qp_id,
ops_processed, ctx->options->max_burst_size);
- m_idx += ops_needed;
- if (m_idx + ctx->options->max_burst_size > ctx->options->pool_sz)
- m_idx = 0;
-
if (ops_deqd == 0) {
/**
* Count dequeue polls which didn't return any
@@ -538,13 +531,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -566,13 +556,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -626,5 +613,5 @@ cperf_verify_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_verify_test_free(ctx, ctx->options->pool_sz);
+ cperf_verify_test_free(ctx);
}
--
2.9.4
* [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
` (4 preceding siblings ...)
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
` (7 more replies)
5 siblings, 8 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
This patchset contains several improvements to the Crypto performance
application, including app fixes and new parameter additions.
The last patch, in particular, introduces performance improvements.
Currently, crypto operations are allocated in one mempool and mbufs
in a different one. The mbufs are then extracted to an array, which is
looped through for all the crypto operations, greatly impacting
performance, as much more memory is used.
Since crypto operations and mbufs are mapped 1:1, they can share the
same mempool object (similar to having the mbuf in the private data
of the crypto operation).
This improves performance, as only a single mempool needs to be
handled and the mbufs are obtained from the mempool cache,
not from a static array.
Changes in v3:
- Renamed "number of queue pairs" option from "--qps" to "--qp-nb",
for more consistency
Changes in v2:
- Added support for multiple queue pairs
- Mempool for crypto operations and mbufs is now created
using rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
rte_mempool_populate_default() and rte_mempool_obj_iter(),
so mempool handler is set, as per Akhil's request.
Pablo de Lara (7):
app/crypto-perf: set AAD after the crypto operation
app/crypto-perf: parse AEAD data from vectors
app/crypto-perf: parse segment size
app/crypto-perf: overwrite mbuf when verifying
app/crypto-perf: do not populate the mbufs at init
app/crypto-perf: support multiple queue pairs
app/crypto-perf: use single mempool
app/test-crypto-perf/cperf_ops.c | 136 ++++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_options.h | 6 +-
app/test-crypto-perf/cperf_options_parsing.c | 67 +++-
app/test-crypto-perf/cperf_test_latency.c | 380 +++++++++++----------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 382 +++++++++++-----------
app/test-crypto-perf/cperf_test_throughput.c | 378 +++++++++++----------
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++
app/test-crypto-perf/cperf_test_verify.c | 399 ++++++++++++-----------
app/test-crypto-perf/main.c | 56 ++--
doc/guides/tools/cryptoperf.rst | 10 +-
11 files changed, 1031 insertions(+), 840 deletions(-)
--
2.9.4
* [dpdk-dev] [PATCH v3 1/7] app/crypto-perf: set AAD after the crypto operation
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
` (6 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Instead of prepending the AAD (Additional Authenticated Data)
in the mbuf, it is easier to set it after the crypto operation,
as it is a read-only value, like the IV; it is then not
restricted by the size of the mbuf headroom.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 16 ++++++++++++----
app/test-crypto-perf/cperf_test_latency.c | 16 ++++------------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 16 +++-------------
app/test-crypto-perf/cperf_test_throughput.c | 15 +++------------
app/test-crypto-perf/cperf_test_verify.c | 20 ++++++--------------
5 files changed, 28 insertions(+), 55 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 88fb972..5be20d9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -307,6 +307,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint16_t iv_offset)
{
uint16_t i;
+ uint16_t aad_offset = iv_offset +
+ RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16);
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
@@ -318,11 +320,12 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
- sym_op->aead.data.offset =
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ sym_op->aead.data.offset = 0;
- sym_op->aead.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
- sym_op->aead.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
+ sym_op->aead.aad.data = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, aad_offset);
+ sym_op->aead.aad.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+ aad_offset);
if (options->aead_op == RTE_CRYPTO_AEAD_OP_DECRYPT) {
sym_op->aead.digest.data = test_vector->digest.data;
@@ -360,6 +363,11 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
memcpy(iv_ptr, test_vector->aead_iv.data,
test_vector->aead_iv.length);
+
+ /* Copy AAD after the IV */
+ memcpy(ops[i]->sym->aead.aad.data,
+ test_vector->aad.data,
+ test_vector->aad.length);
}
}
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 58b21ab..2a46af9 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -174,16 +174,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -289,10 +279,12 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = sizeof(struct priv_op_data) +
+ uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
test_vector->cipher_iv.length +
test_vector->auth_iv.length +
- test_vector->aead_iv.length;
+ test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 0c949f0..ef1aa7e 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -191,16 +191,6 @@ cperf_mbuf_create(struct rte_mempool *mempool, uint32_t segments_nb,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(
- mbuf, RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -301,9 +291,9 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d", dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length +
- test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 512,
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 3bb1cb0..07aea6a 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -158,16 +158,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -270,8 +260,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index a314646..bc07eb6 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -162,16 +162,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -274,8 +264,10 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
512, priv_size, rte_socket_id());
@@ -362,9 +354,9 @@ cperf_verify_op(struct rte_crypto_op *op,
break;
case CPERF_AEAD:
cipher = 1;
- cipher_offset = vector->aad.length;
+ cipher_offset = 0;
auth = 1;
- auth_offset = vector->aad.length + options->test_buffer_size;
+ auth_offset = options->test_buffer_size;
break;
}
--
2.9.4
* [dpdk-dev] [PATCH v3 2/7] app/crypto-perf: parse AEAD data from vectors
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 3/7] app/crypto-perf: parse segment size Pablo de Lara
` (5 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara, stable
Since DPDK 17.08, there are specific parameters
for AEAD algorithms, like AES-GCM. When verifying
crypto operations with test vectors, the parser
was not reading AEAD data (such as the IV or key).
Fixes: 8a5b494a7f99 ("app/test-crypto-perf: add AEAD parameters")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 148a604..3952632 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -116,6 +116,20 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_key.data) {
+ printf("\naead_key =\n");
+ for (i = 0; i < test_vector->aead_key.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_key.length - 1))
+ printf("0x%02x", test_vector->aead_key.data[i]);
+ else
+ printf("0x%02x, ",
+ test_vector->aead_key.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->cipher_iv.data) {
printf("\ncipher_iv =\n");
for (i = 0; i < test_vector->cipher_iv.length; ++i) {
@@ -142,6 +156,19 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_iv.data) {
+ printf("\naead_iv =\n");
+ for (i = 0; i < test_vector->aead_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_iv.length - 1))
+ printf("0x%02x", test_vector->aead_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->aead_iv.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->ciphertext.data) {
printf("\nciphertext =\n");
for (i = 0; i < test_vector->ciphertext.length; ++i) {
@@ -345,6 +372,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
+ } else if (strstr(key_token, "aead_key")) {
+ rte_free(vector->aead_key.data);
+ vector->aead_key.data = data;
+ if (tc_found)
+ vector->aead_key.length = data_length;
+ else {
+ if (opts->aead_key_sz > data_length) {
+ printf("Global aead_key shorter than "
+ "aead_key_sz\n");
+ return -1;
+ }
+ vector->aead_key.length = opts->aead_key_sz;
+ }
+
} else if (strstr(key_token, "cipher_iv")) {
rte_free(vector->cipher_iv.data);
vector->cipher_iv.data = data;
@@ -373,6 +414,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_iv.length = opts->auth_iv_sz;
}
+ } else if (strstr(key_token, "aead_iv")) {
+ rte_free(vector->aead_iv.data);
+ vector->aead_iv.data = data;
+ if (tc_found)
+ vector->aead_iv.length = data_length;
+ else {
+ if (opts->aead_iv_sz > data_length) {
+ printf("Global aead iv shorter than "
+ "aead_iv_sz\n");
+ return -1;
+ }
+ vector->aead_iv.length = opts->aead_iv_sz;
+ }
+
} else if (strstr(key_token, "ciphertext")) {
rte_free(vector->ciphertext.data);
vector->ciphertext.data = data;
--
2.9.4
* [dpdk-dev] [PATCH v3 3/7] app/crypto-perf: parse segment size
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
` (4 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Instead of parsing the number of segments from the command line,
parse the segment size, as it is more usual to have a fixed
segment size; different packet sizes will then require
a different number of segments.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 24 ++++++++
app/test-crypto-perf/cperf_options.h | 4 +-
app/test-crypto-perf/cperf_options_parsing.c | 38 ++++++++----
app/test-crypto-perf/cperf_test_latency.c | 78 +++++++++++++++---------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 78 +++++++++++++++---------
app/test-crypto-perf/cperf_test_throughput.c | 78 +++++++++++++++---------
app/test-crypto-perf/cperf_test_verify.c | 78 +++++++++++++++---------
doc/guides/tools/cryptoperf.rst | 6 +-
8 files changed, 249 insertions(+), 135 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 5be20d9..ad32065 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -175,6 +175,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -256,6 +264,14 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -346,6 +362,14 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 2f42cb6..6d339f4 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -11,7 +11,7 @@
#define CPERF_TOTAL_OPS ("total-ops")
#define CPERF_BURST_SIZE ("burst-sz")
#define CPERF_BUFFER_SIZE ("buffer-sz")
-#define CPERF_SEGMENTS_NB ("segments-nb")
+#define CPERF_SEGMENT_SIZE ("segment-sz")
#define CPERF_DESC_NB ("desc-nb")
#define CPERF_DEVTYPE ("devtype")
@@ -71,7 +71,7 @@ struct cperf_options {
uint32_t pool_sz;
uint32_t total_ops;
- uint32_t segments_nb;
+ uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t nb_descriptors;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index f3508a4..d372691 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -328,17 +328,17 @@ parse_buffer_sz(struct cperf_options *opts, const char *arg)
}
static int
-parse_segments_nb(struct cperf_options *opts, const char *arg)
+parse_segment_sz(struct cperf_options *opts, const char *arg)
{
- int ret = parse_uint32_t(&opts->segments_nb, arg);
+ int ret = parse_uint32_t(&opts->segment_sz, arg);
if (ret) {
- RTE_LOG(ERR, USER1, "failed to parse segments number\n");
+ RTE_LOG(ERR, USER1, "failed to parse segment size\n");
return -1;
}
- if ((opts->segments_nb == 0) || (opts->segments_nb > 255)) {
- RTE_LOG(ERR, USER1, "invalid segments number specified\n");
+ if (opts->segment_sz == 0) {
+ RTE_LOG(ERR, USER1, "Segment size has to be bigger than 0\n");
return -1;
}
@@ -678,7 +678,7 @@ static struct option lgopts[] = {
{ CPERF_TOTAL_OPS, required_argument, 0, 0 },
{ CPERF_BURST_SIZE, required_argument, 0, 0 },
{ CPERF_BUFFER_SIZE, required_argument, 0, 0 },
- { CPERF_SEGMENTS_NB, required_argument, 0, 0 },
+ { CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DESC_NB, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
@@ -739,7 +739,11 @@ cperf_options_default(struct cperf_options *opts)
opts->min_burst_size = 32;
opts->inc_burst_size = 0;
- opts->segments_nb = 1;
+ /*
+ * Will be parsed from command line or set to
+ * maximum buffer size + digest, later
+ */
+ opts->segment_sz = 0;
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
@@ -783,7 +787,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_TOTAL_OPS, parse_total_ops },
{ CPERF_BURST_SIZE, parse_burst_sz },
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
- { CPERF_SEGMENTS_NB, parse_segments_nb },
+ { CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DESC_NB, parse_desc_nb },
{ CPERF_DEVTYPE, parse_device_type },
{ CPERF_OPTYPE, parse_op_type },
@@ -893,9 +897,21 @@ check_cipher_buffer_length(struct cperf_options *options)
int
cperf_options_check(struct cperf_options *options)
{
- if (options->segments_nb > options->min_buffer_size) {
+ if (options->op_type == CPERF_CIPHER_ONLY)
+ options->digest_sz = 0;
+
+ /*
+ * If segment size is not set, assume only one segment,
+ * big enough to contain the largest buffer and the digest
+ */
+ if (options->segment_sz == 0)
+ options->segment_sz = options->max_buffer_size +
+ options->digest_sz;
+
+ if (options->segment_sz < options->digest_sz) {
RTE_LOG(ERR, USER1,
- "Segments number greater than buffer size.\n");
+ "Segment size should be at least "
+ "the size of the digest\n");
return -EINVAL;
}
@@ -1019,7 +1035,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("%u ", opts->burst_size_list[size_idx]);
printf("\n");
}
- printf("\n# segments per buffer: %u\n", opts->segments_nb);
+ printf("\n# segment size: %u\n", opts->segment_sz);
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 2a46af9..7b9dc9f 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -116,18 +116,18 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint16_t segments_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -137,11 +137,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
segments_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -154,22 +161,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ test_data += remaining_bytes;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
segments_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segments_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -217,13 +234,14 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segments_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -236,7 +254,9 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segments_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -251,9 +271,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -267,8 +285,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -339,7 +357,7 @@ cperf_latency_test_runner(void *arg)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
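The pool sizing introduced above in cperf_latency_test_constructor() is a ceiling division of the total buffer size (payload plus digest) by the segment size. A minimal standalone sketch of that calculation (the function name is illustrative, not part of the app):

```c
#include <stdint.h>

/* Illustrative sketch (not part of the app): the ceiling division used
 * above to size the mbuf chain. It returns the number of segment_sz-byte
 * segments needed to hold max_size bytes (payload plus digest). */
static uint16_t
segments_needed(uint32_t max_size, uint32_t segment_sz)
{
	return (max_size % segment_sz) ?
			(uint16_t)(max_size / segment_sz + 1) :
			(uint16_t)(max_size / segment_sz);
}
```

For a 2048-byte buffer with a 16-byte digest and 512-byte segments, this yields 5 segments, which is what the `options->pool_sz * segments_nb` pool sizing above accounts for.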
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index ef1aa7e..872124f 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -133,18 +133,18 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx,
}
static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool, uint32_t segments_nb,
+cperf_mbuf_create(struct rte_mempool *mempool,
+ uint32_t segment_sz, uint16_t segments_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -154,11 +154,18 @@ cperf_mbuf_create(struct rte_mempool *mempool, uint32_t segments_nb,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
segments_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -171,22 +178,32 @@ cperf_mbuf_create(struct rte_mempool *mempool, uint32_t segments_nb,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ test_data += remaining_bytes;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
segments_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segments_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(
- mbuf, options->digest_sz);
+ if (m == NULL)
+ goto error;
+
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -209,13 +226,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
struct cperf_pmd_cyclecount_ctx *ctx = NULL;
unsigned int mbuf_idx = 0;
char pool_name[32] = "";
- uint16_t dataroom_sz = RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size /
- options->segments_nb) +
- (options->max_buffer_size %
- options->segments_nb) +
- options->digest_sz);
+ uint32_t dataroom_sz = RTE_PKTMBUF_HEADROOM + options->segment_sz;
/* preallocate buffers for crypto ops as they can get quite big */
size_t alloc_sz = sizeof(struct rte_crypto_op *) *
@@ -243,8 +254,12 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d", dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
+ options->pool_sz * segments_nb, 0, 0,
dataroom_sz, rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -256,7 +271,9 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segments_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -267,7 +284,8 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
dev_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz, 0, 0, dataroom_sz,
+ options->pool_sz, 0, 0,
+ RTE_PKTMBUF_HEADROOM + max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -280,8 +298,8 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1, options,
- test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -573,7 +591,7 @@ cperf_pmd_cyclecount_test_runner(void *test_ctx)
struct rte_cryptodev_info dev_info;
/* Check if source mbufs require coalescing */
- if (opts->segments_nb > 1) {
- if (opts->segment_sz < opts->max_buffer_size) {
rte_cryptodev_info_get(state.ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) ==
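The runner check above no longer counts segments; it compares the segment size against the maximum buffer size and then inspects the device feature flags. A simplified sketch of that decision, where `FF_MBUF_SCATTER_GATHER` is a hypothetical stand-in value for the real `RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER` flag:

```c
#include <stdint.h>

/* Hypothetical stand-in value for RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER,
 * used only to make this sketch self-contained. */
#define FF_MBUF_SCATTER_GATHER (1ULL << 9)

/* Linearize only when the buffer is split across segments and the
 * PMD does not advertise scatter-gather support. */
static int
needs_linearize(uint64_t feature_flags, uint32_t segment_sz,
		uint32_t max_buffer_size)
{
	if (segment_sz >= max_buffer_size)
		return 0; /* single segment, nothing to coalesce */
	return (feature_flags & FF_MBUF_SCATTER_GATHER) == 0;
}
```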
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 07aea6a..fcb82a8 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -100,18 +100,18 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint16_t segments_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -121,11 +121,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
segments_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -138,22 +145,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ test_data += remaining_bytes;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
segments_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segments_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
+
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -200,13 +217,14 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segments_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -218,7 +236,9 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segments_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -232,9 +252,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -248,8 +266,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -297,7 +315,7 @@ cperf_throughput_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
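The copy loop added to cperf_mbuf_create() above fills fixed-size segments until `remaining_bytes` reaches zero. An mbuf-free sketch of the same loop, with a flat byte array standing in for the chained segments (function name illustrative):

```c
#include <stdint.h>
#include <string.h>

/* Mbuf-free sketch of the copy loop above: fill fixed-size segments
 * (modelled as a flat array) until remaining_bytes reaches zero.
 * Returns the number of segments written. */
static uint32_t
copy_segmented(uint8_t *segments, uint32_t segment_sz,
		const uint8_t *test_data, uint32_t total_len)
{
	uint32_t remaining_bytes = total_len;
	uint32_t nb_segs = 0;

	while (remaining_bytes) {
		uint8_t *dst = segments + (size_t)nb_segs * segment_sz;
		uint32_t chunk = remaining_bytes <= segment_sz ?
				remaining_bytes : segment_sz;

		memcpy(dst, test_data, chunk);
		test_data += chunk;
		remaining_bytes -= chunk;
		nb_segs++;
	}
	return nb_segs;
}
```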
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index bc07eb6..ba9621b 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -104,18 +104,18 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
+ uint32_t segment_sz,
+ uint16_t segments_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -125,11 +125,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
segments_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -142,22 +149,32 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ test_data += remaining_bytes;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
segments_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segments_nb != 0) {
+ struct rte_mbuf *m;
- memcpy(mbuf_data, test_data, last_sz);
- }
+ m = rte_pktmbuf_alloc(mempool);
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ if (m == NULL)
+ goto error;
+
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -204,13 +221,14 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segments_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (ctx->pkt_mbuf_pool_in == NULL)
@@ -222,7 +240,9 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
+ ctx->pkt_mbuf_pool_in,
+ options->segment_sz,
+ segments_nb,
options, test_vector);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
@@ -236,9 +256,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ max_size,
rte_socket_id());
if (ctx->pkt_mbuf_pool_out == NULL)
@@ -252,8 +270,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
+ ctx->pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
@@ -405,7 +423,7 @@ cperf_verify_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 2f526c6..d587c20 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -172,9 +172,11 @@ The following are the application command-line options:
* List of values, up to 32 values, separated by commas (i.e. ``--buffer-sz 32,64,128``)
-* ``--segments-nb <n>``
+* ``--segment-sz <n>``
- Set the number of segments per packet.
+ Set the size of the segment to use, for Scatter Gather List testing.
+ By default, it is set to the maximum buffer size, including the digest size,
+ so a single segment is created.
* ``--devtype <name>``
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v3 4/7] app/crypto-perf: overwrite mbuf when verifying
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
` (2 preceding siblings ...)
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 3/7] app/crypto-perf: parse segment size Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
` (3 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
When running the verify test, the mbufs in the pool were
populated with the test vector loaded from a file.
To avoid limiting the number of operations to the pool size,
the mbufs are now rewritten with the test vector before
being linked to the crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options_parsing.c | 7 ------
app/test-crypto-perf/cperf_test_verify.c | 35 ++++++++++++++++++++++++++++
2 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index d372691..89f86a2 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -944,13 +944,6 @@ cperf_options_check(struct cperf_options *options)
}
if (options->test == CPERF_TEST_TYPE_VERIFY &&
- options->total_ops > options->pool_sz) {
- RTE_LOG(ERR, USER1, "Total number of ops must be less than or"
- " equal to the pool size.\n");
- return -EINVAL;
- }
-
- if (options->test == CPERF_TEST_TYPE_VERIFY &&
(options->inc_buffer_size != 0 ||
options->buffer_size_count > 1)) {
RTE_LOG(ERR, USER1, "Only one buffer size is allowed when "
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index ba9621b..dbfa661 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -187,6 +187,34 @@ cperf_mbuf_create(struct rte_mempool *mempool,
return NULL;
}
+static void
+cperf_mbuf_set(struct rte_mbuf *mbuf,
+ const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector)
+{
+ uint32_t segment_sz = options->segment_sz;
+ uint8_t *mbuf_data;
+ uint8_t *test_data =
+ (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ test_vector->plaintext.data :
+ test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
+
+ while (remaining_bytes) {
+ mbuf_data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ return;
+ }
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ mbuf = mbuf->next;
+ }
+}
+
void *
cperf_verify_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -469,6 +497,13 @@ cperf_verify_test_runner(void *test_ctx)
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
+
+ /* Populate the mbuf with the test vector, for verification */
+ for (i = 0; i < ops_needed; i++)
+ cperf_mbuf_set(ops[i]->sym->m_src,
+ ctx->options,
+ ctx->test_vector);
+
#ifdef CPERF_LINEARIZATION_ENABLE
if (linearize) {
/* PMD doesn't support scatter-gather and source buffer
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
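The cperf_mbuf_set() helper added in this patch walks the mbuf chain and rewrites each segment with the test vector before every burst. A self-contained sketch of that traversal, using a hypothetical two-field struct in place of the real rte_mbuf:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for a chained rte_mbuf; only here to
 * illustrate the traversal, not the real structure. */
struct seg {
	uint8_t data[64];
	struct seg *next;
};

/* Mirrors the logic of cperf_mbuf_set() above: rewrite each segment of
 * the chain with the test vector until all bytes have been written. */
static void
chain_set(struct seg *s, uint32_t segment_sz,
		const uint8_t *test_data, uint32_t total_len)
{
	uint32_t remaining_bytes = total_len;

	while (remaining_bytes && s != NULL) {
		if (remaining_bytes <= segment_sz) {
			memcpy(s->data, test_data, remaining_bytes);
			return;
		}
		memcpy(s->data, test_data, segment_sz);
		remaining_bytes -= segment_sz;
		test_data += segment_sz;
		s = s->next;
	}
}
```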
* [dpdk-dev] [PATCH v3 5/7] app/crypto-perf: do not populate the mbufs at init
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
` (3 preceding siblings ...)
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
` (2 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
For the throughput and latency tests, it is not required
to populate the mbufs with any test vector.
For the verify test, there is already a function that rewrites
the mbufs every time they are about to be used with
crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_latency.c | 31 +++++++-----------------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 31 +++++++-----------------
app/test-crypto-perf/cperf_test_throughput.c | 31 +++++++-----------------
app/test-crypto-perf/cperf_test_verify.c | 31 +++++++-----------------
4 files changed, 36 insertions(+), 88 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 7b9dc9f..acd8545 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -118,15 +118,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint16_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -137,15 +132,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segments_nb--;
while (remaining_bytes) {
@@ -161,15 +152,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segments_nb--;
}
@@ -257,7 +244,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segments_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -286,7 +273,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 872124f..962dc69 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -135,15 +135,10 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx,
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz, uint16_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -154,15 +149,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segments_nb--;
while (remaining_bytes) {
@@ -178,15 +169,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segments_nb--;
}
@@ -274,7 +261,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segments_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -299,7 +286,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index fcb82a8..e4da0d5 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -102,15 +102,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint16_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -121,15 +116,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segments_nb--;
while (remaining_bytes) {
@@ -145,15 +136,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segments_nb--;
}
@@ -239,7 +226,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segments_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -267,7 +254,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index dbfa661..3159361 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -106,15 +106,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint16_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -125,15 +120,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segments_nb--;
while (remaining_bytes) {
@@ -149,15 +140,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segments_nb--;
}
@@ -271,7 +258,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->pkt_mbuf_pool_in,
options->segment_sz,
segments_nb,
- options, test_vector);
+ options);
if (ctx->mbufs_in[mbuf_idx] == NULL)
goto err;
}
@@ -299,7 +286,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
ctx->pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if (ctx->mbufs_out[mbuf_idx] == NULL)
goto err;
} else {
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
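With the copies removed by this patch, cperf_mbuf_create() only sizes the chain, so the remaining bookkeeping reduces to counting segment allocations. A standalone sketch of that logic (function name illustrative, plain integers standing in for mbufs):

```c
#include <stdint.h>

/* Count how many segment allocations the simplified (copy-free)
 * cperf_mbuf_create() loop above performs for a given payload size. */
static uint32_t
count_chain_allocs(uint32_t max_buffer_size, uint32_t segment_sz)
{
	uint32_t remaining_bytes = max_buffer_size;
	uint32_t allocs = 1; /* head mbuf */

	if (max_buffer_size <= segment_sz)
		remaining_bytes = 0;
	else
		remaining_bytes -= segment_sz;

	while (remaining_bytes) {
		allocs++;
		remaining_bytes = remaining_bytes <= segment_sz ?
				0 : remaining_bytes - segment_sz;
	}
	return allocs;
}
```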
* [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
` (4 preceding siblings ...)
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-26 8:42 ` Akhil Goyal
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
7 siblings, 1 reply; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Add a new parameter "qp-nb" to the crypto performance app,
to create multiple queue pairs per device.
This new parameter is useful for having multiple logical
cores use a single crypto device, without needing
to initialize one crypto device per core.
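As a rough sketch of the scheme (the helper name and struct below are invented for illustration, not part of the patch or of the DPDK API): the round-robin qp_id/cdev_index bookkeeping in main() is equivalent to mapping lcore slot i onto queue pair (i % nb_qps) of enabled device (i / nb_qps):

```c
#include <stdint.h>

/* Illustrative only: closed-form equivalent of the incremental
 * qp_id/cdev_index round-robin over lcore slots in main(). */
struct qp_map {
	uint8_t cdev_index;	/* index into enabled_cdevs[] */
	uint8_t qp_id;		/* queue pair within that device */
};

static struct qp_map
lcore_slot_to_qp(uint16_t slot, uint32_t nb_qps)
{
	struct qp_map m = {
		.cdev_index = (uint8_t)(slot / nb_qps),
		.qp_id = (uint8_t)(slot % nb_qps),
	};
	return m;
}
```

For example, with --qp-nb 2, lcore slots 0 and 1 drive queue pairs 0 and 1 of the first enabled device, and slots 2 and 3 drive the second device.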
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options.h | 2 +
app/test-crypto-perf/cperf_options_parsing.c | 22 ++++++++++
app/test-crypto-perf/cperf_test_latency.c | 14 +++---
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 7 +--
app/test-crypto-perf/cperf_test_throughput.c | 14 +++---
app/test-crypto-perf/cperf_test_verify.c | 14 +++---
app/test-crypto-perf/main.c | 56 ++++++++++++++----------
doc/guides/tools/cryptoperf.rst | 4 ++
8 files changed, 84 insertions(+), 49 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 6d339f4..468d5e2 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -15,6 +15,7 @@
#define CPERF_DESC_NB ("desc-nb")
#define CPERF_DEVTYPE ("devtype")
+#define CPERF_QP_NB ("qp-nb")
#define CPERF_OPTYPE ("optype")
#define CPERF_SESSIONLESS ("sessionless")
#define CPERF_OUT_OF_PLACE ("out-of-place")
@@ -74,6 +75,7 @@ struct cperf_options {
uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t nb_descriptors;
+ uint32_t nb_qps;
uint32_t sessionless:1;
uint32_t out_of_place:1;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 89f86a2..441cd61 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -364,6 +364,24 @@ parse_desc_nb(struct cperf_options *opts, const char *arg)
}
static int
+parse_qp_nb(struct cperf_options *opts, const char *arg)
+{
+ int ret = parse_uint32_t(&opts->nb_qps, arg);
+
+ if (ret) {
+ RTE_LOG(ERR, USER1, "failed to parse number of queue pairs\n");
+ return -1;
+ }
+
+ if ((opts->nb_qps == 0) || (opts->nb_qps > 256)) {
+ RTE_LOG(ERR, USER1, "invalid number of queue pairs specified\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
parse_device_type(struct cperf_options *opts, const char *arg)
{
if (strlen(arg) > (sizeof(opts->device_type) - 1))
@@ -680,6 +698,7 @@ static struct option lgopts[] = {
{ CPERF_BUFFER_SIZE, required_argument, 0, 0 },
{ CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DESC_NB, required_argument, 0, 0 },
+ { CPERF_QP_NB, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
{ CPERF_OPTYPE, required_argument, 0, 0 },
@@ -747,6 +766,7 @@ cperf_options_default(struct cperf_options *opts)
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
+ opts->nb_qps = 1;
opts->op_type = CPERF_CIPHER_THEN_AUTH;
@@ -789,6 +809,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
{ CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DESC_NB, parse_desc_nb },
+ { CPERF_QP_NB, parse_qp_nb },
{ CPERF_DEVTYPE, parse_device_type },
{ CPERF_OPTYPE, parse_op_type },
{ CPERF_SESSIONLESS, parse_sessionless },
@@ -1032,6 +1053,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
+ printf("# number of queue pairs per device: %u\n", opts->nb_qps);
printf("# crypto operation: %s\n", cperf_op_type_strs[opts->op_type]);
printf("# sessionless: %s\n", opts->sessionless ? "yes" : "no");
printf("# out of place: %s\n", opts->out_of_place ? "yes" : "no");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index acd8545..99b92d3 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -218,8 +218,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
@@ -252,8 +252,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
snprintf(pool_name, sizeof(pool_name),
- "cperf_pool_out_cdev_%d",
- dev_id);
+ "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -281,8 +281,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
test_vector->cipher_iv.length +
@@ -583,7 +583,5 @@ cperf_latency_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_latency_test_free(ctx, ctx->options->pool_sz);
}
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 962dc69..5940836 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -239,7 +239,8 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d", dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
@@ -267,8 +268,8 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
}
if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(pool_name,
options->pool_sz, 0, 0,
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index e4da0d5..9255915 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -201,8 +201,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
@@ -233,8 +233,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -262,8 +262,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
@@ -530,7 +530,5 @@ cperf_throughput_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_throughput_test_free(ctx, ctx->options->pool_sz);
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 3159361..dd97354 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -233,8 +233,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
@@ -265,8 +265,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
+ dev_id, qp_id);
ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -294,8 +294,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
@@ -626,7 +626,5 @@ cperf_verify_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_verify_test_free(ctx, ctx->options->pool_sz);
}
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index ffa7180..97dc19c 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -90,7 +90,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
struct rte_mempool *session_pool_socket[])
{
uint8_t enabled_cdev_count = 0, nb_lcores, cdev_id;
- unsigned int i;
+ unsigned int i, j;
int ret;
enabled_cdev_count = rte_cryptodev_devices_get(opts->device_type,
@@ -125,8 +125,8 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
struct rte_cryptodev_config conf = {
- .nb_queue_pairs = 1,
- .socket_id = socket_id
+ .nb_queue_pairs = opts->nb_qps,
+ .socket_id = socket_id
};
struct rte_cryptodev_qp_conf qp_conf = {
@@ -165,14 +165,16 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
return -EINVAL;
}
- ret = rte_cryptodev_queue_pair_setup(cdev_id, 0,
+ for (j = 0; j < opts->nb_qps; j++) {
+ ret = rte_cryptodev_queue_pair_setup(cdev_id, j,
&qp_conf, socket_id,
session_pool_socket[socket_id]);
if (ret < 0) {
printf("Failed to setup queue pair %u on "
- "cryptodev %u", 0, cdev_id);
+ "cryptodev %u", j, cdev_id);
return -EINVAL;
}
+ }
ret = rte_cryptodev_start(cdev_id);
if (ret < 0) {
@@ -471,23 +473,29 @@ main(int argc, char **argv)
if (!opts.silent)
show_test_vector(t_vec);
+ uint16_t total_nb_qps = nb_cryptodevs * opts.nb_qps;
+
i = 0;
+ uint8_t qp_id = 0, cdev_index = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
- cdev_id = enabled_cdevs[i];
+ cdev_id = enabled_cdevs[cdev_index];
uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
- ctx[cdev_id] = cperf_testmap[opts.test].constructor(
- session_pool_socket[socket_id], cdev_id, 0,
+ ctx[i] = cperf_testmap[opts.test].constructor(
+ session_pool_socket[socket_id], cdev_id, qp_id,
&opts, t_vec, &op_fns);
- if (ctx[cdev_id] == NULL) {
+ if (ctx[i] == NULL) {
RTE_LOG(ERR, USER1, "Test run constructor failed\n");
goto err;
}
+ qp_id = (qp_id + 1) % opts.nb_qps;
+ if (qp_id == 0)
+ cdev_index++;
i++;
}
@@ -501,19 +509,17 @@ main(int argc, char **argv)
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
- cdev_id = enabled_cdevs[i];
-
rte_eal_remote_launch(cperf_testmap[opts.test].runner,
- ctx[cdev_id], lcore_id);
+ ctx[i], lcore_id);
i++;
}
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
rte_eal_wait_lcore(lcore_id);
i++;
@@ -532,15 +538,17 @@ main(int argc, char **argv)
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
- cdev_id = enabled_cdevs[i];
-
- cperf_testmap[opts.test].destructor(ctx[cdev_id]);
+ cperf_testmap[opts.test].destructor(ctx[i]);
i++;
}
+ for (i = 0; i < nb_cryptodevs &&
+ i < RTE_CRYPTO_MAX_DEVS; i++)
+ rte_cryptodev_stop(enabled_cdevs[i]);
+
free_test_vector(t_vec, &opts);
printf("\n");
@@ -549,16 +557,20 @@ main(int argc, char **argv)
err:
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
cdev_id = enabled_cdevs[i];
- if (ctx[cdev_id] && cperf_testmap[opts.test].destructor)
- cperf_testmap[opts.test].destructor(ctx[cdev_id]);
+ if (ctx[i] && cperf_testmap[opts.test].destructor)
+ cperf_testmap[opts.test].destructor(ctx[i]);
i++;
}
+ for (i = 0; i < nb_cryptodevs &&
+ i < RTE_CRYPTO_MAX_DEVS; i++)
+ rte_cryptodev_stop(enabled_cdevs[i]);
+
free_test_vector(t_vec, &opts);
printf("\n");
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index d587c20..b114b15 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -194,6 +194,10 @@ The following are the application command-line options:
crypto_armv8
crypto_scheduler
+* ``--qp-nb <n>``
+
+ Set the number of queue pairs per device (1 by default).
+
* ``--optype <name>``
Set operation type, where ``name`` is one of the following::
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
` (5 preceding siblings ...)
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
@ 2017-09-22 7:55 ` Pablo de Lara
2017-09-26 9:21 ` Akhil Goyal
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
7 siblings, 1 reply; 49+ messages in thread
From: Pablo de Lara @ 2017-09-22 7:55 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
In order to improve memory utilization, a single mempool
is created, with each object containing the crypto operation
and its mbufs (one if the operation is in-place,
two if out-of-place).
This way, a single object is allocated and freed
per operation, reducing the number of mempool accesses
and improving cache usage, which helps scalability.
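As a rough sketch of the offset arithmetic used by the new test constructors (the function and struct names below are invented, and the sizes in the usage are hypothetical stand-ins for sizeof(struct rte_crypto_op), the mbuf header size, etc.), each mempool element is laid out as [ cache-line-padded crypto op + private data | source mbuf segment(s) | optional single-segment destination buffer ]:

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE_SZ 64
#define CACHE_LINE_ROUNDUP(x) \
	(((x) + CACHE_LINE_SZ - 1) & ~(size_t)(CACHE_LINE_SZ - 1))

struct obj_layout {
	size_t src_buf_offset;	/* start of the source mbuf chain */
	size_t dst_buf_offset;	/* 0 when operating in-place */
	size_t obj_size;	/* total mempool element size */
};

/* Illustrative only: mirrors the layout computed in the
 * constructors, with all sizes passed in as plain numbers. */
static struct obj_layout
compute_obj_layout(size_t op_sz, size_t priv_sz, size_t mbuf_hdr_sz,
		size_t segment_sz, uint16_t segments_nb,
		size_t max_size, int out_of_place)
{
	struct obj_layout l;
	size_t mbuf_sz = mbuf_hdr_sz + segment_sz;

	l.src_buf_offset = CACHE_LINE_ROUNDUP(op_sz + priv_sz);
	l.obj_size = l.src_buf_offset + mbuf_sz * segments_nb;
	if (out_of_place) {
		/* destination mbuf placed right after the source chain */
		l.dst_buf_offset = l.src_buf_offset +
				mbuf_sz * segments_nb;
		l.obj_size += max_size;
	} else {
		l.dst_buf_offset = 0;
	}
	return l;
}
```

With a hypothetical 144-byte op, 48 bytes of private data, a 128-byte mbuf header and one 2048-byte segment, the op region pads to 192 bytes and the in-place object is 2368 bytes; out-of-place, the destination buffer starts at offset 2368.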
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 96 ++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_test_latency.c | 361 +++++++++++-----------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 364 +++++++++++-----------
app/test-crypto-perf/cperf_test_throughput.c | 358 +++++++++++-----------
app/test-crypto-perf/cperf_test_verify.c | 367 +++++++++++------------
6 files changed, 793 insertions(+), 755 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index ad32065..f76dbdd 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -37,7 +37,7 @@
static int
cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -48,10 +48,18 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
sym_op->cipher.data.length = options->test_buffer_size;
@@ -63,7 +71,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_null_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -74,10 +82,18 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* auth parameters */
sym_op->auth.data.length = options->test_buffer_size;
@@ -89,7 +105,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -100,10 +116,18 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -132,7 +156,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -143,10 +167,18 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
if (test_vector->auth_iv.length) {
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
@@ -167,9 +199,9 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -219,7 +251,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -230,10 +262,18 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -256,9 +296,9 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -316,7 +356,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_aead(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -329,10 +369,18 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if out-of-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
@@ -354,9 +402,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 1f8fa93..94951cc 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -47,7 +47,7 @@ typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
uint16_t iv_offset);
typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 99b92d3..ed62cbf 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -50,17 +50,15 @@ struct cperf_latency_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
struct cperf_op_result *res;
@@ -74,116 +72,127 @@ struct priv_op_data {
#define min(a, b) (a < b ? (uint64_t)a : (uint64_t)b)
static void
-cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
+cperf_latency_test_free(struct cperf_latency_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx->res);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint16_t segments_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segments_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -194,8 +203,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_latency_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
+ int ret;
ctx = rte_malloc(NULL, sizeof(struct cperf_latency_ctx), 0);
if (ctx == NULL)
@@ -218,84 +227,74 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = sizeof(struct priv_op_data) +
+ test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segments_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
}
- if (options->out_of_place == 1) {
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- snprintf(pool_name, sizeof(pool_name),
- "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(sizeof(struct priv_op_data) +
- test_vector->cipher_iv.length +
- test_vector->auth_iv.length +
- test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
-
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
ctx->res = rte_malloc(NULL, sizeof(struct cperf_op_result) *
ctx->options->total_ops, 0);
@@ -305,7 +304,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
return ctx;
err:
- cperf_latency_test_free(ctx, mbuf_idx);
+ cperf_latency_test_free(ctx);
return NULL;
}
@@ -370,7 +369,7 @@ cperf_latency_test_runner(void *arg)
while (test_burst_size <= ctx->options->max_burst_size) {
uint64_t ops_enqd = 0, ops_deqd = 0;
- uint64_t m_idx = 0, b_idx = 0;
+ uint64_t b_idx = 0;
uint64_t tsc_val, tsc_end, tsc_start;
uint64_t tsc_max = 0, tsc_min = ~0UL, tsc_tot = 0, tsc_idx = 0;
@@ -385,11 +384,9 @@ cperf_latency_test_runner(void *arg)
ctx->options->total_ops -
enqd_tot;
- /* Allocate crypto ops from pool */
- if (burst_size != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ burst_size) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -399,8 +396,8 @@ cperf_latency_test_runner(void *arg)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
burst_size, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -429,7 +426,7 @@ cperf_latency_test_runner(void *arg)
/* Free memory for not enqueued operations */
if (ops_enqd != burst_size)
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)&ops[ops_enqd],
burst_size - ops_enqd);
@@ -445,16 +442,11 @@ cperf_latency_test_runner(void *arg)
}
if (likely(ops_deqd)) {
- /*
- * free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
+ /* free crypto ops so they can be reused. */
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -466,9 +458,6 @@ cperf_latency_test_runner(void *arg)
enqd_max = max(ops_enqd, enqd_max);
enqd_min = min(ops_enqd, enqd_min);
- m_idx += ops_enqd;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
b_idx++;
}
@@ -487,7 +476,7 @@ cperf_latency_test_runner(void *arg)
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -583,5 +572,5 @@ cperf_latency_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_latency_test_free(ctx, ctx->options->pool_sz);
+ cperf_latency_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 5940836..05f612a 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -50,12 +50,7 @@ struct cperf_pmd_cyclecount_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_crypto_op **ops;
struct rte_crypto_op **ops_processed;
@@ -63,6 +58,9 @@ struct cperf_pmd_cyclecount_ctx {
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
@@ -86,121 +84,132 @@ static const uint16_t iv_offset =
sizeof(struct rte_crypto_op) + sizeof(struct rte_crypto_sym_op);
static void
-cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx,
- uint32_t mbuf_nb)
+cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
if (ctx->ops)
rte_free(ctx->ops);
if (ctx->ops_processed)
rte_free(ctx->ops_processed);
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz, uint16_t segments_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ remaining_segments--;
+ if (remaining_segments > 0) {
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ } else
+ m->next = NULL;
+
+ } while (remaining_segments > 0);
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segments_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -211,9 +220,8 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_pmd_cyclecount_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
- uint32_t dataroom_sz = RTE_PKTMBUF_HEADROOM + options->segment_sz;
+ int ret;
/* preallocate buffers for crypto ops as they can get quite big */
size_t alloc_sz = sizeof(struct rte_crypto_op *) *
@@ -239,73 +247,73 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segments_nb, 0, 0,
- dataroom_sz, rte_socket_id());
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segments_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size + sizeof(struct rte_mbuf);
}
- if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM + max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d", dev_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 512,
- priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
ctx->ops = rte_malloc("ops", alloc_sz, 0);
if (!ctx->ops)
@@ -318,7 +326,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
return ctx;
err:
- cperf_pmd_cyclecount_test_free(ctx, mbuf_idx);
+ cperf_pmd_cyclecount_test_free(ctx);
return NULL;
}
@@ -339,16 +347,22 @@ pmd_cyclecount_bench_ops(struct pmd_cyclecount_state *state, uint32_t cur_op,
test_burst_size);
struct rte_crypto_op **ops = &state->ctx->ops[cur_iter_op];
- if (burst_size != rte_crypto_op_bulk_alloc(
- state->ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size))
- return -1;
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(state->ctx->pool, (void **)ops,
+ burst_size) != 0) {
+ RTE_LOG(ERR, USER1,
+ "Failed to allocate more crypto operations "
+ "from the crypto operation pool.\n"
+ "Consider increasing the pool size "
+ "with --pool-sz\n");
+ return -1;
+ }
/* Setup crypto op, attach mbuf etc */
(state->ctx->populate_ops)(ops,
- &state->ctx->mbufs_in[cur_iter_op],
- &state->ctx->mbufs_out[cur_iter_op], burst_size,
+ state->ctx->src_buf_offset,
+ state->ctx->dst_buf_offset,
+ burst_size,
state->ctx->sess, state->opts,
state->ctx->test_vector, iv_offset);
@@ -362,7 +376,7 @@ pmd_cyclecount_bench_ops(struct pmd_cyclecount_state *state, uint32_t cur_op,
}
}
#endif /* CPERF_LINEARIZATION_ENABLE */
- rte_mempool_put_bulk(state->ctx->crypto_op_pool, (void **)ops,
+ rte_mempool_put_bulk(state->ctx->pool, (void **)ops,
burst_size);
}
@@ -382,16 +396,22 @@ pmd_cyclecount_build_ops(struct pmd_cyclecount_state *state,
iter_ops_needed - cur_iter_op, test_burst_size);
struct rte_crypto_op **ops = &state->ctx->ops[cur_iter_op];
- if (burst_size != rte_crypto_op_bulk_alloc(
- state->ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size))
- return -1;
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(state->ctx->pool, (void **)ops,
+ burst_size) != 0) {
+ RTE_LOG(ERR, USER1,
+ "Failed to allocate more crypto operations "
+ "from the crypto operation pool.\n"
+ "Consider increasing the pool size "
+ "with --pool-sz\n");
+ return -1;
+ }
/* Setup crypto op, attach mbuf etc */
(state->ctx->populate_ops)(ops,
- &state->ctx->mbufs_in[cur_iter_op],
- &state->ctx->mbufs_out[cur_iter_op], burst_size,
+ state->ctx->src_buf_offset,
+ state->ctx->dst_buf_offset,
+ burst_size,
state->ctx->sess, state->opts,
state->ctx->test_vector, iv_offset);
}
@@ -540,7 +560,7 @@ pmd_cyclecount_bench_burst_sz(
* we may not have processed all ops that we allocated, so
* free everything we've allocated.
*/
- rte_mempool_put_bulk(state->ctx->crypto_op_pool,
+ rte_mempool_put_bulk(state->ctx->pool,
(void **)state->ctx->ops, iter_ops_allocd);
}
@@ -667,5 +687,5 @@ cperf_pmd_cyclecount_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_pmd_cyclecount_test_free(ctx, ctx->options->pool_sz);
+ cperf_pmd_cyclecount_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 9255915..0d33fab 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -43,131 +43,140 @@ struct cperf_throughput_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
static void
-cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
+cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint16_t segments_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ remaining_segments--;
+ if (remaining_segments > 0) {
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ } else
+ m->next = NULL;
+
+ } while (remaining_segments > 0);
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segments_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
void *
@@ -178,8 +187,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_throughput_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
+ int ret;
ctx = rte_malloc(NULL, sizeof(struct cperf_throughput_ctx), 0);
if (ctx == NULL)
@@ -201,83 +210,77 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segments_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size + sizeof(struct rte_mbuf);
}
- if (options->out_of_place == 1) {
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
return ctx;
err:
- cperf_throughput_test_free(ctx, mbuf_idx);
+ cperf_throughput_test_free(ctx);
return NULL;
}
@@ -329,7 +332,7 @@ cperf_throughput_test_runner(void *test_ctx)
uint64_t ops_enqd = 0, ops_enqd_total = 0, ops_enqd_failed = 0;
uint64_t ops_deqd = 0, ops_deqd_total = 0, ops_deqd_failed = 0;
- uint64_t m_idx = 0, tsc_start, tsc_end, tsc_duration;
+ uint64_t tsc_start, tsc_end, tsc_duration;
uint16_t ops_unused = 0;
@@ -345,11 +348,9 @@ cperf_throughput_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -359,10 +360,11 @@ cperf_throughput_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
- ops_needed, ctx->sess, ctx->options,
- ctx->test_vector, iv_offset);
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
+ ops_needed, ctx->sess,
+ ctx->options, ctx->test_vector,
+ iv_offset);
/**
* When ops_needed is smaller than ops_enqd, the
@@ -407,12 +409,8 @@ cperf_throughput_test_runner(void *test_ctx)
ops_processed, test_burst_size);
if (likely(ops_deqd)) {
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
@@ -425,9 +423,6 @@ cperf_throughput_test_runner(void *test_ctx)
ops_deqd_failed++;
}
- m_idx += ops_needed;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
}
/* Dequeue any operations still in the crypto device */
@@ -442,9 +437,8 @@ cperf_throughput_test_runner(void *test_ctx)
if (ops_deqd == 0)
ops_deqd_failed++;
else {
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
-
ops_deqd_total += ops_deqd;
}
}
@@ -530,5 +524,5 @@ cperf_throughput_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_throughput_test_free(ctx, ctx->options->pool_sz);
+ cperf_throughput_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index dd97354..9a9faf5 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -43,135 +43,140 @@ struct cperf_verify_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
-struct cperf_op_result {
- enum rte_crypto_op_status status;
-};
-
static void
-cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
+cperf_verify_test_free(struct cperf_verify_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint16_t segments_nb,
- const struct cperf_options *options)
-{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
+{
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segments_nb != 0) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
static void
@@ -210,8 +215,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_verify_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
+ int ret;
ctx = rte_malloc(NULL, sizeof(struct cperf_verify_ctx), 0);
if (ctx == NULL)
@@ -224,7 +229,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->options = options;
ctx->test_vector = test_vector;
- /* IV goes at the end of the cryptop operation */
+ /* IV goes at the end of the crypto operation */
uint16_t iv_offset = sizeof(struct rte_crypto_op) +
sizeof(struct rte_crypto_sym_op);
@@ -233,83 +238,77 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d_qp_%d",
- dev_id, qp_id);
-
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in,
- options->segment_sz,
- segments_nb,
- options);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
+ ctx->src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ ctx->dst_buf_offset = ctx->src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = ctx->dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
}
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- max_size,
- rte_socket_id());
+ ctx->pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
+ if (ctx->pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ goto err;
}
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, max_size,
- 1, options);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
+ ret = rte_mempool_set_ops_byname(ctx->pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
+ goto err;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d_qp_%d",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ ret = rte_mempool_populate_default(ctx->pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
goto err;
+ }
+
+ rte_mempool_obj_iter(ctx->pool, mempool_obj_init, (void *)&params);
return ctx;
err:
- cperf_verify_test_free(ctx, mbuf_idx);
+ cperf_verify_test_free(ctx);
return NULL;
}
@@ -425,7 +424,7 @@ cperf_verify_test_runner(void *test_ctx)
static int only_once;
- uint64_t i, m_idx = 0;
+ uint64_t i;
uint16_t ops_unused = 0;
struct rte_crypto_op *ops[ctx->options->max_burst_size];
@@ -465,11 +464,9 @@ cperf_verify_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -479,8 +476,8 @@ cperf_verify_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -520,10 +517,6 @@ cperf_verify_test_runner(void *test_ctx)
ops_deqd = rte_cryptodev_dequeue_burst(ctx->dev_id, ctx->qp_id,
ops_processed, ctx->options->max_burst_size);
- m_idx += ops_needed;
- if (m_idx + ctx->options->max_burst_size > ctx->options->pool_sz)
- m_idx = 0;
-
if (ops_deqd == 0) {
/**
* Count dequeue polls which didn't return any
@@ -538,13 +531,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -566,13 +556,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -626,5 +613,5 @@ cperf_verify_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_verify_test_free(ctx, ctx->options->pool_sz);
+ cperf_verify_test_free(ctx);
}
--
2.9.4
* Re: [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
@ 2017-09-26 8:42 ` Akhil Goyal
2017-10-04 10:25 ` De Lara Guarch, Pablo
0 siblings, 1 reply; 49+ messages in thread
From: Akhil Goyal @ 2017-09-26 8:42 UTC (permalink / raw)
To: Pablo de Lara, declan.doherty; +Cc: dev
Hi Pablo,
On 9/22/2017 1:25 PM, Pablo de Lara wrote:
> Add parameter "qps" in crypto performance app,
> to create multiple queue pairs per device.
>
> This new parameter is useful to have multiple logical
> cores using a single crypto device, without needing
> to initialize a crypto device per core.
>
> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> ---
> app/test-crypto-perf/cperf_options.h | 2 +
> app/test-crypto-perf/cperf_options_parsing.c | 22 ++++++++++
> app/test-crypto-perf/cperf_test_latency.c | 14 +++---
> app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 7 +--
> app/test-crypto-perf/cperf_test_throughput.c | 14 +++---
> app/test-crypto-perf/cperf_test_verify.c | 14 +++---
> app/test-crypto-perf/main.c | 56 ++++++++++++++----------
> doc/guides/tools/cryptoperf.rst | 4 ++
> 8 files changed, 84 insertions(+), 49 deletions(-)
>
> diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
> index 6d339f4..468d5e2 100644
> --- a/app/test-crypto-perf/cperf_options.h
> +++ b/app/test-crypto-perf/cperf_options.h
> @@ -15,6 +15,7 @@
> #define CPERF_DESC_NB ("desc-nb")
>
> #define CPERF_DEVTYPE ("devtype")
> +#define CPERF_QP_NB ("qp-nb")
> #define CPERF_OPTYPE ("optype")
> #define CPERF_SESSIONLESS ("sessionless")
> #define CPERF_OUT_OF_PLACE ("out-of-place")
> @@ -74,6 +75,7 @@ struct cperf_options {
> uint32_t segment_sz;
> uint32_t test_buffer_size;
> uint32_t nb_descriptors;
> + uint32_t nb_qps;
>
> uint32_t sessionless:1;
> uint32_t out_of_place:1;
> diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
> index 89f86a2..441cd61 100644
> --- a/app/test-crypto-perf/cperf_options_parsing.c
> +++ b/app/test-crypto-perf/cperf_options_parsing.c
> @@ -364,6 +364,24 @@ parse_desc_nb(struct cperf_options *opts, const char *arg)
> }
>
> static int
> +parse_qp_nb(struct cperf_options *opts, const char *arg)
> +{
> + int ret = parse_uint32_t(&opts->nb_qps, arg);
> +
> + if (ret) {
> + RTE_LOG(ERR, USER1, "failed to parse number of queue pairs\n");
> + return -1;
> + }
> +
> + if ((opts->nb_qps == 0) || (opts->nb_qps > 256)) {
Shouldn't this be a macro for the max nb_qps?
Also, a generic comment on this patch: why do we need an explicit
parameter for nb-qps? Can't we do it similar to ipsec-secgw,
which takes the devices and maps the queues to cores as per the
devices' capabilities?
-Akhil
* Re: [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool Pablo de Lara
@ 2017-09-26 9:21 ` Akhil Goyal
2017-10-04 7:47 ` De Lara Guarch, Pablo
0 siblings, 1 reply; 49+ messages in thread
From: Akhil Goyal @ 2017-09-26 9:21 UTC (permalink / raw)
To: Pablo de Lara, declan.doherty; +Cc: dev
On 9/22/2017 1:25 PM, Pablo de Lara wrote:
> In order to improve memory utilization, a single mempool
> is created, containing the crypto operation and mbufs
> (one if operation is in-place, two if out-of-place).
> This way, a single object is allocated and freed
> per operation, reducing the amount of memory in cache,
> which improves scalability.
>
> Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> ---
> app/test-crypto-perf/cperf_ops.c | 96 ++++--
> app/test-crypto-perf/cperf_ops.h | 2 +-
> app/test-crypto-perf/cperf_test_latency.c | 361 +++++++++++-----------
> app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 364 +++++++++++-----------
> app/test-crypto-perf/cperf_test_throughput.c | 358 +++++++++++-----------
> app/test-crypto-perf/cperf_test_verify.c | 367 +++++++++++------------
> 6 files changed, 793 insertions(+), 755 deletions(-)
>
The patch set looks good to me, except for one comment on the 6th patch
of the series and one comment below.
Is it possible to move the common code to a single place for all the
latency, cycle_count, throughput and verify cases?
I can see a lot of duplicate code in these files.
-Akhil
* [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
` (6 preceding siblings ...)
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 1/8] app/crypto-perf: refactor common test code Pablo de Lara
` (9 more replies)
7 siblings, 10 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
This patchset includes some improvements in the Crypto
performance application, including app fixes and new parameter additions.
The last patch, in particular, introduces performance improvements.
Currently, crypto operations are allocated in one mempool and mbufs
in a different one. The mbufs are then extracted into an array,
which is looped through for all the crypto operations,
greatly impacting performance, as much more memory is used.
Since crypto operations and mbufs are mapped 1:1, they can share
the same mempool object (similar to having the mbuf in the
private data of the crypto operation).
This improves performance, as only a single mempool needs to be
handled and the mbufs are obtained from the cache of the mempool,
rather than from a static array.
Changes in v4:
- Refactored test code, to minimize duplications
- Removed --qp-nb parameter. Now the number of queue pairs
per device are calculated from the number of logical cores
available and the number of crypto devices
Changes in v3:
- Renamed "number of queue pairs" option from "--qps" to "--qp-nb",
for more consistency
Changes in v2:
- Added support for multiple queue pairs
- Mempool for crypto operations and mbufs is now created
using rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
rte_mempool_populate_default() and rte_mempool_obj_iter(),
so the mempool handler is set, as per Akhil's request.
Pablo de Lara (8):
app/crypto-perf: refactor common test code
app/crypto-perf: set AAD after the crypto operation
app/crypto-perf: parse AEAD data from vectors
app/crypto-perf: parse segment size
app/crypto-perf: overwrite mbuf when verifying
app/crypto-perf: do not populate the mbufs at init
app/crypto-perf: support multiple queue pairs
app/crypto-perf: use single mempool
app/test-crypto-perf/Makefile | 5 +
app/test-crypto-perf/cperf_ops.c | 136 ++++++++---
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_options.h | 5 +-
app/test-crypto-perf/cperf_options_parsing.c | 47 ++--
app/test-crypto-perf/cperf_test_common.c | 225 ++++++++++++++++++
app/test-crypto-perf/cperf_test_common.h | 52 +++++
app/test-crypto-perf/cperf_test_latency.c | 239 +++----------------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 239 ++++---------------
app/test-crypto-perf/cperf_test_throughput.c | 237 +++----------------
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 +++++
app/test-crypto-perf/cperf_test_verify.c | 278 ++++++-----------------
app/test-crypto-perf/main.c | 100 +++++---
doc/guides/tools/cryptoperf.rst | 6 +-
14 files changed, 715 insertions(+), 911 deletions(-)
create mode 100644 app/test-crypto-perf/cperf_test_common.c
create mode 100644 app/test-crypto-perf/cperf_test_common.h
--
2.9.4
* [dpdk-dev] [PATCH v4 1/8] app/crypto-perf: refactor common test code
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 2/8] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
` (8 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Currently, there is some duplication across all the test types
in the crypto performance application.
In order to improve maintainability of this code
and ease future work on it, the common functions have been moved
into a separate file that is included in all the tests.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/Makefile | 1 +
app/test-crypto-perf/cperf_test_common.c | 234 +++++++++++++++++++++++
app/test-crypto-perf/cperf_test_common.h | 61 ++++++
app/test-crypto-perf/cperf_test_latency.c | 199 ++-----------------
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 188 ++----------------
app/test-crypto-perf/cperf_test_throughput.c | 194 ++-----------------
app/test-crypto-perf/cperf_test_verify.c | 191 ++----------------
7 files changed, 351 insertions(+), 717 deletions(-)
create mode 100644 app/test-crypto-perf/cperf_test_common.c
create mode 100644 app/test-crypto-perf/cperf_test_common.h
diff --git a/app/test-crypto-perf/Makefile b/app/test-crypto-perf/Makefile
index 821e8e5..25ae395 100644
--- a/app/test-crypto-perf/Makefile
+++ b/app/test-crypto-perf/Makefile
@@ -45,5 +45,6 @@ SRCS-y += cperf_test_latency.c
SRCS-y += cperf_test_pmd_cyclecount.c
SRCS-y += cperf_test_verify.c
SRCS-y += cperf_test_vector_parsing.c
+SRCS-y += cperf_test_common.c
include $(RTE_SDK)/mk/rte.app.mk
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
new file mode 100644
index 0000000..a87d27e
--- /dev/null
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -0,0 +1,234 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_malloc.h>
+
+#include "cperf_test_common.h"
+
+static struct rte_mbuf *
+cperf_mbuf_create(struct rte_mempool *mempool,
+ uint32_t segments_nb,
+ const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector)
+{
+ struct rte_mbuf *mbuf;
+ uint32_t segment_sz = options->max_buffer_size / segments_nb;
+ uint32_t last_sz = options->max_buffer_size % segments_nb;
+ uint8_t *mbuf_data;
+ uint8_t *test_data =
+ (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ test_vector->plaintext.data :
+ test_vector->ciphertext.data;
+
+ mbuf = rte_pktmbuf_alloc(mempool);
+ if (mbuf == NULL)
+ goto error;
+
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
+ if (mbuf_data == NULL)
+ goto error;
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ segments_nb--;
+
+ while (segments_nb) {
+ struct rte_mbuf *m;
+
+ m = rte_pktmbuf_alloc(mempool);
+ if (m == NULL)
+ goto error;
+
+ rte_pktmbuf_chain(mbuf, m);
+
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
+ if (mbuf_data == NULL)
+ goto error;
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ segments_nb--;
+ }
+
+ if (last_sz) {
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
+ if (mbuf_data == NULL)
+ goto error;
+
+ memcpy(mbuf_data, test_data, last_sz);
+ }
+
+ if (options->op_type != CPERF_CIPHER_ONLY) {
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
+ options->digest_sz);
+ if (mbuf_data == NULL)
+ goto error;
+ }
+
+ if (options->op_type == CPERF_AEAD) {
+ uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
+
+ if (aead == NULL)
+ goto error;
+
+ memcpy(aead, test_vector->aad.data, test_vector->aad.length);
+ }
+
+ return mbuf;
+error:
+ if (mbuf != NULL)
+ rte_pktmbuf_free(mbuf);
+
+ return NULL;
+}
+
+int
+cperf_alloc_common_memory(const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector,
+ uint8_t dev_id, size_t extra_op_priv_size,
+ struct rte_mempool **pkt_mbuf_pool_in,
+ struct rte_mempool **pkt_mbuf_pool_out,
+ struct rte_mbuf ***mbufs_in,
+ struct rte_mbuf ***mbufs_out,
+ struct rte_mempool **crypto_op_pool)
+{
+ unsigned int mbuf_idx = 0;
+ char pool_name[32] = "";
+
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
+ dev_id);
+
+ *pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
+ options->pool_sz * options->segments_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM +
+ RTE_CACHE_LINE_ROUNDUP(
+ (options->max_buffer_size / options->segments_nb) +
+ (options->max_buffer_size % options->segments_nb) +
+ options->digest_sz),
+ rte_socket_id());
+
+ if (*pkt_mbuf_pool_in == NULL)
+ return -1;
+
+ /* Generate mbufs_in with plaintext populated for test */
+ *mbufs_in = (struct rte_mbuf **)rte_malloc(NULL,
+ (sizeof(struct rte_mbuf *) * options->pool_sz), 0);
+
+ for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
+ (*mbufs_in)[mbuf_idx] = cperf_mbuf_create(
+ *pkt_mbuf_pool_in, options->segments_nb,
+ options, test_vector);
+ if ((*mbufs_in)[mbuf_idx] == NULL)
+ return -1;
+ }
+
+ *mbufs_out = (struct rte_mbuf **)rte_zmalloc(NULL,
+ (sizeof(struct rte_mbuf *) *
+ options->pool_sz), 0);
+
+ if (options->out_of_place == 1) {
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
+ dev_id);
+
+ *pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
+ pool_name, options->pool_sz, 0, 0,
+ RTE_PKTMBUF_HEADROOM +
+ RTE_CACHE_LINE_ROUNDUP(
+ options->max_buffer_size +
+ options->digest_sz),
+ rte_socket_id());
+
+ if (*pkt_mbuf_pool_out == NULL)
+ return -1;
+
+ for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
+ (*mbufs_out)[mbuf_idx] = cperf_mbuf_create(
+ *pkt_mbuf_pool_out, 1,
+ options, test_vector);
+ if ((*mbufs_out)[mbuf_idx] == NULL)
+ return -1;
+ }
+ }
+
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
+ dev_id);
+
+ uint16_t priv_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length + test_vector->aead_iv.length +
+ extra_op_priv_size;
+
+ *crypto_op_pool = rte_crypto_op_pool_create(pool_name,
+ RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
+ 512, priv_size, rte_socket_id());
+ if (*crypto_op_pool == NULL)
+ return -1;
+
+ return 0;
+}
+
+void
+cperf_free_common_memory(const struct cperf_options *options,
+ struct rte_mempool *pkt_mbuf_pool_in,
+ struct rte_mempool *pkt_mbuf_pool_out,
+ struct rte_mbuf **mbufs_in,
+ struct rte_mbuf **mbufs_out,
+ struct rte_mempool *crypto_op_pool)
+{
+ uint32_t i = 0;
+
+ if (mbufs_in) {
+ while (mbufs_in[i] != NULL &&
+ i < options->pool_sz)
+ rte_pktmbuf_free(mbufs_in[i++]);
+
+ rte_free(mbufs_in);
+ }
+
+ if (mbufs_out) {
+ i = 0;
+ while (mbufs_out[i] != NULL
+ && i < options->pool_sz)
+ rte_pktmbuf_free(mbufs_out[i++]);
+
+ rte_free(mbufs_out);
+ }
+
+ if (pkt_mbuf_pool_in)
+ rte_mempool_free(pkt_mbuf_pool_in);
+
+ if (pkt_mbuf_pool_out)
+ rte_mempool_free(pkt_mbuf_pool_out);
+
+ if (crypto_op_pool)
+ rte_mempool_free(crypto_op_pool);
+}
diff --git a/app/test-crypto-perf/cperf_test_common.h b/app/test-crypto-perf/cperf_test_common.h
new file mode 100644
index 0000000..766d643
--- /dev/null
+++ b/app/test-crypto-perf/cperf_test_common.h
@@ -0,0 +1,61 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CPERF_TEST_COMMON_H_
+#define _CPERF_TEST_COMMON_H_
+
+#include <stdint.h>
+
+#include <rte_mempool.h>
+
+#include "cperf_options.h"
+#include "cperf_test_vectors.h"
+
+int
+cperf_alloc_common_memory(const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector,
+ uint8_t dev_id, size_t extra_op_priv_size,
+ struct rte_mempool **pkt_mbuf_pool_in,
+ struct rte_mempool **pkt_mbuf_pool_out,
+ struct rte_mbuf ***mbufs_in,
+ struct rte_mbuf ***mbufs_out,
+ struct rte_mempool **crypto_op_pool);
+
+void
+cperf_free_common_memory(const struct cperf_options *options,
+ struct rte_mempool *pkt_mbuf_pool_in,
+ struct rte_mempool *pkt_mbuf_pool_out,
+ struct rte_mbuf **mbufs_in,
+ struct rte_mbuf **mbufs_out,
+ struct rte_mempool *crypto_op_pool);
+
+#endif /* _CPERF_TEST_COMMON_H_ */
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 58b21ab..eea2900 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -37,7 +37,7 @@
#include "cperf_test_latency.h"
#include "cperf_ops.h"
-
+#include "cperf_test_common.h"
struct cperf_op_result {
uint64_t tsc_start;
@@ -74,124 +74,25 @@ struct priv_op_data {
#define min(a, b) (a < b ? (uint64_t)a : (uint64_t)b)
static void
-cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
+cperf_latency_test_free(struct cperf_latency_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ cperf_free_common_memory(ctx->options,
+ ctx->pkt_mbuf_pool_in,
+ ctx->pkt_mbuf_pool_out,
+ ctx->mbufs_in, ctx->mbufs_out,
+ ctx->crypto_op_pool);
rte_free(ctx->res);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
-{
- struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
- uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
-
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
-
- while (segments_nb) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
- }
-
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, last_sz);
- }
-
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
-}
-
void *
cperf_latency_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -200,8 +101,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_latency_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
- char pool_name[32] = "";
+ size_t extra_op_priv_size = sizeof(struct priv_op_data);
ctx = rte_malloc(NULL, sizeof(struct cperf_latency_ctx), 0);
if (ctx == NULL)
@@ -224,80 +124,11 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
- options, test_vector);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name),
- "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
-
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
- }
-
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
-
- uint16_t priv_size = sizeof(struct priv_op_data) +
- test_vector->cipher_iv.length +
- test_vector->auth_iv.length +
- test_vector->aead_iv.length;
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
-
- if (ctx->crypto_op_pool == NULL)
+ if (cperf_alloc_common_memory(options, test_vector, dev_id,
+ extra_op_priv_size,
+ &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
+ &ctx->mbufs_in, &ctx->mbufs_out,
+ &ctx->crypto_op_pool) < 0)
goto err;
ctx->res = rte_malloc(NULL, sizeof(struct cperf_op_result) *
@@ -308,7 +139,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
return ctx;
err:
- cperf_latency_test_free(ctx, mbuf_idx);
+ cperf_latency_test_free(ctx);
return NULL;
}
@@ -588,5 +419,5 @@ cperf_latency_test_destructor(void *arg)
rte_cryptodev_stop(ctx->dev_id);
- cperf_latency_test_free(ctx, ctx->options->pool_sz);
+ cperf_latency_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 0c949f0..2cc459e 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -39,6 +39,7 @@
#include "cperf_ops.h"
#include "cperf_test_pmd_cyclecount.h"
+#include "cperf_test_common.h"
#define PRETTY_HDR_FMT "%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s\n\n"
#define PRETTY_LINE_FMT "%12u%12u%12u%12u%12u%12u%12u%12.0f%12.0f%12.0f\n"
@@ -86,129 +87,29 @@ static const uint16_t iv_offset =
sizeof(struct rte_crypto_op) + sizeof(struct rte_crypto_sym_op);
static void
-cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx,
- uint32_t mbuf_nb)
+cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
+ cperf_free_common_memory(ctx->options,
+ ctx->pkt_mbuf_pool_in,
+ ctx->pkt_mbuf_pool_out,
+ ctx->mbufs_in, ctx->mbufs_out,
+ ctx->crypto_op_pool);
if (ctx->ops)
rte_free(ctx->ops);
if (ctx->ops_processed)
rte_free(ctx->ops_processed);
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
-
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool, uint32_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
-{
- struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
- uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
-
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
-
- while (segments_nb) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
- }
-
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, last_sz);
- }
-
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(
- mbuf, options->digest_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(
- mbuf, RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
-}
-
void *
cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -217,15 +118,6 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_pmd_cyclecount_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
- char pool_name[32] = "";
- uint16_t dataroom_sz = RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size /
- options->segments_nb) +
- (options->max_buffer_size %
- options->segments_nb) +
- options->digest_sz);
/* preallocate buffers for crypto ops as they can get quite big */
size_t alloc_sz = sizeof(struct rte_crypto_op *) *
@@ -251,64 +143,10 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d", dev_id);
-
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- dataroom_sz, rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
- options, test_vector);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz, 0, 0, dataroom_sz,
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
-
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1, options,
- test_vector);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
- }
-
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d", dev_id);
-
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length +
- test_vector->aead_iv.length;
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 512,
- priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, 0,
+ &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
+ &ctx->mbufs_in, &ctx->mbufs_out,
+ &ctx->crypto_op_pool) < 0)
goto err;
ctx->ops = rte_malloc("ops", alloc_sz, 0);
@@ -322,7 +160,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
return ctx;
err:
- cperf_pmd_cyclecount_test_free(ctx, mbuf_idx);
+ cperf_pmd_cyclecount_test_free(ctx);
return NULL;
}
@@ -671,5 +509,5 @@ cperf_pmd_cyclecount_test_destructor(void *arg)
if (ctx == NULL)
return;
- cperf_pmd_cyclecount_test_free(ctx, ctx->options->pool_sz);
+ cperf_pmd_cyclecount_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 3bb1cb0..d4aa84c 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -37,6 +37,7 @@
#include "cperf_test_throughput.h"
#include "cperf_ops.h"
+#include "cperf_test_common.h"
struct cperf_throughput_ctx {
uint8_t dev_id;
@@ -59,123 +60,24 @@ struct cperf_throughput_ctx {
};
static void
-cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
+cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ cperf_free_common_memory(ctx->options,
+ ctx->pkt_mbuf_pool_in,
+ ctx->pkt_mbuf_pool_out,
+ ctx->mbufs_in, ctx->mbufs_out,
+ ctx->crypto_op_pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
-{
- struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
- uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
-
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
-
- while (segments_nb) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
- }
-
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, last_sz);
- }
-
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
-}
-
void *
cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -184,8 +86,6 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_throughput_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
- char pool_name[32] = "";
ctx = rte_malloc(NULL, sizeof(struct cperf_throughput_ctx), 0);
if (ctx == NULL)
@@ -198,7 +98,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
ctx->options = options;
ctx->test_vector = test_vector;
- /* IV goes at the end of the cryptop operation */
+ /* IV goes at the end of the crypto operation */
uint16_t iv_offset = sizeof(struct rte_crypto_op) +
sizeof(struct rte_crypto_sym_op);
@@ -207,81 +107,15 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
- options, test_vector);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
-
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
- }
-
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
-
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
-
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, 0,
+ &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
+ &ctx->mbufs_in, &ctx->mbufs_out,
+ &ctx->crypto_op_pool) < 0)
goto err;
return ctx;
err:
- cperf_throughput_test_free(ctx, mbuf_idx);
+ cperf_throughput_test_free(ctx);
return NULL;
}
@@ -536,5 +370,5 @@ cperf_throughput_test_destructor(void *arg)
rte_cryptodev_stop(ctx->dev_id);
- cperf_throughput_test_free(ctx, ctx->options->pool_sz);
+ cperf_throughput_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index a314646..d05124a 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -37,6 +37,7 @@
#include "cperf_test_verify.h"
#include "cperf_ops.h"
+#include "cperf_test_common.h"
struct cperf_verify_ctx {
uint8_t dev_id;
@@ -63,123 +64,24 @@ struct cperf_op_result {
};
static void
-cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
+cperf_verify_test_free(struct cperf_verify_ctx *ctx)
{
- uint32_t i;
-
if (ctx) {
if (ctx->sess) {
rte_cryptodev_sym_session_clear(ctx->dev_id, ctx->sess);
rte_cryptodev_sym_session_free(ctx->sess);
}
- if (ctx->mbufs_in) {
- for (i = 0; i < mbuf_nb; i++)
- rte_pktmbuf_free(ctx->mbufs_in[i]);
-
- rte_free(ctx->mbufs_in);
- }
-
- if (ctx->mbufs_out) {
- for (i = 0; i < mbuf_nb; i++) {
- if (ctx->mbufs_out[i] != NULL)
- rte_pktmbuf_free(ctx->mbufs_out[i]);
- }
-
- rte_free(ctx->mbufs_out);
- }
-
- if (ctx->pkt_mbuf_pool_in)
- rte_mempool_free(ctx->pkt_mbuf_pool_in);
-
- if (ctx->pkt_mbuf_pool_out)
- rte_mempool_free(ctx->pkt_mbuf_pool_out);
-
- if (ctx->crypto_op_pool)
- rte_mempool_free(ctx->crypto_op_pool);
+ cperf_free_common_memory(ctx->options,
+ ctx->pkt_mbuf_pool_in,
+ ctx->pkt_mbuf_pool_out,
+ ctx->mbufs_in, ctx->mbufs_out,
+ ctx->crypto_op_pool);
rte_free(ctx);
}
}
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
-{
- struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
- uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
-
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
-
- while (segments_nb) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
- segments_nb--;
- }
-
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
-
- memcpy(mbuf_data, test_data, last_sz);
- }
-
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
-}
-
void *
cperf_verify_test_constructor(struct rte_mempool *sess_mp,
uint8_t dev_id, uint16_t qp_id,
@@ -188,8 +90,6 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
const struct cperf_op_fns *op_fns)
{
struct cperf_verify_ctx *ctx = NULL;
- unsigned int mbuf_idx = 0;
- char pool_name[32] = "";
ctx = rte_malloc(NULL, sizeof(struct cperf_verify_ctx), 0);
if (ctx == NULL)
@@ -211,80 +111,15 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_in == NULL)
- goto err;
-
- /* Generate mbufs_in with plaintext populated for test */
- ctx->mbufs_in = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- ctx->mbufs_in[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_in, options->segments_nb,
- options, test_vector);
- if (ctx->mbufs_in[mbuf_idx] == NULL)
- goto err;
- }
-
- if (options->out_of_place == 1) {
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
-
- ctx->pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
- rte_socket_id());
-
- if (ctx->pkt_mbuf_pool_out == NULL)
- goto err;
- }
-
- ctx->mbufs_out = rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) *
- ctx->options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- if (options->out_of_place == 1) {
- ctx->mbufs_out[mbuf_idx] = cperf_mbuf_create(
- ctx->pkt_mbuf_pool_out, 1,
- options, test_vector);
- if (ctx->mbufs_out[mbuf_idx] == NULL)
- goto err;
- } else {
- ctx->mbufs_out[mbuf_idx] = NULL;
- }
- }
-
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
-
- uint16_t priv_size = test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length;
- ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (ctx->crypto_op_pool == NULL)
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, 0,
+ &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
+ &ctx->mbufs_in, &ctx->mbufs_out,
+ &ctx->crypto_op_pool) < 0)
goto err;
return ctx;
err:
- cperf_verify_test_free(ctx, mbuf_idx);
+ cperf_verify_test_free(ctx);
return NULL;
}
@@ -596,5 +431,5 @@ cperf_verify_test_destructor(void *arg)
rte_cryptodev_stop(ctx->dev_id);
- cperf_verify_test_free(ctx, ctx->options->pool_sz);
+ cperf_verify_test_free(ctx);
}
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v4 2/8] app/crypto-perf: set AAD after the crypto operation
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 1/8] app/crypto-perf: refactor common test code Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 3/8] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
` (7 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Instead of prepending the AAD (Additional Authenticated Data)
to the mbuf, it is easier to set it after the crypto operation,
as it is a read-only value, like the IV; this way it is not
restricted by the size of the mbuf headroom.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
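As a minimal sketch of the layout described above (names and sizes here
are illustrative, not the real DPDK structs: `ALIGN_CEIL` and
`aad_offset` are hypothetical helpers mirroring `RTE_ALIGN_CEIL` and the
offset arithmetic in the patch), the AAD is placed in the op private
data right after the IV region, with its offset rounded up to 16 bytes:

```c
#include <stdint.h>

/* Round v up to the next multiple of a (a must be a power of two);
 * mirrors what RTE_ALIGN_CEIL does in the patch. */
#define ALIGN_CEIL(v, a) (((v) + (a) - 1) & ~((uint32_t)(a) - 1))

/* Offset of the AAD within the crypto op: it sits after the AEAD IV,
 * which itself starts at iv_offset, padded to a 16-byte boundary. */
static uint16_t
aad_offset(uint16_t iv_offset, uint16_t aead_iv_len)
{
	return iv_offset + ALIGN_CEIL(aead_iv_len, 16);
}
```

With a 12-byte GCM IV starting at offset 32, the AAD lands at offset 48,
independent of the mbuf headroom.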
app/test-crypto-perf/cperf_ops.c | 16 ++++++++++++----
app/test-crypto-perf/cperf_test_common.c | 15 +++------------
app/test-crypto-perf/cperf_test_verify.c | 4 ++--
3 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 88fb972..5be20d9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -307,6 +307,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint16_t iv_offset)
{
uint16_t i;
+ uint16_t aad_offset = iv_offset +
+ RTE_ALIGN_CEIL(test_vector->aead_iv.length, 16);
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
@@ -318,11 +320,12 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
- sym_op->aead.data.offset =
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
+ sym_op->aead.data.offset = 0;
- sym_op->aead.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
- sym_op->aead.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
+ sym_op->aead.aad.data = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, aad_offset);
+ sym_op->aead.aad.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+ aad_offset);
if (options->aead_op == RTE_CRYPTO_AEAD_OP_DECRYPT) {
sym_op->aead.digest.data = test_vector->digest.data;
@@ -360,6 +363,11 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
memcpy(iv_ptr, test_vector->aead_iv.data,
test_vector->aead_iv.length);
+
+ /* Copy AAD after the IV */
+ memcpy(ops[i]->sym->aead.aad.data,
+ test_vector->aad.data,
+ test_vector->aad.length);
}
}
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index a87d27e..ddf5641 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -94,16 +94,6 @@ cperf_mbuf_create(struct rte_mempool *mempool,
goto error;
}
- if (options->op_type == CPERF_AEAD) {
- uint8_t *aead = (uint8_t *)rte_pktmbuf_prepend(mbuf,
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16));
-
- if (aead == NULL)
- goto error;
-
- memcpy(aead, test_vector->aad.data, test_vector->aad.length);
- }
-
return mbuf;
error:
if (mbuf != NULL)
@@ -183,9 +173,10 @@ cperf_alloc_common_memory(const struct cperf_options *options,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->cipher_iv.length +
+ uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
test_vector->auth_iv.length + test_vector->aead_iv.length +
- extra_op_priv_size;
+ extra_op_priv_size, 16) +
+ RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
*crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index d05124a..82e5e9f 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -197,9 +197,9 @@ cperf_verify_op(struct rte_crypto_op *op,
break;
case CPERF_AEAD:
cipher = 1;
- cipher_offset = vector->aad.length;
+ cipher_offset = 0;
auth = 1;
- auth_offset = vector->aad.length + options->test_buffer_size;
+ auth_offset = options->test_buffer_size;
break;
}
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v4 3/8] app/crypto-perf: parse AEAD data from vectors
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 1/8] app/crypto-perf: refactor common test code Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 2/8] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 4/8] app/crypto-perf: parse segment size Pablo de Lara
` (6 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara, stable
Since DPDK 17.08, there are specific parameters
for AEAD algorithms, such as AES-GCM. When verifying
crypto operations with test vectors, the parser
was not reading the AEAD data (such as the IV or key).
Fixes: 8a5b494a7f99 ("app/test-crypto-perf: add AEAD parameters")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
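For illustration, a test-vector file entry the parser now recognizes
might look like the fragment below (the `aead_key`/`aead_iv` tokens come
from the patch; the byte values are placeholders, not real vectors):

```
aead_key =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
aead_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b
```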
app/test-crypto-perf/cperf_test_vector_parsing.c | 55 ++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 148a604..3952632 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -116,6 +116,20 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_key.data) {
+ printf("\naead_key =\n");
+ for (i = 0; i < test_vector->aead_key.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_key.length - 1))
+ printf("0x%02x", test_vector->aead_key.data[i]);
+ else
+ printf("0x%02x, ",
+ test_vector->aead_key.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->cipher_iv.data) {
printf("\ncipher_iv =\n");
for (i = 0; i < test_vector->cipher_iv.length; ++i) {
@@ -142,6 +156,19 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
+ if (test_vector->aead_iv.data) {
+ printf("\naead_iv =\n");
+ for (i = 0; i < test_vector->aead_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->aead_iv.length - 1))
+ printf("0x%02x", test_vector->aead_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->aead_iv.data[i]);
+ }
+ printf("\n");
+ }
+
if (test_vector->ciphertext.data) {
printf("\nciphertext =\n");
for (i = 0; i < test_vector->ciphertext.length; ++i) {
@@ -345,6 +372,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
+ } else if (strstr(key_token, "aead_key")) {
+ rte_free(vector->aead_key.data);
+ vector->aead_key.data = data;
+ if (tc_found)
+ vector->aead_key.length = data_length;
+ else {
+ if (opts->aead_key_sz > data_length) {
+ printf("Global aead_key shorter than "
+ "aead_key_sz\n");
+ return -1;
+ }
+ vector->aead_key.length = opts->aead_key_sz;
+ }
+
} else if (strstr(key_token, "cipher_iv")) {
rte_free(vector->cipher_iv.data);
vector->cipher_iv.data = data;
@@ -373,6 +414,20 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_iv.length = opts->auth_iv_sz;
}
+ } else if (strstr(key_token, "aead_iv")) {
+ rte_free(vector->aead_iv.data);
+ vector->aead_iv.data = data;
+ if (tc_found)
+ vector->aead_iv.length = data_length;
+ else {
+ if (opts->aead_iv_sz > data_length) {
+ printf("Global aead iv shorter than "
+ "aead_iv_sz\n");
+ return -1;
+ }
+ vector->aead_iv.length = opts->aead_iv_sz;
+ }
+
} else if (strstr(key_token, "ciphertext")) {
rte_free(vector->ciphertext.data);
vector->ciphertext.data = data;
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v4 4/8] app/crypto-perf: parse segment size
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (2 preceding siblings ...)
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 3/8] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 5/8] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
` (5 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Instead of parsing the number of segments from the command line,
parse the segment size, as it is the more usual case to keep
the segment size fixed, so that different packet sizes
require a different number of segments.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 24 ++++++++
app/test-crypto-perf/cperf_options.h | 4 +-
app/test-crypto-perf/cperf_options_parsing.c | 38 ++++++++----
app/test-crypto-perf/cperf_test_common.c | 74 ++++++++++++++----------
app/test-crypto-perf/cperf_test_latency.c | 2 +-
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 2 +-
app/test-crypto-perf/cperf_test_throughput.c | 2 +-
app/test-crypto-perf/cperf_test_verify.c | 2 +-
doc/guides/tools/cryptoperf.rst | 6 +-
9 files changed, 106 insertions(+), 48 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 5be20d9..ad32065 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -175,6 +175,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -256,6 +264,14 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
@@ -346,6 +362,14 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
offset -= tbuf->data_len;
tbuf = tbuf->next;
}
+ /*
+ * If there is not enough room in segment,
+ * place the digest in the next segment
+ */
+ if ((tbuf->data_len - offset) < options->digest_sz) {
+ tbuf = tbuf->next;
+ offset = 0;
+ }
buf = tbuf;
}
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 2f42cb6..6d339f4 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -11,7 +11,7 @@
#define CPERF_TOTAL_OPS ("total-ops")
#define CPERF_BURST_SIZE ("burst-sz")
#define CPERF_BUFFER_SIZE ("buffer-sz")
-#define CPERF_SEGMENTS_NB ("segments-nb")
+#define CPERF_SEGMENT_SIZE ("segment-sz")
#define CPERF_DESC_NB ("desc-nb")
#define CPERF_DEVTYPE ("devtype")
@@ -71,7 +71,7 @@ struct cperf_options {
uint32_t pool_sz;
uint32_t total_ops;
- uint32_t segments_nb;
+ uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t nb_descriptors;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index f3508a4..d372691 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -328,17 +328,17 @@ parse_buffer_sz(struct cperf_options *opts, const char *arg)
}
static int
-parse_segments_nb(struct cperf_options *opts, const char *arg)
+parse_segment_sz(struct cperf_options *opts, const char *arg)
{
- int ret = parse_uint32_t(&opts->segments_nb, arg);
+ int ret = parse_uint32_t(&opts->segment_sz, arg);
if (ret) {
- RTE_LOG(ERR, USER1, "failed to parse segments number\n");
+ RTE_LOG(ERR, USER1, "failed to parse segment size\n");
return -1;
}
- if ((opts->segments_nb == 0) || (opts->segments_nb > 255)) {
- RTE_LOG(ERR, USER1, "invalid segments number specified\n");
+ if (opts->segment_sz == 0) {
+ RTE_LOG(ERR, USER1, "Segment size has to be bigger than 0\n");
return -1;
}
@@ -678,7 +678,7 @@ static struct option lgopts[] = {
{ CPERF_TOTAL_OPS, required_argument, 0, 0 },
{ CPERF_BURST_SIZE, required_argument, 0, 0 },
{ CPERF_BUFFER_SIZE, required_argument, 0, 0 },
- { CPERF_SEGMENTS_NB, required_argument, 0, 0 },
+ { CPERF_SEGMENT_SIZE, required_argument, 0, 0 },
{ CPERF_DESC_NB, required_argument, 0, 0 },
{ CPERF_DEVTYPE, required_argument, 0, 0 },
@@ -739,7 +739,11 @@ cperf_options_default(struct cperf_options *opts)
opts->min_burst_size = 32;
opts->inc_burst_size = 0;
- opts->segments_nb = 1;
+ /*
+ * Will be parsed from command line or set to
+ * maximum buffer size + digest, later
+ */
+ opts->segment_sz = 0;
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
@@ -783,7 +787,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_TOTAL_OPS, parse_total_ops },
{ CPERF_BURST_SIZE, parse_burst_sz },
{ CPERF_BUFFER_SIZE, parse_buffer_sz },
- { CPERF_SEGMENTS_NB, parse_segments_nb },
+ { CPERF_SEGMENT_SIZE, parse_segment_sz },
{ CPERF_DESC_NB, parse_desc_nb },
{ CPERF_DEVTYPE, parse_device_type },
{ CPERF_OPTYPE, parse_op_type },
@@ -893,9 +897,21 @@ check_cipher_buffer_length(struct cperf_options *options)
int
cperf_options_check(struct cperf_options *options)
{
- if (options->segments_nb > options->min_buffer_size) {
+ if (options->op_type == CPERF_CIPHER_ONLY)
+ options->digest_sz = 0;
+
+ /*
+ * If segment size is not set, assume only one segment,
+ * big enough to contain the largest buffer and the digest
+ */
+ if (options->segment_sz == 0)
+ options->segment_sz = options->max_buffer_size +
+ options->digest_sz;
+
+ if (options->segment_sz < options->digest_sz) {
RTE_LOG(ERR, USER1,
- "Segments number greater than buffer size.\n");
+ "Segment size should be at least "
+ "the size of the digest\n");
return -EINVAL;
}
@@ -1019,7 +1035,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("%u ", opts->burst_size_list[size_idx]);
printf("\n");
}
- printf("\n# segments per buffer: %u\n", opts->segments_nb);
+ printf("\n# segment size: %u\n", opts->segment_sz);
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index ddf5641..0f62a2c 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -36,18 +36,18 @@
static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
+ uint32_t segment_sz,
uint32_t segments_nb,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_mbuf *mbuf;
- uint32_t segment_sz = options->max_buffer_size / segments_nb;
- uint32_t last_sz = options->max_buffer_size % segments_nb;
uint8_t *mbuf_data;
uint8_t *test_data =
(options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
test_vector->plaintext.data :
test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
if (mbuf == NULL)
@@ -57,11 +57,18 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (options->max_buffer_size <= segment_sz) {
+ memcpy(mbuf_data, test_data, options->max_buffer_size);
+ test_data += options->max_buffer_size;
+ remaining_bytes = 0;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ test_data += segment_sz;
+ remaining_bytes -= segment_sz;
+ }
segments_nb--;
- while (segments_nb) {
+ while (remaining_bytes) {
struct rte_mbuf *m;
m = rte_pktmbuf_alloc(mempool);
@@ -74,22 +81,31 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ remaining_bytes = 0;
+ test_data += remaining_bytes;
+ } else {
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ }
segments_nb--;
}
- if (last_sz) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, last_sz);
- if (mbuf_data == NULL)
- goto error;
+ /*
+ * If there was not enough room for the digest at the end
+ * of the last segment, allocate a new one
+ */
+ if (segments_nb != 0) {
+ struct rte_mbuf *m;
+ m = rte_pktmbuf_alloc(mempool);
- memcpy(mbuf_data, test_data, last_sz);
- }
+ if (m == NULL)
+ goto error;
- if (options->op_type != CPERF_CIPHER_ONLY) {
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf,
- options->digest_sz);
+ rte_pktmbuf_chain(mbuf, m);
+ mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
if (mbuf_data == NULL)
goto error;
}
@@ -118,13 +134,14 @@ cperf_alloc_common_memory(const struct cperf_options *options,
snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
dev_id);
+ uint32_t max_size = options->max_buffer_size + options->digest_sz;
+ uint16_t segments_nb = (max_size % options->segment_sz) ?
+ (max_size / options->segment_sz) + 1 :
+ max_size / options->segment_sz;
+
*pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * options->segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- (options->max_buffer_size / options->segments_nb) +
- (options->max_buffer_size % options->segments_nb) +
- options->digest_sz),
+ options->pool_sz * segments_nb, 0, 0,
+ RTE_PKTMBUF_HEADROOM + options->segment_sz,
rte_socket_id());
if (*pkt_mbuf_pool_in == NULL)
@@ -136,7 +153,9 @@ cperf_alloc_common_memory(const struct cperf_options *options,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
(*mbufs_in)[mbuf_idx] = cperf_mbuf_create(
- *pkt_mbuf_pool_in, options->segments_nb,
+ *pkt_mbuf_pool_in,
+ options->segment_sz,
+ segments_nb,
options, test_vector);
if ((*mbufs_in)[mbuf_idx] == NULL)
return -1;
@@ -152,10 +171,7 @@ cperf_alloc_common_memory(const struct cperf_options *options,
*pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM +
- RTE_CACHE_LINE_ROUNDUP(
- options->max_buffer_size +
- options->digest_sz),
+ RTE_PKTMBUF_HEADROOM + max_size,
rte_socket_id());
if (*pkt_mbuf_pool_out == NULL)
@@ -163,8 +179,8 @@ cperf_alloc_common_memory(const struct cperf_options *options,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
(*mbufs_out)[mbuf_idx] = cperf_mbuf_create(
- *pkt_mbuf_pool_out, 1,
- options, test_vector);
+ *pkt_mbuf_pool_out, max_size,
+ 1, options, test_vector);
if ((*mbufs_out)[mbuf_idx] == NULL)
return -1;
}
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index eea2900..2dc9c0c 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -178,7 +178,7 @@ cperf_latency_test_runner(void *arg)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 2cc459e..81f403c 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -421,7 +421,7 @@ cperf_pmd_cyclecount_test_runner(void *test_ctx)
struct rte_cryptodev_info dev_info;
/* Check if source mbufs require coalescing */
- if (opts->segments_nb > 1) {
+ if (opts->segment_sz < opts->max_buffer_size) {
rte_cryptodev_info_get(state.ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) ==
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index d4aa84c..8d54642 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -140,7 +140,7 @@ cperf_throughput_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 82e5e9f..7a85aa7 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -248,7 +248,7 @@ cperf_verify_test_runner(void *test_ctx)
int linearize = 0;
/* Check if source mbufs require coalescing */
- if (ctx->options->segments_nb > 1) {
+ if (ctx->options->segment_sz < ctx->options->max_buffer_size) {
rte_cryptodev_info_get(ctx->dev_id, &dev_info);
if ((dev_info.feature_flags &
RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER) == 0)
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 2f526c6..d587c20 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -172,9 +172,11 @@ The following are the application command-line options:
* List of values, up to 32 values, separated in commas (i.e. ``--buffer-sz 32,64,128``)
-* ``--segments-nb <n>``
+* ``--segment-sz <n>``
- Set the number of segments per packet.
Set the size of the segment to use, for Scatter-Gather List testing.
By default, it is set to the maximum buffer size plus the digest size,
so a single segment is created.
* ``--devtype <name>``
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v4 5/8] app/crypto-perf: overwrite mbuf when verifying
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (3 preceding siblings ...)
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 4/8] app/crypto-perf: parse segment size Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 6/8] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
` (4 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
When running the verify test, mbufs in the pool were
populated with the test vector loaded from a file.
To avoid limiting the number of operations to the pool size,
mbufs are now rewritten with the test vector before
being linked to the crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_options_parsing.c | 7 ------
app/test-crypto-perf/cperf_test_verify.c | 35 ++++++++++++++++++++++++++++
2 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index d372691..89f86a2 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -944,13 +944,6 @@ cperf_options_check(struct cperf_options *options)
}
if (options->test == CPERF_TEST_TYPE_VERIFY &&
- options->total_ops > options->pool_sz) {
- RTE_LOG(ERR, USER1, "Total number of ops must be less than or"
- " equal to the pool size.\n");
- return -EINVAL;
- }
-
- if (options->test == CPERF_TEST_TYPE_VERIFY &&
(options->inc_buffer_size != 0 ||
options->buffer_size_count > 1)) {
RTE_LOG(ERR, USER1, "Only one buffer size is allowed when "
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 7a85aa7..9a806fa 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -224,6 +224,34 @@ cperf_verify_op(struct rte_crypto_op *op,
return !!res;
}
+static void
+cperf_mbuf_set(struct rte_mbuf *mbuf,
+ const struct cperf_options *options,
+ const struct cperf_test_vector *test_vector)
+{
+ uint32_t segment_sz = options->segment_sz;
+ uint8_t *mbuf_data;
+ uint8_t *test_data =
+ (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ test_vector->plaintext.data :
+ test_vector->ciphertext.data;
+ uint32_t remaining_bytes = options->max_buffer_size;
+
+ while (remaining_bytes) {
+ mbuf_data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+
+ if (remaining_bytes <= segment_sz) {
+ memcpy(mbuf_data, test_data, remaining_bytes);
+ return;
+ }
+
+ memcpy(mbuf_data, test_data, segment_sz);
+ remaining_bytes -= segment_sz;
+ test_data += segment_sz;
+ mbuf = mbuf->next;
+ }
+}
+
int
cperf_verify_test_runner(void *test_ctx)
{
@@ -294,6 +322,13 @@ cperf_verify_test_runner(void *test_ctx)
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
+
+ /* Populate the mbuf with the test vector, for verification */
+ for (i = 0; i < ops_needed; i++)
+ cperf_mbuf_set(ops[i]->sym->m_src,
+ ctx->options,
+ ctx->test_vector);
+
#ifdef CPERF_LINEARIZATION_ENABLE
if (linearize) {
/* PMD doesn't support scatter-gather and source buffer
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v4 6/8] app/crypto-perf: do not populate the mbufs at init
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (4 preceding siblings ...)
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 5/8] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 7/8] app/crypto-perf: support multiple queue pairs Pablo de Lara
` (3 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
For the throughput and latency tests, it is not required
to populate the mbufs with any test vector.
For the verify test, there is already a function that rewrites
the mbufs each time, before they are used with
crypto operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_common.c | 31 +++++++++----------------------
1 file changed, 9 insertions(+), 22 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index 0f62a2c..25eb970 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -38,15 +38,10 @@ static struct rte_mbuf *
cperf_mbuf_create(struct rte_mempool *mempool,
uint32_t segment_sz,
uint32_t segments_nb,
- const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_options *options)
{
struct rte_mbuf *mbuf;
uint8_t *mbuf_data;
- uint8_t *test_data =
- (options->cipher_op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
- test_vector->plaintext.data :
- test_vector->ciphertext.data;
uint32_t remaining_bytes = options->max_buffer_size;
mbuf = rte_pktmbuf_alloc(mempool);
@@ -57,15 +52,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (options->max_buffer_size <= segment_sz) {
- memcpy(mbuf_data, test_data, options->max_buffer_size);
- test_data += options->max_buffer_size;
+ if (options->max_buffer_size <= segment_sz)
remaining_bytes = 0;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
- test_data += segment_sz;
+ else
remaining_bytes -= segment_sz;
- }
+
segments_nb--;
while (remaining_bytes) {
@@ -81,15 +72,11 @@ cperf_mbuf_create(struct rte_mempool *mempool,
if (mbuf_data == NULL)
goto error;
- if (remaining_bytes <= segment_sz) {
- memcpy(mbuf_data, test_data, remaining_bytes);
+ if (remaining_bytes <= segment_sz)
remaining_bytes = 0;
- test_data += remaining_bytes;
- } else {
- memcpy(mbuf_data, test_data, segment_sz);
+ else
remaining_bytes -= segment_sz;
- test_data += segment_sz;
- }
+
segments_nb--;
}
@@ -156,7 +143,7 @@ cperf_alloc_common_memory(const struct cperf_options *options,
*pkt_mbuf_pool_in,
options->segment_sz,
segments_nb,
- options, test_vector);
+ options);
if ((*mbufs_in)[mbuf_idx] == NULL)
return -1;
}
@@ -180,7 +167,7 @@ cperf_alloc_common_memory(const struct cperf_options *options,
for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
(*mbufs_out)[mbuf_idx] = cperf_mbuf_create(
*pkt_mbuf_pool_out, max_size,
- 1, options, test_vector);
+ 1, options);
if ((*mbufs_out)[mbuf_idx] == NULL)
return -1;
}
--
2.9.4
^ permalink raw reply [flat|nested] 49+ messages in thread
* [dpdk-dev] [PATCH v4 7/8] app/crypto-perf: support multiple queue pairs
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (5 preceding siblings ...)
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 6/8] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 8/8] app/crypto-perf: use single mempool Pablo de Lara
` (2 subsequent siblings)
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
Add support for multiple queue pairs, for when there are
more logical cores available than crypto devices enabled.
For instance, if there are 4 cores available and
2 crypto devices, each device will have two queue pairs.
This allows multiple logical cores to use
a single crypto device, without needing to initialize
a crypto device per core.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/Makefile | 4 +
app/test-crypto-perf/cperf_options.h | 1 +
app/test-crypto-perf/cperf_options_parsing.c | 2 +
app/test-crypto-perf/cperf_test_common.c | 15 ++--
app/test-crypto-perf/cperf_test_common.h | 3 +-
app/test-crypto-perf/cperf_test_latency.c | 4 +-
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 2 +-
app/test-crypto-perf/cperf_test_throughput.c | 4 +-
app/test-crypto-perf/cperf_test_verify.c | 4 +-
app/test-crypto-perf/main.c | 100 +++++++++++++++++------
10 files changed, 95 insertions(+), 44 deletions(-)
diff --git a/app/test-crypto-perf/Makefile b/app/test-crypto-perf/Makefile
index 25ae395..c75d7ed 100644
--- a/app/test-crypto-perf/Makefile
+++ b/app/test-crypto-perf/Makefile
@@ -47,4 +47,8 @@ SRCS-y += cperf_test_verify.c
SRCS-y += cperf_test_vector_parsing.c
SRCS-y += cperf_test_common.c
+ifeq ($(CONFIG_RTE_LIBRTE_PMD_CRYPTO_SCHEDULER),y)
+LDLIBS += -lrte_pmd_crypto_scheduler
+endif
+
include $(RTE_SDK)/mk/rte.app.mk
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index 6d339f4..da4fb47 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -74,6 +74,7 @@ struct cperf_options {
uint32_t segment_sz;
uint32_t test_buffer_size;
uint32_t nb_descriptors;
+ uint16_t nb_qps;
uint32_t sessionless:1;
uint32_t out_of_place:1;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 89f86a2..e5fa4a8 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -747,6 +747,7 @@ cperf_options_default(struct cperf_options *opts)
strncpy(opts->device_type, "crypto_aesni_mb",
sizeof(opts->device_type));
+ opts->nb_qps = 1;
opts->op_type = CPERF_CIPHER_THEN_AUTH;
@@ -1032,6 +1033,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("#\n");
printf("# cryptodev type: %s\n", opts->device_type);
printf("#\n");
+ printf("# number of queue pairs per device: %u\n", opts->nb_qps);
printf("# crypto operation: %s\n", cperf_op_type_strs[opts->op_type]);
printf("# sessionless: %s\n", opts->sessionless ? "yes" : "no");
printf("# out of place: %s\n", opts->out_of_place ? "yes" : "no");
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index 25eb970..65f0be7 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -108,7 +108,8 @@ cperf_mbuf_create(struct rte_mempool *mempool,
int
cperf_alloc_common_memory(const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
- uint8_t dev_id, size_t extra_op_priv_size,
+ uint8_t dev_id, uint16_t qp_id,
+ size_t extra_op_priv_size,
struct rte_mempool **pkt_mbuf_pool_in,
struct rte_mempool **pkt_mbuf_pool_out,
struct rte_mbuf ***mbufs_in,
@@ -118,8 +119,8 @@ cperf_alloc_common_memory(const struct cperf_options *options,
unsigned int mbuf_idx = 0;
char pool_name[32] = "";
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%u_qp_%u",
+ dev_id, qp_id);
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
@@ -153,8 +154,8 @@ cperf_alloc_common_memory(const struct cperf_options *options,
options->pool_sz), 0);
if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%u_qp_%u",
+ dev_id, qp_id);
*pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
pool_name, options->pool_sz, 0, 0,
@@ -173,8 +174,8 @@ cperf_alloc_common_memory(const struct cperf_options *options,
}
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
- dev_id);
+ snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
test_vector->auth_iv.length + test_vector->aead_iv.length +
diff --git a/app/test-crypto-perf/cperf_test_common.h b/app/test-crypto-perf/cperf_test_common.h
index 766d643..ad29431 100644
--- a/app/test-crypto-perf/cperf_test_common.h
+++ b/app/test-crypto-perf/cperf_test_common.h
@@ -43,7 +43,8 @@
int
cperf_alloc_common_memory(const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
- uint8_t dev_id, size_t extra_op_priv_size,
+ uint8_t dev_id, uint16_t qp_id,
+ size_t extra_op_priv_size,
struct rte_mempool **pkt_mbuf_pool_in,
struct rte_mempool **pkt_mbuf_pool_out,
struct rte_mbuf ***mbufs_in,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 2dc9c0c..313665e 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -124,7 +124,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- if (cperf_alloc_common_memory(options, test_vector, dev_id,
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id,
extra_op_priv_size,
&ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
&ctx->mbufs_in, &ctx->mbufs_out,
@@ -417,7 +417,5 @@ cperf_latency_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_latency_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 81f403c..13a270f 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -143,7 +143,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- if (cperf_alloc_common_memory(options, test_vector, dev_id, 0,
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id, 0,
&ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
&ctx->mbufs_in, &ctx->mbufs_out,
&ctx->crypto_op_pool) < 0)
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 8d54642..bc7d889 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -107,7 +107,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- if (cperf_alloc_common_memory(options, test_vector, dev_id, 0,
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id, 0,
&ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
&ctx->mbufs_in, &ctx->mbufs_out,
&ctx->crypto_op_pool) < 0)
@@ -368,7 +368,5 @@ cperf_throughput_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_throughput_test_free(ctx);
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 9a806fa..b122022 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -111,7 +111,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
if (ctx->sess == NULL)
goto err;
- if (cperf_alloc_common_memory(options, test_vector, dev_id, 0,
+ if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id, 0,
&ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
&ctx->mbufs_in, &ctx->mbufs_out,
&ctx->crypto_op_pool) < 0)
@@ -464,7 +464,5 @@ cperf_verify_test_destructor(void *arg)
if (ctx == NULL)
return;
- rte_cryptodev_stop(ctx->dev_id);
-
cperf_verify_test_free(ctx);
}
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index ffa7180..aaa5830 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -35,6 +35,9 @@
#include <rte_eal.h>
#include <rte_cryptodev.h>
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+#include <rte_cryptodev_scheduler.h>
+#endif
#include "cperf.h"
#include "cperf_options.h"
@@ -90,7 +93,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
struct rte_mempool *session_pool_socket[])
{
uint8_t enabled_cdev_count = 0, nb_lcores, cdev_id;
- unsigned int i;
+ unsigned int i, j;
int ret;
enabled_cdev_count = rte_cryptodev_devices_get(opts->device_type,
@@ -119,21 +122,53 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
max_sess_size = sess_size;
}
+ /*
+ * Calculate number of needed queue pairs, based on the amount
+ * of available number of logical cores and crypto devices.
+ * For instance, if there are 4 cores and 2 crypto devices,
+ * 2 queue pairs will be set up per device.
+ */
+ opts->nb_qps = (nb_lcores % enabled_cdev_count) ?
+ (nb_lcores / enabled_cdev_count) + 1 :
+ nb_lcores / enabled_cdev_count;
+
for (i = 0; i < enabled_cdev_count &&
i < RTE_CRYPTO_MAX_DEVS; i++) {
cdev_id = enabled_cdevs[i];
+#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
+ /*
+ * If multi-core scheduler is used, limit the number
+ * of queue pairs to 1, as there is no way to know
+ * how many cores are being used by the PMD, and
+ * how many will be available for the application.
+ */
+ if (!strcmp((const char *)opts->device_type, "crypto_scheduler") &&
+ rte_cryptodev_scheduler_mode_get(cdev_id) ==
+ CDEV_SCHED_MODE_MULTICORE)
+ opts->nb_qps = 1;
+#endif
+
+ struct rte_cryptodev_info cdev_info;
uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
+ rte_cryptodev_info_get(cdev_id, &cdev_info);
+ if (opts->nb_qps > cdev_info.max_nb_queue_pairs) {
+ printf("Number of needed queue pairs is higher "
+ "than the maximum number of queue pairs "
+ "per device.\n");
+ printf("Lower the number of cores or increase "
+ "the number of crypto devices\n");
+ return -EINVAL;
+ }
struct rte_cryptodev_config conf = {
- .nb_queue_pairs = 1,
- .socket_id = socket_id
+ .nb_queue_pairs = opts->nb_qps,
+ .socket_id = socket_id
};
struct rte_cryptodev_qp_conf qp_conf = {
- .nb_descriptors = opts->nb_descriptors
+ .nb_descriptors = opts->nb_descriptors
};
-
if (session_pool_socket[socket_id] == NULL) {
char mp_name[RTE_MEMPOOL_NAMESIZE];
struct rte_mempool *sess_mp;
@@ -165,14 +200,16 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs,
return -EINVAL;
}
- ret = rte_cryptodev_queue_pair_setup(cdev_id, 0,
+ for (j = 0; j < opts->nb_qps; j++) {
+ ret = rte_cryptodev_queue_pair_setup(cdev_id, j,
&qp_conf, socket_id,
session_pool_socket[socket_id]);
if (ret < 0) {
printf("Failed to setup queue pair %u on "
- "cryptodev %u", 0, cdev_id);
+ "cryptodev %u", j, cdev_id);
return -EINVAL;
}
+ }
ret = rte_cryptodev_start(cdev_id);
if (ret < 0) {
@@ -417,11 +454,12 @@ main(int argc, char **argv)
goto err;
}
+ nb_cryptodevs = cperf_initialize_cryptodev(&opts, enabled_cdevs,
+ session_pool_socket);
+
if (!opts.silent)
cperf_options_dump(&opts);
- nb_cryptodevs = cperf_initialize_cryptodev(&opts, enabled_cdevs,
- session_pool_socket);
if (nb_cryptodevs < 1) {
RTE_LOG(ERR, USER1, "Failed to initialise requested crypto "
"device type\n");
@@ -471,23 +509,29 @@ main(int argc, char **argv)
if (!opts.silent)
show_test_vector(t_vec);
+ uint16_t total_nb_qps = nb_cryptodevs * opts.nb_qps;
+
i = 0;
+ uint8_t qp_id = 0, cdev_index = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
- cdev_id = enabled_cdevs[i];
+ cdev_id = enabled_cdevs[cdev_index];
uint8_t socket_id = rte_cryptodev_socket_id(cdev_id);
- ctx[cdev_id] = cperf_testmap[opts.test].constructor(
- session_pool_socket[socket_id], cdev_id, 0,
+ ctx[i] = cperf_testmap[opts.test].constructor(
+ session_pool_socket[socket_id], cdev_id, qp_id,
&opts, t_vec, &op_fns);
- if (ctx[cdev_id] == NULL) {
+ if (ctx[i] == NULL) {
RTE_LOG(ERR, USER1, "Test run constructor failed\n");
goto err;
}
+ qp_id = (qp_id + 1) % opts.nb_qps;
+ if (qp_id == 0)
+ cdev_index++;
i++;
}
@@ -501,19 +545,17 @@ main(int argc, char **argv)
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
- cdev_id = enabled_cdevs[i];
-
rte_eal_remote_launch(cperf_testmap[opts.test].runner,
- ctx[cdev_id], lcore_id);
+ ctx[i], lcore_id);
i++;
}
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
rte_eal_wait_lcore(lcore_id);
i++;
@@ -532,15 +574,17 @@ main(int argc, char **argv)
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
- cdev_id = enabled_cdevs[i];
-
- cperf_testmap[opts.test].destructor(ctx[cdev_id]);
+ cperf_testmap[opts.test].destructor(ctx[i]);
i++;
}
+ for (i = 0; i < nb_cryptodevs &&
+ i < RTE_CRYPTO_MAX_DEVS; i++)
+ rte_cryptodev_stop(enabled_cdevs[i]);
+
free_test_vector(t_vec, &opts);
printf("\n");
@@ -549,16 +593,20 @@ main(int argc, char **argv)
err:
i = 0;
RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (i == nb_cryptodevs)
+ if (i == total_nb_qps)
break;
cdev_id = enabled_cdevs[i];
- if (ctx[cdev_id] && cperf_testmap[opts.test].destructor)
- cperf_testmap[opts.test].destructor(ctx[cdev_id]);
+ if (ctx[i] && cperf_testmap[opts.test].destructor)
+ cperf_testmap[opts.test].destructor(ctx[i]);
i++;
}
+ for (i = 0; i < nb_cryptodevs &&
+ i < RTE_CRYPTO_MAX_DEVS; i++)
+ rte_cryptodev_stop(enabled_cdevs[i]);
+
free_test_vector(t_vec, &opts);
printf("\n");
--
2.9.4
* [dpdk-dev] [PATCH v4 8/8] app/crypto-perf: use single mempool
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (6 preceding siblings ...)
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 7/8] app/crypto-perf: support multiple queue pairs Pablo de Lara
@ 2017-10-04 3:46 ` Pablo de Lara
2017-10-06 11:57 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Akhil Goyal
2017-10-06 12:50 ` De Lara Guarch, Pablo
9 siblings, 0 replies; 49+ messages in thread
From: Pablo de Lara @ 2017-10-04 3:46 UTC (permalink / raw)
To: declan.doherty, akhil.goyal; +Cc: dev, Pablo de Lara
In order to improve memory utilization, a single mempool
is created, containing both the crypto operation and its mbufs
(one mbuf if the operation is in-place, two if out-of-place).
This way, a single object is allocated and freed
per operation, reducing the amount of memory touched in cache,
which improves scalability.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 96 +++++--
app/test-crypto-perf/cperf_ops.h | 2 +-
app/test-crypto-perf/cperf_test_common.c | 322 +++++++++++------------
app/test-crypto-perf/cperf_test_common.h | 16 +-
app/test-crypto-perf/cperf_test_latency.c | 52 ++--
app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 67 ++---
app/test-crypto-perf/cperf_test_throughput.c | 55 ++--
app/test-crypto-perf/cperf_test_verify.c | 60 ++---
8 files changed, 331 insertions(+), 339 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index ad32065..f76dbdd 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -37,7 +37,7 @@
static int
cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -48,10 +48,18 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if in-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
sym_op->cipher.data.length = options->test_buffer_size;
@@ -63,7 +71,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_null_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector __rte_unused,
@@ -74,10 +82,18 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if in-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* auth parameters */
sym_op->auth.data.length = options->test_buffer_size;
@@ -89,7 +105,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -100,10 +116,18 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if in-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -132,7 +156,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
static int
cperf_set_ops_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -143,10 +167,18 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if in-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
if (test_vector->auth_iv.length) {
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
@@ -167,9 +199,9 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -219,7 +251,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -230,10 +262,18 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if in-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -256,9 +296,9 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
@@ -316,7 +356,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
static int
cperf_set_ops_aead(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
@@ -329,10 +369,18 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
for (i = 0; i < nb_ops; i++) {
struct rte_crypto_sym_op *sym_op = ops[i]->sym;
+ ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_crypto_op_attach_sym_session(ops[i], sess);
- sym_op->m_src = bufs_in[i];
- sym_op->m_dst = bufs_out[i];
+ sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ src_buf_offset);
+
+ /* Set dest mbuf to NULL if in-place (dst_buf_offset = 0) */
+ if (dst_buf_offset == 0)
+ sym_op->m_dst = NULL;
+ else
+ sym_op->m_dst = (struct rte_mbuf *)((uint8_t *)ops[i] +
+ dst_buf_offset);
/* AEAD parameters */
sym_op->aead.data.length = options->test_buffer_size;
@@ -354,9 +402,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
struct rte_mbuf *buf, *tbuf;
if (options->out_of_place) {
- buf = bufs_out[i];
+ buf = sym_op->m_dst;
} else {
- tbuf = bufs_in[i];
+ tbuf = sym_op->m_src;
while ((tbuf->next != NULL) &&
(offset >= tbuf->data_len)) {
offset -= tbuf->data_len;
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 1f8fa93..94951cc 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -47,7 +47,7 @@ typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
uint16_t iv_offset);
typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
- struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
+ uint32_t src_buf_offset, uint32_t dst_buf_offset,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index 65f0be7..4c953c6 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -34,75 +34,111 @@
#include "cperf_test_common.h"
-static struct rte_mbuf *
-cperf_mbuf_create(struct rte_mempool *mempool,
- uint32_t segment_sz,
- uint32_t segments_nb,
- const struct cperf_options *options)
+struct obj_params {
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+ uint16_t segment_sz;
+ uint16_t segments_nb;
+};
+
+static void
+fill_single_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz)
{
- struct rte_mbuf *mbuf;
- uint8_t *mbuf_data;
- uint32_t remaining_bytes = options->max_buffer_size;
-
- mbuf = rte_pktmbuf_alloc(mempool);
- if (mbuf == NULL)
- goto error;
+ uint32_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = 1;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ m->next = NULL;
+}
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
+static void
+fill_multi_seg_mbuf(struct rte_mbuf *m, struct rte_mempool *mp,
+ void *obj, uint32_t mbuf_offset, uint16_t segment_sz,
+ uint16_t segments_nb)
+{
+ uint16_t mbuf_hdr_size = sizeof(struct rte_mbuf);
+ uint16_t remaining_segments = segments_nb;
+ struct rte_mbuf *next_mbuf;
+ phys_addr_t next_seg_phys_addr = rte_mempool_virt2phy(mp, obj) +
+ mbuf_offset + mbuf_hdr_size;
+
+ do {
+ /* start of buffer is after mbuf structure and priv data */
+ m->priv_size = 0;
+ m->buf_addr = (char *)m + mbuf_hdr_size;
+ m->buf_physaddr = next_seg_phys_addr;
+ next_seg_phys_addr += mbuf_hdr_size + segment_sz;
+ m->buf_len = segment_sz;
+ m->data_len = segment_sz;
+
+ /* No headroom needed for the buffer */
+ m->data_off = 0;
+
+ /* init some constant fields */
+ m->pool = mp;
+ m->nb_segs = segments_nb;
+ m->port = 0xff;
+ rte_mbuf_refcnt_set(m, 1);
+ next_mbuf = (struct rte_mbuf *) ((uint8_t *) m +
+ mbuf_hdr_size + segment_sz);
+ m->next = next_mbuf;
+ m = next_mbuf;
+ remaining_segments--;
+
+ } while (remaining_segments > 0);
+
+ m->next = NULL;
+}
- if (options->max_buffer_size <= segment_sz)
- remaining_bytes = 0;
+static void
+mempool_obj_init(struct rte_mempool *mp,
+ void *opaque_arg,
+ void *obj,
+ __attribute__((unused)) unsigned int i)
+{
+ struct obj_params *params = opaque_arg;
+ struct rte_crypto_op *op = obj;
+ struct rte_mbuf *m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->src_buf_offset);
+ /* Set crypto operation */
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ /* Set source buffer */
+ op->sym->m_src = m;
+ if (params->segments_nb == 1)
+ fill_single_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz);
else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
-
- while (remaining_bytes) {
- struct rte_mbuf *m;
-
- m = rte_pktmbuf_alloc(mempool);
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
-
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
-
- if (remaining_bytes <= segment_sz)
- remaining_bytes = 0;
- else
- remaining_bytes -= segment_sz;
-
- segments_nb--;
- }
-
- /*
- * If there was not enough room for the digest at the end
- * of the last segment, allocate a new one
- */
- if (segments_nb != 0) {
- struct rte_mbuf *m;
- m = rte_pktmbuf_alloc(mempool);
-
- if (m == NULL)
- goto error;
-
- rte_pktmbuf_chain(mbuf, m);
- mbuf_data = (uint8_t *)rte_pktmbuf_append(mbuf, segment_sz);
- if (mbuf_data == NULL)
- goto error;
- }
-
- return mbuf;
-error:
- if (mbuf != NULL)
- rte_pktmbuf_free(mbuf);
-
- return NULL;
+ fill_multi_seg_mbuf(m, mp, obj, params->src_buf_offset,
+ params->segment_sz, params->segments_nb);
+
+
+ /* Set destination buffer */
+ if (params->dst_buf_offset) {
+ m = (struct rte_mbuf *) ((uint8_t *) obj +
+ params->dst_buf_offset);
+ fill_single_seg_mbuf(m, mp, obj, params->dst_buf_offset,
+ params->segment_sz);
+ op->sym->m_dst = m;
+ } else
+ op->sym->m_dst = NULL;
}
int
@@ -110,120 +146,80 @@ cperf_alloc_common_memory(const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
uint8_t dev_id, uint16_t qp_id,
size_t extra_op_priv_size,
- struct rte_mempool **pkt_mbuf_pool_in,
- struct rte_mempool **pkt_mbuf_pool_out,
- struct rte_mbuf ***mbufs_in,
- struct rte_mbuf ***mbufs_out,
- struct rte_mempool **crypto_op_pool)
+ uint32_t *src_buf_offset,
+ uint32_t *dst_buf_offset,
+ struct rte_mempool **pool)
{
- unsigned int mbuf_idx = 0;
char pool_name[32] = "";
-
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_in_cdev_%u_qp_%u",
- dev_id, qp_id);
-
+ int ret;
+
+ /* Calculate the object size */
+ uint16_t crypto_op_size = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+ uint16_t crypto_op_private_size = extra_op_priv_size +
+ test_vector->cipher_iv.length +
+ test_vector->auth_iv.length +
+ options->aead_aad_sz;
+ uint16_t crypto_op_total_size = crypto_op_size +
+ crypto_op_private_size;
+ uint16_t crypto_op_total_size_padded =
+ RTE_CACHE_LINE_ROUNDUP(crypto_op_total_size);
+ uint32_t mbuf_size = sizeof(struct rte_mbuf) + options->segment_sz;
uint32_t max_size = options->max_buffer_size + options->digest_sz;
uint16_t segments_nb = (max_size % options->segment_sz) ?
(max_size / options->segment_sz) + 1 :
max_size / options->segment_sz;
+ uint32_t obj_size = crypto_op_total_size_padded +
+ (mbuf_size * segments_nb);
- *pkt_mbuf_pool_in = rte_pktmbuf_pool_create(pool_name,
- options->pool_sz * segments_nb, 0, 0,
- RTE_PKTMBUF_HEADROOM + options->segment_sz,
- rte_socket_id());
-
- if (*pkt_mbuf_pool_in == NULL)
- return -1;
+ snprintf(pool_name, sizeof(pool_name), "pool_cdev_%u_qp_%u",
+ dev_id, qp_id);
- /* Generate mbufs_in with plaintext populated for test */
- *mbufs_in = (struct rte_mbuf **)rte_malloc(NULL,
- (sizeof(struct rte_mbuf *) * options->pool_sz), 0);
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- (*mbufs_in)[mbuf_idx] = cperf_mbuf_create(
- *pkt_mbuf_pool_in,
- options->segment_sz,
- segments_nb,
- options);
- if ((*mbufs_in)[mbuf_idx] == NULL)
- return -1;
+ *src_buf_offset = crypto_op_total_size_padded;
+
+ struct obj_params params = {
+ .segment_sz = options->segment_sz,
+ .segments_nb = segments_nb,
+ .src_buf_offset = crypto_op_total_size_padded,
+ .dst_buf_offset = 0
+ };
+
+ if (options->out_of_place) {
+ *dst_buf_offset = *src_buf_offset +
+ (mbuf_size * segments_nb);
+ params.dst_buf_offset = *dst_buf_offset;
+ /* Destination buffer will be one segment only */
+ obj_size += max_size;
}
- *mbufs_out = (struct rte_mbuf **)rte_zmalloc(NULL,
- (sizeof(struct rte_mbuf *) *
- options->pool_sz), 0);
-
- if (options->out_of_place == 1) {
- snprintf(pool_name, sizeof(pool_name), "cperf_pool_out_cdev_%u_qp_%u",
- dev_id, qp_id);
-
- *pkt_mbuf_pool_out = rte_pktmbuf_pool_create(
- pool_name, options->pool_sz, 0, 0,
- RTE_PKTMBUF_HEADROOM + max_size,
- rte_socket_id());
-
- if (*pkt_mbuf_pool_out == NULL)
- return -1;
-
- for (mbuf_idx = 0; mbuf_idx < options->pool_sz; mbuf_idx++) {
- (*mbufs_out)[mbuf_idx] = cperf_mbuf_create(
- *pkt_mbuf_pool_out, max_size,
- 1, options);
- if ((*mbufs_out)[mbuf_idx] == NULL)
- return -1;
- }
+ *pool = rte_mempool_create_empty(pool_name,
+ options->pool_sz, obj_size, 512, 0,
+ rte_socket_id(), 0);
+ if (*pool == NULL) {
+ RTE_LOG(ERR, USER1,
+ "Cannot allocate mempool for device %u\n",
+ dev_id);
+ return -1;
}
- snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%u_qp_%u",
- dev_id, qp_id);
-
- uint16_t priv_size = RTE_ALIGN_CEIL(test_vector->cipher_iv.length +
- test_vector->auth_iv.length + test_vector->aead_iv.length +
- extra_op_priv_size, 16) +
- RTE_ALIGN_CEIL(options->aead_aad_sz, 16);
-
- *crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
- 512, priv_size, rte_socket_id());
- if (*crypto_op_pool == NULL)
+ ret = rte_mempool_set_ops_byname(*pool,
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+ if (ret != 0) {
+ RTE_LOG(ERR, USER1,
+ "Error setting mempool handler for device %u\n",
+ dev_id);
return -1;
-
- return 0;
-}
-
-void
-cperf_free_common_memory(const struct cperf_options *options,
- struct rte_mempool *pkt_mbuf_pool_in,
- struct rte_mempool *pkt_mbuf_pool_out,
- struct rte_mbuf **mbufs_in,
- struct rte_mbuf **mbufs_out,
- struct rte_mempool *crypto_op_pool)
-{
- uint32_t i = 0;
-
- if (mbufs_in) {
- while (mbufs_in[i] != NULL &&
- i < options->pool_sz)
- rte_pktmbuf_free(mbufs_in[i++]);
-
- rte_free(mbufs_in);
}
- if (mbufs_out) {
- i = 0;
- while (mbufs_out[i] != NULL
- && i < options->pool_sz)
- rte_pktmbuf_free(mbufs_out[i++]);
-
- rte_free(mbufs_out);
+ ret = rte_mempool_populate_default(*pool);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1,
+ "Error populating mempool for device %u\n",
+ dev_id);
+ return -1;
}
- if (pkt_mbuf_pool_in)
- rte_mempool_free(pkt_mbuf_pool_in);
+ rte_mempool_obj_iter(*pool, mempool_obj_init, (void *)&params);
- if (pkt_mbuf_pool_out)
- rte_mempool_free(pkt_mbuf_pool_out);
-
- if (crypto_op_pool)
- rte_mempool_free(crypto_op_pool);
+ return 0;
}
diff --git a/app/test-crypto-perf/cperf_test_common.h b/app/test-crypto-perf/cperf_test_common.h
index ad29431..4cee785 100644
--- a/app/test-crypto-perf/cperf_test_common.h
+++ b/app/test-crypto-perf/cperf_test_common.h
@@ -45,18 +45,8 @@ cperf_alloc_common_memory(const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
uint8_t dev_id, uint16_t qp_id,
size_t extra_op_priv_size,
- struct rte_mempool **pkt_mbuf_pool_in,
- struct rte_mempool **pkt_mbuf_pool_out,
- struct rte_mbuf ***mbufs_in,
- struct rte_mbuf ***mbufs_out,
- struct rte_mempool **crypto_op_pool);
-
-void
-cperf_free_common_memory(const struct cperf_options *options,
- struct rte_mempool *pkt_mbuf_pool_in,
- struct rte_mempool *pkt_mbuf_pool_out,
- struct rte_mbuf **mbufs_in,
- struct rte_mbuf **mbufs_out,
- struct rte_mempool *crypto_op_pool);
+ uint32_t *src_buf_offset,
+ uint32_t *dst_buf_offset,
+ struct rte_mempool **pool);
#endif /* _CPERF_TEST_COMMON_H_ */
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 313665e..ca2a4ba 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -50,17 +50,15 @@ struct cperf_latency_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
struct cperf_op_result *res;
@@ -82,11 +80,8 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx)
rte_cryptodev_sym_session_free(ctx->sess);
}
- cperf_free_common_memory(ctx->options,
- ctx->pkt_mbuf_pool_in,
- ctx->pkt_mbuf_pool_out,
- ctx->mbufs_in, ctx->mbufs_out,
- ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx->res);
rte_free(ctx);
@@ -126,9 +121,8 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id,
extra_op_priv_size,
- &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
- &ctx->mbufs_in, &ctx->mbufs_out,
- &ctx->crypto_op_pool) < 0)
+ &ctx->src_buf_offset, &ctx->dst_buf_offset,
+ &ctx->pool) < 0)
goto err;
ctx->res = rte_malloc(NULL, sizeof(struct cperf_op_result) *
@@ -204,7 +198,7 @@ cperf_latency_test_runner(void *arg)
while (test_burst_size <= ctx->options->max_burst_size) {
uint64_t ops_enqd = 0, ops_deqd = 0;
- uint64_t m_idx = 0, b_idx = 0;
+ uint64_t b_idx = 0;
uint64_t tsc_val, tsc_end, tsc_start;
uint64_t tsc_max = 0, tsc_min = ~0UL, tsc_tot = 0, tsc_idx = 0;
@@ -219,11 +213,9 @@ cperf_latency_test_runner(void *arg)
ctx->options->total_ops -
enqd_tot;
- /* Allocate crypto ops from pool */
- if (burst_size != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ burst_size) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the the crypto operation pool.\n"
@@ -233,8 +225,8 @@ cperf_latency_test_runner(void *arg)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
burst_size, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -263,7 +255,7 @@ cperf_latency_test_runner(void *arg)
/* Free memory for not enqueued operations */
if (ops_enqd != burst_size)
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)&ops[ops_enqd],
burst_size - ops_enqd);
@@ -279,16 +271,11 @@ cperf_latency_test_runner(void *arg)
}
if (likely(ops_deqd)) {
- /*
- * free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
+ /* Free crypto ops so they can be reused. */
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
@@ -300,9 +287,6 @@ cperf_latency_test_runner(void *arg)
enqd_max = max(ops_enqd, enqd_max);
enqd_min = min(ops_enqd, enqd_min);
- m_idx += ops_enqd;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
b_idx++;
}
@@ -321,7 +305,7 @@ cperf_latency_test_runner(void *arg)
for (i = 0; i < ops_deqd; i++)
store_timestamp(ops_processed[i], tsc_end);
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
deqd_tot += ops_deqd;
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 13a270f..9b41724 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -51,12 +51,7 @@ struct cperf_pmd_cyclecount_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_crypto_op **ops;
struct rte_crypto_op **ops_processed;
@@ -64,6 +59,9 @@ struct cperf_pmd_cyclecount_ctx {
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
@@ -95,11 +93,9 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
rte_cryptodev_sym_session_free(ctx->sess);
}
- cperf_free_common_memory(ctx->options,
- ctx->pkt_mbuf_pool_in,
- ctx->pkt_mbuf_pool_out,
- ctx->mbufs_in, ctx->mbufs_out,
- ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
+
if (ctx->ops)
rte_free(ctx->ops);
@@ -144,9 +140,8 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
goto err;
if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id, 0,
- &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
- &ctx->mbufs_in, &ctx->mbufs_out,
- &ctx->crypto_op_pool) < 0)
+ &ctx->src_buf_offset, &ctx->dst_buf_offset,
+ &ctx->pool) < 0)
goto err;
ctx->ops = rte_malloc("ops", alloc_sz, 0);
@@ -181,16 +176,22 @@ pmd_cyclecount_bench_ops(struct pmd_cyclecount_state *state, uint32_t cur_op,
test_burst_size);
struct rte_crypto_op **ops = &state->ctx->ops[cur_iter_op];
- if (burst_size != rte_crypto_op_bulk_alloc(
- state->ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size))
- return -1;
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(state->ctx->pool, (void **)ops,
+ burst_size) != 0) {
+ RTE_LOG(ERR, USER1,
+ "Failed to allocate more crypto operations "
+ "from the crypto operation pool.\n"
+ "Consider increasing the pool size "
+ "with --pool-sz\n");
+ return -1;
+ }
/* Setup crypto op, attach mbuf etc */
(state->ctx->populate_ops)(ops,
- &state->ctx->mbufs_in[cur_iter_op],
- &state->ctx->mbufs_out[cur_iter_op], burst_size,
+ state->ctx->src_buf_offset,
+ state->ctx->dst_buf_offset,
+ burst_size,
state->ctx->sess, state->opts,
state->ctx->test_vector, iv_offset);
@@ -204,7 +205,7 @@ pmd_cyclecount_bench_ops(struct pmd_cyclecount_state *state, uint32_t cur_op,
}
}
#endif /* CPERF_LINEARIZATION_ENABLE */
- rte_mempool_put_bulk(state->ctx->crypto_op_pool, (void **)ops,
+ rte_mempool_put_bulk(state->ctx->pool, (void **)ops,
burst_size);
}
@@ -224,16 +225,22 @@ pmd_cyclecount_build_ops(struct pmd_cyclecount_state *state,
iter_ops_needed - cur_iter_op, test_burst_size);
struct rte_crypto_op **ops = &state->ctx->ops[cur_iter_op];
- if (burst_size != rte_crypto_op_bulk_alloc(
- state->ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, burst_size))
- return -1;
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(state->ctx->pool, (void **)ops,
+ burst_size) != 0) {
+ RTE_LOG(ERR, USER1,
+ "Failed to allocate more crypto operations "
+ "from the crypto operation pool.\n"
+ "Consider increasing the pool size "
+ "with --pool-sz\n");
+ return -1;
+ }
/* Setup crypto op, attach mbuf etc */
(state->ctx->populate_ops)(ops,
- &state->ctx->mbufs_in[cur_iter_op],
- &state->ctx->mbufs_out[cur_iter_op], burst_size,
+ state->ctx->src_buf_offset,
+ state->ctx->dst_buf_offset,
+ burst_size,
state->ctx->sess, state->opts,
state->ctx->test_vector, iv_offset);
}
@@ -382,7 +389,7 @@ pmd_cyclecount_bench_burst_sz(
* we may not have processed all ops that we allocated, so
* free everything we've allocated.
*/
- rte_mempool_put_bulk(state->ctx->crypto_op_pool,
+ rte_mempool_put_bulk(state->ctx->pool,
(void **)state->ctx->ops, iter_ops_allocd);
}
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index bc7d889..b84dc63 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -44,17 +44,15 @@ struct cperf_throughput_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
@@ -68,11 +66,8 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
rte_cryptodev_sym_session_free(ctx->sess);
}
- cperf_free_common_memory(ctx->options,
- ctx->pkt_mbuf_pool_in,
- ctx->pkt_mbuf_pool_out,
- ctx->mbufs_in, ctx->mbufs_out,
- ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
@@ -108,9 +103,8 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
goto err;
if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id, 0,
- &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
- &ctx->mbufs_in, &ctx->mbufs_out,
- &ctx->crypto_op_pool) < 0)
+ &ctx->src_buf_offset, &ctx->dst_buf_offset,
+ &ctx->pool) < 0)
goto err;
return ctx;
@@ -167,7 +161,7 @@ cperf_throughput_test_runner(void *test_ctx)
uint64_t ops_enqd = 0, ops_enqd_total = 0, ops_enqd_failed = 0;
uint64_t ops_deqd = 0, ops_deqd_total = 0, ops_deqd_failed = 0;
- uint64_t m_idx = 0, tsc_start, tsc_end, tsc_duration;
+ uint64_t tsc_start, tsc_end, tsc_duration;
uint16_t ops_unused = 0;
@@ -183,11 +177,9 @@ cperf_throughput_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -197,10 +189,11 @@ cperf_throughput_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
- ops_needed, ctx->sess, ctx->options,
- ctx->test_vector, iv_offset);
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
+ ops_needed, ctx->sess,
+ ctx->options, ctx->test_vector,
+ iv_offset);
/**
* When ops_needed is smaller than ops_enqd, the
@@ -245,12 +238,8 @@ cperf_throughput_test_runner(void *test_ctx)
ops_processed, test_burst_size);
if (likely(ops_deqd)) {
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ /* Free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
@@ -263,9 +252,6 @@ cperf_throughput_test_runner(void *test_ctx)
ops_deqd_failed++;
}
- m_idx += ops_needed;
- m_idx = m_idx + test_burst_size > ctx->options->pool_sz ?
- 0 : m_idx;
}
/* Dequeue any operations still in the crypto device */
@@ -280,9 +266,8 @@ cperf_throughput_test_runner(void *test_ctx)
if (ops_deqd == 0)
ops_deqd_failed++;
else {
- rte_mempool_put_bulk(ctx->crypto_op_pool,
+ rte_mempool_put_bulk(ctx->pool,
(void **)ops_processed, ops_deqd);
-
ops_deqd_total += ops_deqd;
}
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index b122022..f4c9357 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -44,17 +44,15 @@ struct cperf_verify_ctx {
uint16_t qp_id;
uint8_t lcore_id;
- struct rte_mempool *pkt_mbuf_pool_in;
- struct rte_mempool *pkt_mbuf_pool_out;
- struct rte_mbuf **mbufs_in;
- struct rte_mbuf **mbufs_out;
-
- struct rte_mempool *crypto_op_pool;
+ struct rte_mempool *pool;
struct rte_cryptodev_sym_session *sess;
cperf_populate_ops_t populate_ops;
+ uint32_t src_buf_offset;
+ uint32_t dst_buf_offset;
+
const struct cperf_options *options;
const struct cperf_test_vector *test_vector;
};
@@ -72,11 +70,8 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx)
rte_cryptodev_sym_session_free(ctx->sess);
}
- cperf_free_common_memory(ctx->options,
- ctx->pkt_mbuf_pool_in,
- ctx->pkt_mbuf_pool_out,
- ctx->mbufs_in, ctx->mbufs_out,
- ctx->crypto_op_pool);
+ if (ctx->pool)
+ rte_mempool_free(ctx->pool);
rte_free(ctx);
}
@@ -102,7 +97,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
ctx->options = options;
ctx->test_vector = test_vector;
- /* IV goes at the end of the cryptop operation */
+ /* IV goes at the end of the crypto operation */
uint16_t iv_offset = sizeof(struct rte_crypto_op) +
sizeof(struct rte_crypto_sym_op);
@@ -112,9 +107,8 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
goto err;
if (cperf_alloc_common_memory(options, test_vector, dev_id, qp_id, 0,
- &ctx->pkt_mbuf_pool_in, &ctx->pkt_mbuf_pool_out,
- &ctx->mbufs_in, &ctx->mbufs_out,
- &ctx->crypto_op_pool) < 0)
+ &ctx->src_buf_offset, &ctx->dst_buf_offset,
+ &ctx->pool) < 0)
goto err;
return ctx;
@@ -263,7 +257,7 @@ cperf_verify_test_runner(void *test_ctx)
static int only_once;
- uint64_t i, m_idx = 0;
+ uint64_t i;
uint16_t ops_unused = 0;
struct rte_crypto_op *ops[ctx->options->max_burst_size];
@@ -303,11 +297,9 @@ cperf_verify_test_runner(void *test_ctx)
uint16_t ops_needed = burst_size - ops_unused;
- /* Allocate crypto ops from pool */
- if (ops_needed != rte_crypto_op_bulk_alloc(
- ctx->crypto_op_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- ops, ops_needed)) {
+ /* Allocate objects containing crypto operations and mbufs */
+ if (rte_mempool_get_bulk(ctx->pool, (void **)ops,
+ ops_needed) != 0) {
RTE_LOG(ERR, USER1,
"Failed to allocate more crypto operations "
"from the crypto operation pool.\n"
@@ -317,8 +309,8 @@ cperf_verify_test_runner(void *test_ctx)
}
/* Setup crypto op, attach mbuf etc */
- (ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
- &ctx->mbufs_out[m_idx],
+ (ctx->populate_ops)(ops, ctx->src_buf_offset,
+ ctx->dst_buf_offset,
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset);
@@ -358,10 +350,6 @@ cperf_verify_test_runner(void *test_ctx)
ops_deqd = rte_cryptodev_dequeue_burst(ctx->dev_id, ctx->qp_id,
ops_processed, ctx->options->max_burst_size);
- m_idx += ops_needed;
- if (m_idx + ctx->options->max_burst_size > ctx->options->pool_sz)
- m_idx = 0;
-
if (ops_deqd == 0) {
/**
* Count dequeue polls which didn't return any
@@ -376,13 +364,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* Free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
@@ -404,13 +389,10 @@ cperf_verify_test_runner(void *test_ctx)
if (cperf_verify_op(ops_processed[i], ctx->options,
ctx->test_vector))
ops_failed++;
- /* free crypto ops so they can be reused. We don't free
- * the mbufs here as we don't want to reuse them as
- * the crypto operation will change the data and cause
- * failures.
- */
- rte_crypto_op_free(ops_processed[i]);
}
+ /* Free crypto ops so they can be reused. */
+ rte_mempool_put_bulk(ctx->pool,
+ (void **)ops_processed, ops_deqd);
ops_deqd_total += ops_deqd;
}
--
2.9.4
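To make the single-mempool layout concrete, here is a rough sketch of how one pool object is laid out under this series. The sizes below are assumed placeholders (the real code uses sizeof(struct rte_crypto_op), sizeof(struct rte_crypto_sym_op), sizeof(struct rte_mbuf) and the configured segment size), and the helper names are made up for illustration; the offsets mirror what the patches store in ctx->src_buf_offset and ctx->dst_buf_offset.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed stand-ins for the DPDK struct sizes; not the real values. */
#define CRYPTO_OP_SZ   64   /* assumed sizeof(struct rte_crypto_op) */
#define SYM_OP_SZ      64   /* assumed sizeof(struct rte_crypto_sym_op) */
#define IV_LEN         16   /* configured cipher IV length */
#define MBUF_SZ        128  /* assumed sizeof(struct rte_mbuf) */
#define SEGMENT_SZ     2048 /* --segment-sz */

/* One pool object packs: crypto op | sym op | IV | src mbuf (+data) |
 * dst mbuf (+data, only when out-of-place). The IV sits at the end of
 * the crypto operation, as the patch comments note. */
static uint32_t src_buf_offset(void)
{
	return CRYPTO_OP_SZ + SYM_OP_SZ + IV_LEN;
}

static uint32_t dst_buf_offset(int out_of_place)
{
	if (!out_of_place)
		return 0; /* in-place: no second mbuf in the object */
	return src_buf_offset() + MBUF_SZ + SEGMENT_SZ;
}

static uint32_t obj_total_size(int out_of_place)
{
	uint32_t sz = src_buf_offset() + MBUF_SZ + SEGMENT_SZ;
	if (out_of_place)
		sz += MBUF_SZ + SEGMENT_SZ;
	return sz;
}
```

Because one rte_mempool_get_bulk() now returns the op, the IV area and the mbuf(s) in a single object, a burst touches one pool and one cache line stream instead of two pools plus a static mbuf array.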
* Re: [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool
2017-09-26 9:21 ` Akhil Goyal
@ 2017-10-04 7:47 ` De Lara Guarch, Pablo
0 siblings, 0 replies; 49+ messages in thread
From: De Lara Guarch, Pablo @ 2017-10-04 7:47 UTC (permalink / raw)
To: Akhil Goyal, Doherty, Declan; +Cc: dev
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Tuesday, September 26, 2017 10:21 AM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v3 7/7] app/crypto-perf: use single mempool
>
> On 9/22/2017 1:25 PM, Pablo de Lara wrote:
> > In order to improve memory utilization, a single mempool is created,
> > containing the crypto operation and mbufs (one if operation is
> > in-place, two if out-of-place).
> > This way, a single object is allocated and freed per operation,
> > reducing the amount of memory in cache, which improves scalability.
> >
> > Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> > ---
> > app/test-crypto-perf/cperf_ops.c | 96 ++++--
> > app/test-crypto-perf/cperf_ops.h | 2 +-
> > app/test-crypto-perf/cperf_test_latency.c | 361 +++++++++++--------
> ---
> > app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 364 +++++++++++-
> ----------
> > app/test-crypto-perf/cperf_test_throughput.c | 358 +++++++++++-----
> ------
> > app/test-crypto-perf/cperf_test_verify.c | 367 +++++++++++----------
> --
> > 6 files changed, 793 insertions(+), 755 deletions(-)
> >
>
> The patch set looks good to me. Except for one comment in the 6th patch of
> the series and one comment as below.
>
> Is it possible to move the common code to a single place for all the latency,
> cycle_count, throughput, verify cases.
>
> I can see a lot of duplicate code in these files.
Good point. I will send a v4 with an extra patch that moves this common code to another file
(at the start of the patchset, so it is easier to review the rest of the changes).
Thanks,
Pablo
>
> -Akhil
* Re: [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs
2017-09-26 8:42 ` Akhil Goyal
@ 2017-10-04 10:25 ` De Lara Guarch, Pablo
0 siblings, 0 replies; 49+ messages in thread
From: De Lara Guarch, Pablo @ 2017-10-04 10:25 UTC (permalink / raw)
To: Akhil Goyal, Doherty, Declan; +Cc: dev
Hi Akhil,
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Tuesday, September 26, 2017 9:42 AM
> To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs
>
> Hi Pablo,
> On 9/22/2017 1:25 PM, Pablo de Lara wrote:
> > Add parameter "qps" in crypto performance app, to create multiple
> > queue pairs per device.
> >
> > This new parameter is useful to have multiple logical cores using a
> > single crypto device, without needing to initialize a crypto device
> > per core.
> >
> > Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> > ---
> > app/test-crypto-perf/cperf_options.h | 2 +
> > app/test-crypto-perf/cperf_options_parsing.c | 22 ++++++++++
> > app/test-crypto-perf/cperf_test_latency.c | 14 +++---
> > app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 7 +--
> > app/test-crypto-perf/cperf_test_throughput.c | 14 +++---
> > app/test-crypto-perf/cperf_test_verify.c | 14 +++---
> > app/test-crypto-perf/main.c | 56 ++++++++++++++----------
> > doc/guides/tools/cryptoperf.rst | 4 ++
> > 8 files changed, 84 insertions(+), 49 deletions(-)
> >
> > diff --git a/app/test-crypto-perf/cperf_options.h
> > b/app/test-crypto-perf/cperf_options.h
> > index 6d339f4..468d5e2 100644
> > --- a/app/test-crypto-perf/cperf_options.h
> > +++ b/app/test-crypto-perf/cperf_options.h
> > @@ -15,6 +15,7 @@
> > #define CPERF_DESC_NB ("desc-nb")
> >
> > #define CPERF_DEVTYPE ("devtype")
> > +#define CPERF_QP_NB ("qp-nb")
> > #define CPERF_OPTYPE ("optype")
> > #define CPERF_SESSIONLESS ("sessionless")
> > #define CPERF_OUT_OF_PLACE ("out-of-place")
> > @@ -74,6 +75,7 @@ struct cperf_options {
> > uint32_t segment_sz;
> > uint32_t test_buffer_size;
> > uint32_t nb_descriptors;
> > + uint32_t nb_qps;
> >
> > uint32_t sessionless:1;
> > uint32_t out_of_place:1;
> > diff --git a/app/test-crypto-perf/cperf_options_parsing.c
> > b/app/test-crypto-perf/cperf_options_parsing.c
> > index 89f86a2..441cd61 100644
> > --- a/app/test-crypto-perf/cperf_options_parsing.c
> > +++ b/app/test-crypto-perf/cperf_options_parsing.c
> > @@ -364,6 +364,24 @@ parse_desc_nb(struct cperf_options *opts,
> const char *arg)
> > }
> >
> > static int
> > +parse_qp_nb(struct cperf_options *opts, const char *arg) {
> > + int ret = parse_uint32_t(&opts->nb_qps, arg);
> > +
> > + if (ret) {
> > + RTE_LOG(ERR, USER1, "failed to parse number of queue
> pairs\n");
> > + return -1;
> > + }
> > +
> > + if ((opts->nb_qps == 0) || (opts->nb_qps > 256)) {
> Shouldn't this be a macro for max nb_qps?
>
> Also a generic comment on this patch: why do we need an explicit
> parameter for nb-qps? Can't we do it similarly to ipsec-secgw?
> It takes the devices and maps the queues with core as per the devices'
> capabilities.
I see... that looks like a good idea. I am implementing it, but will do it slightly differently.
Instead of having the number of queue pairs per device equal to the number of logical cores,
I will divide the number of cores by the number of crypto devices.
So, if 4 cores are available and 2 crypto devices are used, 2 queue pairs will be set up.
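In code, the mapping described above would amount to something like this (a sketch of the stated intent, not the committed implementation; the helper name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper illustrating the scheme: distribute the available
 * logical cores evenly across the crypto devices, so each device is
 * configured with nb_lcores / nb_cryptodevs queue pairs. */
static uint32_t qps_per_cryptodev(uint32_t nb_lcores, uint32_t nb_cryptodevs)
{
	if (nb_cryptodevs == 0)
		return 0;
	return nb_lcores / nb_cryptodevs;
}
```

With 4 cores and 2 crypto devices, this yields 2 queue pairs per device, matching the example above.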
Thanks for your review,
Pablo
>
> -Akhil
* Re: [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (7 preceding siblings ...)
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 8/8] app/crypto-perf: use single mempool Pablo de Lara
@ 2017-10-06 11:57 ` Akhil Goyal
2017-10-06 12:50 ` De Lara Guarch, Pablo
9 siblings, 0 replies; 49+ messages in thread
From: Akhil Goyal @ 2017-10-06 11:57 UTC (permalink / raw)
To: Pablo de Lara, declan.doherty; +Cc: dev
On 10/4/2017 9:16 AM, Pablo de Lara wrote:
> This patchset includes some improvements in the Crypto
> performance application, including app fixes and new parameter additions.
>
> The last patch, in particular, introduces performance improvements.
> Currently, crypto operations are allocated in a mempool and mbufs
> in a different one. Then mbufs are extracted to an array,
> which is looped through for all the crypto operations,
> greatly impacting performance, as much more memory is used.
>
> Since crypto operations and mbufs are mapped 1:1, they can share
> the same mempool object (similar to having the mbuf in the
> private data of the crypto operation).
> This improves performance, as it is only required to handle
> a single mempool and the mbufs are obtained from the cache
> of the mempool, and not from a static array.
>
> Changes in v4:
> - Refactored test code, to minimize duplications
> - Removed --qp-nb parameter. Now the number of queue pairs
> per device are calculated from the number of logical cores
> available and the number of crypto devices
>
> Changes in v3:
> - Renamed "number of queue pairs" option from "--qps" to "--qp-nb",
> for more consistency
>
> Changes in v2:
>
> - Added support for multiple queue pairs
> - Mempool for crypto operations and mbufs is now created
> using rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
> rte_mempool_populate_default() and rte_mempool_obj_iter(),
> so mempool handler is set, as per Akhil's request.
>
> Pablo de Lara (8):
> app/crypto-perf: refactor common test code
> app/crypto-perf: set AAD after the crypto operation
> app/crypto-perf: parse AEAD data from vectors
> app/crypto-perf: parse segment size
> app/crypto-perf: overwrite mbuf when verifying
> app/crypto-perf: do not populate the mbufs at init
> app/crypto-perf: support multiple queue pairs
> app/crypto-perf: use single mempool
>
> app/test-crypto-perf/Makefile | 5 +
> app/test-crypto-perf/cperf_ops.c | 136 ++++++++---
> app/test-crypto-perf/cperf_ops.h | 2 +-
> app/test-crypto-perf/cperf_options.h | 5 +-
> app/test-crypto-perf/cperf_options_parsing.c | 47 ++--
> app/test-crypto-perf/cperf_test_common.c | 225 ++++++++++++++++++
> app/test-crypto-perf/cperf_test_common.h | 52 +++++
> app/test-crypto-perf/cperf_test_latency.c | 239 +++----------------
> app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 239 ++++---------------
> app/test-crypto-perf/cperf_test_throughput.c | 237 +++----------------
> app/test-crypto-perf/cperf_test_vector_parsing.c | 55 +++++
> app/test-crypto-perf/cperf_test_verify.c | 278 ++++++-----------------
> app/test-crypto-perf/main.c | 100 +++++---
> doc/guides/tools/cryptoperf.rst | 6 +-
> 14 files changed, 715 insertions(+), 911 deletions(-)
> create mode 100644 app/test-crypto-perf/cperf_test_common.c
> create mode 100644 app/test-crypto-perf/cperf_test_common.h
>
Series Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
* Re: [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
` (8 preceding siblings ...)
2017-10-06 11:57 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Akhil Goyal
@ 2017-10-06 12:50 ` De Lara Guarch, Pablo
9 siblings, 0 replies; 49+ messages in thread
From: De Lara Guarch, Pablo @ 2017-10-06 12:50 UTC (permalink / raw)
To: Doherty, Declan, akhil.goyal; +Cc: dev
> -----Original Message-----
> From: De Lara Guarch, Pablo
> Sent: Wednesday, October 4, 2017 4:46 AM
> To: Doherty, Declan <declan.doherty@intel.com>; akhil.goyal@nxp.com
> Cc: dev@dpdk.org; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: [PATCH v4 0/8] Crypto-perf app improvements
>
> This patchset includes some improvements in the Crypto performance
> application, including app fixes and new parameter additions.
>
> The last patch, in particular, introduces performance improvements.
> Currently, crypto operations are allocated in a mempool and mbufs in a
> different one. Then mbufs are extracted to an array, which is looped
> through for all the crypto operations, greatly impacting performance, as
> much more memory is used.
>
> Since crypto operations and mbufs are mapped 1:1, they can share the same
> mempool object (similar to having the mbuf in the private data of the
> crypto operation).
> This improves performance, as it is only required to handle a single
> mempool and the mbufs are obtained from the cache of the mempool,
> and not from a static array.
>
> Changes in v4:
> - Refactored test code, to minimize duplications
> - Removed --qp-nb parameter. Now the number of queue pairs
> per device are calculated from the number of logical cores
> available and the number of crypto devices
>
> Changes in v3:
> - Renamed "number of queue pairs" option from "--qps" to "--qp-nb",
> for more consistency
>
> Changes in v2:
>
> - Added support for multiple queue pairs
> - Mempool for crypto operations and mbufs is now created
> using rte_mempool_create_empty(), rte_mempool_set_ops_byname(),
> rte_mempool_populate_default() and rte_mempool_obj_iter(),
> so mempool handler is set, as per Akhil's request.
>
> Pablo de Lara (8):
> app/crypto-perf: refactor common test code
> app/crypto-perf: set AAD after the crypto operation
> app/crypto-perf: parse AEAD data from vectors
> app/crypto-perf: parse segment size
> app/crypto-perf: overwrite mbuf when verifying
> app/crypto-perf: do not populate the mbufs at init
> app/crypto-perf: support multiple queue pairs
> app/crypto-perf: use single mempool
>
> app/test-crypto-perf/Makefile | 5 +
> app/test-crypto-perf/cperf_ops.c | 136 ++++++++---
> app/test-crypto-perf/cperf_ops.h | 2 +-
> app/test-crypto-perf/cperf_options.h | 5 +-
> app/test-crypto-perf/cperf_options_parsing.c | 47 ++--
> app/test-crypto-perf/cperf_test_common.c | 225
> ++++++++++++++++++
> app/test-crypto-perf/cperf_test_common.h | 52 +++++
> app/test-crypto-perf/cperf_test_latency.c | 239 +++----------------
> app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 239 ++++---------------
> app/test-crypto-perf/cperf_test_throughput.c | 237 +++----------------
> app/test-crypto-perf/cperf_test_vector_parsing.c | 55 +++++
> app/test-crypto-perf/cperf_test_verify.c | 278 ++++++-----------------
> app/test-crypto-perf/main.c | 100 +++++---
> doc/guides/tools/cryptoperf.rst | 6 +-
> 14 files changed, 715 insertions(+), 911 deletions(-) create mode 100644
> app/test-crypto-perf/cperf_test_common.c
> create mode 100644 app/test-crypto-perf/cperf_test_common.h
>
> --
> 2.9.4
Applied to dpdk-next-crypto.
Pablo
end of thread, other threads:[~2017-10-06 12:52 UTC | newest]
Thread overview: 49+ messages
2017-08-18 8:05 [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 1/6] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 2/6] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 3/6] app/crypto-perf: parse segment size Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 4/6] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 5/6] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
2017-08-18 8:05 ` [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single mempool Pablo de Lara
2017-08-30 8:30 ` Akhil Goyal
[not found] ` <9F7182E3F746AB4EA17801C148F3C60433039119@IRSMSX101.ger.corp.intel.com>
2017-09-11 11:08 ` De Lara Guarch, Pablo
2017-09-11 13:10 ` Shreyansh Jain
2017-09-11 13:56 ` De Lara Guarch, Pablo
2017-09-04 13:08 ` [dpdk-dev] [PATCH 0/6] Crypto-perf app improvements Zhang, Roy Fan
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
2017-09-13 7:20 ` [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 0/7] Crypto-perf app improvements Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 3/7] app/crypto-perf: parse segment size Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
2017-09-26 8:42 ` Akhil Goyal
2017-10-04 10:25 ` De Lara Guarch, Pablo
2017-09-22 7:55 ` [dpdk-dev] [PATCH v3 7/7] app/crypto-perf: use single mempool Pablo de Lara
2017-09-26 9:21 ` Akhil Goyal
2017-10-04 7:47 ` De Lara Guarch, Pablo
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 1/8] app/crypto-perf: refactor common test code Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 2/8] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 3/8] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 4/8] app/crypto-perf: parse segment size Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 5/8] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 6/8] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 7/8] app/crypto-perf: support multiple queue pairs Pablo de Lara
2017-10-04 3:46 ` [dpdk-dev] [PATCH v4 8/8] app/crypto-perf: use single mempool Pablo de Lara
2017-10-06 11:57 ` [dpdk-dev] [PATCH v4 0/8] Crypto-perf app improvements Akhil Goyal
2017-10-06 12:50 ` De Lara Guarch, Pablo
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 0/7] " Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 1/7] app/crypto-perf: set AAD after the crypto operation Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 2/7] app/crypto-perf: parse AEAD data from vectors Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 3/7] app/crypto-perf: parse segment size Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 4/7] app/crypto-perf: overwrite mbuf when verifying Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 5/7] app/crypto-perf: do not populate the mbufs at init Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 6/7] app/crypto-perf: support multiple queue pairs Pablo de Lara
2017-09-13 7:22 ` [dpdk-dev] [PATCH v2 7/7] app/crypto-perf: use single mempool Pablo de Lara