From: "Kusztal, ArkadiuszX" <arkadiuszx.kusztal@intel.com>
To: "Trahe, Fiona" <fiona.trahe@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"Mrozowicz, SlawomirX" <slawomirx.mrozowicz@intel.com>
Cc: "Doherty, Declan" <declan.doherty@intel.com>,
"Griffin, John" <john.griffin@intel.com>,
"De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
"Trahe, Fiona" <fiona.trahe@intel.com>
Subject: Re: [dpdk-dev] [PATCH] app/crypto-perf: add minimise-offload-cost flag
Date: Thu, 11 May 2017 14:22:06 +0000 [thread overview]
Message-ID: <80307F746F1522479831AB1253B7024E758DEE@IRSMSX102.ger.corp.intel.com> (raw)
In-Reply-To: <1494346443-11130-1-git-send-email-fiona.trahe@intel.com>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fiona Trahe
> Sent: Tuesday, May 09, 2017 5:14 PM
> To: dev@dpdk.org; Mrozowicz, SlawomirX
> <slawomirx.mrozowicz@intel.com>
> Cc: Doherty, Declan <declan.doherty@intel.com>; Griffin, John
> <john.griffin@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>
> Subject: [dpdk-dev] [PATCH] app/crypto-perf: add minimise-offload-cost flag
>
> The throughput test enqueues and dequeues bursts of operations
> to the device. For software devices the full burst size will
> usually be successfully en/dequeued; on hardware devices,
> however, the CPU can call the API more frequently than
> necessary, as it has nothing else to do.
> Minimum offload cost is achieved when the specified
> burst_size is en/dequeued. So rather than
> wasting CPU cycles continually retrying, with a
> fraction of the burst being en/dequeued each time,
> fewer CPU cycles are used by backing off until a full
> burst can be enqueued.
>
> This patch adds a --minimise-offload-cost flag.
> When set the test backs off until full bursts are
> en/dequeued and counts the cycles while waiting.
> These cycles represent cycles saved by
> offloading, which in a real application are
> available for other work. Hence these cycles are
> deducted from the total cycle-count to show the
> minimum offload-cost which can be achieved.
>
> Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
> ---
> app/test-crypto-perf/cperf_options.h | 2 +
> app/test-crypto-perf/cperf_options_parsing.c | 12 +++++
> app/test-crypto-perf/cperf_test_throughput.c | 76 ++++++++++++++++++++++------
> 3 files changed, 75 insertions(+), 15 deletions(-)
>
> diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
> index b928c58..48ca1de 100644
> --- a/app/test-crypto-perf/cperf_options.h
> +++ b/app/test-crypto-perf/cperf_options.h
> @@ -31,6 +31,7 @@
> --
> 2.5.0
Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Thread overview: 3+ messages
2017-05-09 16:14 Fiona Trahe
2017-05-11 14:22 ` Kusztal, ArkadiuszX [this message]
2017-07-12 13:02 ` Trahe, Fiona