DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script
@ 2020-12-11 17:31 Ciara Power
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 1/4] test/cryptodev: fix latency test csv output Ciara Power
                   ` (5 more replies)
  0 siblings, 6 replies; 27+ messages in thread
From: Ciara Power @ 2020-12-11 17:31 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, akhil.goyal, Ciara Power

This patchset introduces a Python script to run various crypto performance
test cases and graph the results in a consumable manner. The test suites can
be configured via a JSON file; currently, throughput and latency ptests for
the crypto_qat, crypto_aesni_mb and crypto_aesni_gcm devices are supported.
The final collection of graphs are output in PDF format, with a PDF per
test suite, containing all test case graphs relevant for that suite.

Some cleanup is included for the throughput performance test and latency
performance test csv outputs, to make them easier to work with.
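Once cleaned up, the CSV output can be consumed with standard tooling. A
minimal sketch (the header names follow this patchset; the numeric values
below are illustrative only, not real measurements):

```python
import csv
import io

# Hypothetical sample of the cleaned throughput CSV output; header names
# match this patchset, but the values are made up for illustration.
sample = """#lcore_id,buffer_size(b),burst_size,enqueued,dequeued,failed_enq,failed_deq,ops(millions),throughput(gbps),cycles_per_buf
9,64,32,1000000,1000000,0,0,1.000,4.929,180.000
9,128,32,1000000,1000000,0,0,1.000,8.420,210.000
"""

lines = sample.strip().split("\n")
lines[0] = lines[0].lstrip("#")  # drop the leading comment marker
rows = list(csv.DictReader(io.StringIO("\n".join(lines))))
best = max(float(r["throughput(gbps)"]) for r in rows)
print(best)
```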

Ciara Power (4):
  test/cryptodev: fix latency test csv output
  test/cryptodev: improve csv output for perf tests
  usertools: add script to graph crypto perf results
  maintainers: update crypto perf app maintainers

 MAINTAINERS                                  |   3 +
 app/test-crypto-perf/cperf_test_latency.c    |  13 +-
 app/test-crypto-perf/cperf_test_throughput.c |  12 +-
 doc/guides/tools/cryptoperf.rst              |  93 ++++++
 usertools/dpdk_graph_crypto_perf.py          | 249 +++++++++++++++
 usertools/graph_crypto_perf_config.json      | 309 +++++++++++++++++++
 6 files changed, 665 insertions(+), 14 deletions(-)
 create mode 100755 usertools/dpdk_graph_crypto_perf.py
 create mode 100644 usertools/graph_crypto_perf_config.json

-- 
2.25.1



* [dpdk-dev] [PATCH 1/4] test/cryptodev: fix latency test csv output
  2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
@ 2020-12-11 17:31 ` Ciara Power
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 2/4] test/cryptodev: improve csv output for perf tests Ciara Power
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Ciara Power @ 2020-12-11 17:31 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, Ciara Power, pablo.de.lara.guarch, stable

The csv output for the latency performance test had an extra header,
"Packet Size", which duplicated "Buffer Size" and had no corresponding
value in the output. This header is now removed.

Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 app/test-crypto-perf/cperf_test_latency.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 0e4d0e1538..c2590a4dcf 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -310,7 +310,7 @@ cperf_latency_test_runner(void *arg)
 		if (ctx->options->csv) {
 			if (rte_atomic16_test_and_set(&display_once))
 				printf("\n# lcore, Buffer Size, Burst Size, Pakt Seq #, "
-						"Packet Size, cycles, time (us)");
+						"cycles, time (us)");
 
 			for (i = 0; i < ctx->options->total_ops; i++) {
 
-- 
2.25.1



* [dpdk-dev] [PATCH 2/4] test/cryptodev: improve csv output for perf tests
  2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 1/4] test/cryptodev: fix latency test csv output Ciara Power
@ 2020-12-11 17:31 ` Ciara Power
  2021-01-11 15:43   ` Doherty, Declan
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results Ciara Power
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 27+ messages in thread
From: Ciara Power @ 2020-12-11 17:31 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, akhil.goyal, Ciara Power

The csv outputs for the performance tests were not easily consumed, due
to unnecessary whitespace and capitalisation. The delimiter is changed
from ";" (which was used in some cases) to ",". Some unnecessary values
are also removed from the output.
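To illustrate the effect of the delimiter change (the field layouts below
follow the printf formats in this patch, but the values are made up): the
old ';'-separated rows needed a non-default delimiter argument, while the
new output parses with the csv module's defaults.

```python
import csv

# Old latency row: ';'-separated with extra fields
# (lcore, buffer size, burst size, seq #, cycles, time).
old_row = "9;1024;16;1;2400;1.042"
# New latency row: ','-separated (buffer size, burst size, time).
new_row = "1024,16,1.042"

old_fields = next(csv.reader([old_row], delimiter=";"))
new_fields = next(csv.reader([new_row]))  # csv default delimiter is ','
print(len(old_fields), len(new_fields))
```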

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 app/test-crypto-perf/cperf_test_latency.c    | 13 +++++--------
 app/test-crypto-perf/cperf_test_throughput.c | 12 ++++++------
 2 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index c2590a4dcf..f3c09b8c1c 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -309,18 +309,15 @@ cperf_latency_test_runner(void *arg)
 
 		if (ctx->options->csv) {
 			if (rte_atomic16_test_and_set(&display_once))
-				printf("\n# lcore, Buffer Size, Burst Size, Pakt Seq #, "
-						"cycles, time (us)");
+				printf("\n#buffer_size(b),burst_size,time(us)");
 
 			for (i = 0; i < ctx->options->total_ops; i++) {
 
-				printf("\n%u;%u;%u;%"PRIu64";%"PRIu64";%.3f",
-					ctx->lcore_id, ctx->options->test_buffer_size,
-					test_burst_size, i + 1,
-					ctx->res[i].tsc_end - ctx->res[i].tsc_start,
+				printf("\n%u,%u,%.3f",
+					ctx->options->test_buffer_size,
+					test_burst_size,
 					tunit * (double) (ctx->res[i].tsc_end
-							- ctx->res[i].tsc_start)
-						/ tsc_hz);
+					- ctx->res[i].tsc_start) / tsc_hz);
 
 			}
 		} else {
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index f30f7d5c2c..a841a890b9 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -294,13 +294,13 @@ cperf_throughput_test_runner(void *test_ctx)
 					cycles_per_packet);
 		} else {
 			if (rte_atomic16_test_and_set(&display_once))
-				printf("#lcore id,Buffer Size(B),"
-					"Burst Size,Enqueued,Dequeued,Failed Enq,"
-					"Failed Deq,Ops(Millions),Throughput(Gbps),"
-					"Cycles/Buf\n\n");
+				printf("#lcore_id,buffer_size(b),"
+					"burst_size,enqueued,dequeued,failed_enq,"
+					"failed_deq,ops(millions),throughput(gbps),"
+					"cycles_per_buf\n\n");
 
-			printf("%u;%u;%u;%"PRIu64";%"PRIu64";%"PRIu64";%"PRIu64";"
-					"%.3f;%.3f;%.3f\n",
+			printf("%u,%u,%u,%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64","
+					"%.3f,%.3f,%.3f\n",
 					ctx->lcore_id,
 					ctx->options->test_buffer_size,
 					test_burst_size,
-- 
2.25.1



* [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results
  2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 1/4] test/cryptodev: fix latency test csv output Ciara Power
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 2/4] test/cryptodev: improve csv output for perf tests Ciara Power
@ 2020-12-11 17:31 ` Ciara Power
  2020-12-11 19:35   ` Stephen Hemminger
  2021-01-11 16:03   ` Doherty, Declan
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 4/4] maintainers: update crypto perf app maintainers Ciara Power
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 27+ messages in thread
From: Ciara Power @ 2020-12-11 17:31 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, akhil.goyal, Ciara Power, Thomas Monjalon

The Python script introduced in this patch runs the crypto performance
test application for various test cases and graphs the results.

Test cases are defined in the JSON config file, where the parameters for
each test are specified. Currently there are various test cases for the
crypto_qat, crypto_aesni_mb and crypto_aesni_gcm devices. Throughput and
latency ptests are supported for each.

The results of each test case are graphed and saved in PDFs (one PDF for
each test suite, showing all test case graphs for that suite).
The output graphs include various grouped barcharts for throughput
tests, and histogram and boxplot graphs for latency tests.

Usage:
The script uses the installed app by default (from ninja install).
Alternatively, a path to the app can be passed with
	"-f <rel_path>/<build_dir>/app/dpdk-test-crypto-perf"

All device test suites are run by default.
Alternatively, test suites can be selected by adding arguments:
	"-t all" - to run all test suites
	"-t crypto_qat_latency" - to run the QAT latency test suite only
	"-t crypto_aesni_mb_throughput crypto_aesni_gcm_latency"
		- to run both the AESNI_MB throughput and AESNI_GCM latency
		test suites
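The JSON-config-to-command-line translation the script performs can be
sketched as follows (simplified from the run_test_suite() logic in this
patch; the example config values are illustrative):

```python
def build_params(eal, app):
    """Turn the JSON 'eal' and 'app' config dicts into
    dpdk-test-crypto-perf arguments."""
    params = []
    for key, val in eal.items():
        # Single-letter keys become short options, others long options.
        if len(key) == 1:
            params.append("-" + key + " " + val)
        else:
            params.append("--" + key + "=" + val)
    params.append("--")  # separates EAL args from app args
    for key, val in app.items():
        if isinstance(val, bool):
            # Boolean flags are present-or-absent, with no value.
            if val:
                params.append("--" + key)
        else:
            params.append("--" + key + "=" + val)
    return params


print(build_params({"l": "1,2"}, {"silent": True, "ptest": "latency"}))
```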

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 MAINTAINERS                             |   2 +
 doc/guides/tools/cryptoperf.rst         |  93 +++++++
 usertools/dpdk_graph_crypto_perf.py     | 249 +++++++++++++++++++
 usertools/graph_crypto_perf_config.json | 309 ++++++++++++++++++++++++
 4 files changed, 653 insertions(+)
 create mode 100755 usertools/dpdk_graph_crypto_perf.py
 create mode 100644 usertools/graph_crypto_perf_config.json

diff --git a/MAINTAINERS b/MAINTAINERS
index eafe9f8c46..5e9dc1a1a7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1588,6 +1588,8 @@ M: Declan Doherty <declan.doherty@intel.com>
 T: git://dpdk.org/next/dpdk-next-crypto
 F: app/test-crypto-perf/
 F: doc/guides/tools/cryptoperf.rst
+F: usertools/dpdk_graph_crypto_perf.py
+F: usertools/graph_crypto_perf_config.json
 
 Eventdev test application
 M: Jerin Jacob <jerinj@marvell.com>
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 79359fe894..63d97319a8 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -453,3 +453,96 @@ Test vector file for cipher algorithm aes cbc 256 with authorization sha::
    digest =
    0x1C, 0xB2, 0x3D, 0xD1, 0xF9, 0xC7, 0x6C, 0x49, 0x2E, 0xDA, 0x94, 0x8B, 0xF1, 0xCF, 0x96, 0x43,
    0x67, 0x50, 0x39, 0x76, 0xB5, 0xA1, 0xCE, 0xA1, 0xD7, 0x77, 0x10, 0x07, 0x43, 0x37, 0x05, 0xB4
+
+
+Graph Crypto Perf Results
+-------------------------
+
+The ``dpdk_graph_crypto_perf.py`` usertool is a simple script to automate
+running crypto performance tests and graphing the results.
+The output graphs include various grouped barcharts for throughput
+tests, and histogram and boxplot graphs for latency tests.
+These are output to PDF files, with one PDF per test suite.
+
+
+Test Configuration
+~~~~~~~~~~~~~~~~~~
+
+The test cases run by the script are outlined in the ``graph_crypto_perf_config.json`` file.
+An example of this configuration is shown below for one test suite,
+with the default config for the suite and one test case.
+The test case has additional app config that is combined with
+the default config when the test case is run.
+
+.. code-block:: json
+
+   "crypto_aesni_mb_throughput": {
+       "default": {
+           "eal": {
+               "l": "1,2",
+               "log-level": "1",
+               "vdev": "crypto_aesni_mb"
+           },
+           "app": {
+               "csv-friendly": true,
+               "silent": true,
+               "buffer-sz": "64,128,256,512,768,1024,1408,2048",
+               "burst-sz": "1,4,8,16,32",
+               "ptest": "throughput",
+               "devtype": "crypto_aesni_mb"
+           }
+        },
+       "AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+               "cipher-algo": "aes-cbc",
+               "cipher-key-sz": "16",
+               "auth-algo": "sha1-hmac",
+               "optype": "auth-then-cipher",
+               "cipher-op": "decrypt"
+        }
+   }
+
+Currently, the crypto_qat, crypto_aesni_mb and crypto_aesni_gcm devices
+are supported, for both throughput and latency ptests.
+
+
+Usage
+~~~~~
+
+.. code-block:: console
+
+   ./dpdk_graph_crypto_perf.py
+
+The script supports the following command-line options:
+
+* ``-f file_path``
+
+  Provide path to ``dpdk-test-crypto-perf`` application.
+  The script uses the installed app by default.
+
+  .. code-block:: console
+
+     ./dpdk_graph_crypto_perf.py -f <build_dir>/app/dpdk-test-crypto-perf
+
+
+* ``-t test_suite_list``
+
+  Specify test suites to run. All test suites are run by default.
+
+  To run all test suites
+
+  .. code-block:: console
+
+     ./dpdk_graph_crypto_perf.py -t all
+
+  To run crypto_qat latency test suite only
+
+  .. code-block:: console
+
+     ./dpdk_graph_crypto_perf.py -t crypto_qat_latency
+
+  To run both crypto_aesni_mb throughput and crypto_aesni_gcm latency test suites
+
+  .. code-block:: console
+
+     ./dpdk_graph_crypto_perf.py -t crypto_aesni_mb_throughput \
+         crypto_aesni_gcm_latency
diff --git a/usertools/dpdk_graph_crypto_perf.py b/usertools/dpdk_graph_crypto_perf.py
new file mode 100755
index 0000000000..a1361fb625
--- /dev/null
+++ b/usertools/dpdk_graph_crypto_perf.py
@@ -0,0 +1,249 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+"""
+Script to automate running crypto performance tests for a range of test
+cases and devices as configured in the JSON file.
+The results are processed and output into various graphs in PDF files.
+Currently, throughput and latency tests are supported.
+"""
+
+import glob
+import json
+import os
+import shutil
+import subprocess
+from argparse import ArgumentParser
+from datetime import datetime
+import img2pdf
+import pandas as pd
+import plotly.express as px
+
+SCRIPT_PATH = os.path.dirname(__file__) + "/"
+GRAPHS_PATH = SCRIPT_PATH + "graph_crypto_perf_graphs/"
+PDFS_PATH = SCRIPT_PATH + "graph_crypto_perf_pdfs/"
+
+
+class Grapher:
+    """Grapher object containing all graphing functions. """
+    def __init__(self, dev):
+        self.graph_num = 0
+        self.dev = dev
+        self.test = ""
+        self.ptest = ""
+        self.data = pd.DataFrame()
+        if not os.path.exists(GRAPHS_PATH):
+            os.makedirs(GRAPHS_PATH)
+
+    def save_graph(self, fig):
+        """
+        Update figure layout to increase readability, output to JPG file.
+        """
+        fig.update_layout(font_size=30, title_x=0.5, title_font={"size": 30},
+                          margin=dict(t=200, l=150, r=150, b=150))
+        fig.write_image(GRAPHS_PATH + "%s_%d.jpg" % (self.dev,
+                                                     self.graph_num))
+
+    def boxplot_graph(self, x_axis_label):
+        """Plot a boxplot graph for the given parameters."""
+        fig = px.box(self.data, x=x_axis_label,
+                     title="Device: " + self.dev + "<br>" + self.test +
+                     "<br>(Outliers Included)", height=1200, width=2400)
+        self.save_graph(fig)
+        self.graph_num += 1
+
+    def grouped_graph(self, y_axis_label, x_axis_label, color_label):
+        """Plot a grouped barchart using the given parameters."""
+        if (self.data[y_axis_label] == 0).all():
+            return
+        fig = px.bar(self.data, x=x_axis_label, color=color_label,
+                     y=y_axis_label,
+                     title="Device: " + self.dev + "<br>" + self.test + "<br>"
+                     + y_axis_label + " for each " + x_axis_label +
+                     "/" + color_label,
+                     barmode="group",
+                     height=1200,
+                     width=2400)
+        fig.update_xaxes(type='category')
+        self.save_graph(fig)
+        self.graph_num += 1
+
+    def histogram_graph(self, x_axis_label):
+        """Plot a histogram graph using the given parameters."""
+        quart1 = self.data[x_axis_label].quantile(0.25)
+        quart3 = self.data[x_axis_label].quantile(0.75)
+        inter_quart_range = quart3 - quart1
+        dev_data_out = self.data[~((self.data[x_axis_label] <
+                                    (quart1 - 1.5 * inter_quart_range)) |
+                                   (self.data[x_axis_label] >
+                                    (quart3 + 1.5 * inter_quart_range)))]
+        fig = px.histogram(dev_data_out, x=x_axis_label,
+                           title="Device: " + self.dev + "<br>" + self.test +
+                           "<br>(Outliers removed using Interquartile Range)",
+                           height=1200,
+                           width=2400)
+        max_val = dev_data_out[x_axis_label].max()
+        min_val = dev_data_out[x_axis_label].min()
+        fig.update_traces(xbins=dict(
+            start=min_val,
+            end=max_val,
+            size=(max_val - min_val) / 200
+        ))
+        self.save_graph(fig)
+        self.graph_num += 1
+
+
+def cleanup_throughput_datatypes(data):
+    """Cleanup data types of throughput test results dataframe. """
+    data['burst_size'] = data['burst_size'].astype('int')
+    data['buffer_size(b)'] = data['buffer_size(b)'].astype('int')
+    data['burst_size'] = data['burst_size'].astype('category')
+    data['buffer_size(b)'] = data['buffer_size(b)'].astype('category')
+    data['failed_enq'] = data['failed_enq'].astype('int')
+    data['throughput(gbps)'] = data['throughput(gbps)'].astype('float')
+    data['ops(millions)'] = data['ops(millions)'].astype('float')
+    data['cycles_per_buf'] = data['cycles_per_buf'].astype('float')
+    return data
+
+
+def process_test_results(grapher, data):
+    """
+    Process results from the test case,
+    calling graph functions to output graph images.
+    """
+    print("\tProcessing Test Case Results: " + grapher.test)
+    if grapher.ptest == "throughput":
+        grapher.data = cleanup_throughput_datatypes(data)
+        for y_label in ["throughput(gbps)", "ops(millions)",
+                        "cycles_per_buf", "failed_enq"]:
+            grapher.grouped_graph(y_label, "buffer_size(b)",
+                                  "burst_size")
+    elif grapher.ptest == "latency":
+        data['time(us)'] = data['time(us)'].astype('float')
+        grapher.data = data
+        grapher.histogram_graph("time(us)")
+        grapher.boxplot_graph("time(us)")
+    else:
+        print("Invalid ptest")
+        return
+
+
+def create_results_pdf(dev):
+    """Output results graphs to one PDF."""
+    if not os.path.exists(PDFS_PATH):
+        os.makedirs(PDFS_PATH)
+    dev_graphs = sorted(glob.glob(GRAPHS_PATH + "%s_*.jpg" % dev), key=(
+        lambda x: int((x.rsplit('_', 1)[1]).split('.')[0])))
+    if dev_graphs:
+        with open(PDFS_PATH + "/%s_results.pdf" % dev, "wb") as pdf_file:
+            pdf_file.write(img2pdf.convert(dev_graphs))
+
+
+def run_test(test_cmd, test, grapher, timestamp, params):
+    """Run performance test app for the given test case parameters."""
+    print("\n\tRunning Test Case: " + test)
+    try:
+        process_out = subprocess.check_output([test_cmd] + params,
+                                              universal_newlines=True,
+                                              stderr=subprocess.STDOUT)
+        rows = []
+        for line in process_out.split('\n'):
+            if not line:
+                continue
+            if line.startswith('#'):
+                columns = line[1:].split(',')
+            elif line[0].isdigit():
+                rows.append(line.split(','))
+            else:
+                continue
+        data = pd.DataFrame(rows, columns=columns)
+        data['date'] = timestamp
+        grapher.test = test
+        process_test_results(grapher, data)
+    except subprocess.CalledProcessError as err:
+        print("\tCannot run performance test application for: " + str(err))
+        return
+
+
+def run_test_suite(test_cmd, dut, test_cases, timestamp):
+    """Parse test cases for the test suite and run each test."""
+    print("\nRunning Test Suite: " + dut)
+    default_params = []
+    grapher = Grapher(dut)
+    for (key, val) in test_cases['default']['eal'].items():
+        if len(key) == 1:
+            default_params.append("-" + key + " " + val)
+        else:
+            default_params.append("--" + key + "=" + val)
+
+    default_params.append("--")
+    for (key, val) in test_cases['default']['app'].items():
+        if isinstance(val, bool):
+            default_params.append("--" + key if val is True else "")
+        else:
+            default_params.append("--" + key + "=" + val)
+
+    if 'ptest' not in test_cases['default']['app']:
+        print("Test Suite must contain default ptest value, skipping")
+        return
+    grapher.ptest = test_cases['default']['app']['ptest']
+
+    for (test, params) in {k: v for (k, v) in test_cases.items() if
+                           k != "default"}.items():
+        extra_params = []
+        for (key, val) in params.items():
+            extra_params.append("--" + key + "=" + val)
+        run_test(test_cmd, test, grapher, timestamp,
+                 default_params + extra_params)
+
+    create_results_pdf(dut)
+
+
+def parse_args():
+    """Parse command-line arguments passed to script."""
+    parser = ArgumentParser()
+    parser.add_argument('-f', '--file-path',
+                        default=shutil.which('dpdk-test-crypto-perf'),
+                        help="Path for test perf app")
+    parser.add_argument('-t', '--test-suites', nargs='+', default=["all"],
+                        help="List of device test suites to run")
+    args = parser.parse_args()
+    return args.file_path, args.test_suites
+
+
+def main():
+    """
+    Load JSON config and call relevant functions to run chosen test suites.
+    """
+    test_cmd, test_suites = parse_args()
+    if not os.path.isfile(test_cmd):
+        print("Invalid filepath!")
+        return
+    try:
+        with open(SCRIPT_PATH + 'graph_crypto_perf_config.json') as conf:
+            test_suite_options = json.load(conf)
+    except json.decoder.JSONDecodeError as err:
+        print("Error loading JSON config: " + err.msg)
+        return
+    timestamp = pd.Timestamp(datetime.now())
+
+    if test_suites != ["all"]:
+        dev_list = []
+        for (dut, test_cases) in {k: v for (k, v) in test_suite_options.items()
+                                  if k in test_suites}.items():
+            dev_list.append(dut)
+            run_test_suite(test_cmd, dut, test_cases, timestamp)
+        if not dev_list:
+            print("No valid device test suites chosen!")
+            return
+    else:
+        for (dut, test_cases) in test_suite_options.items():
+            run_test_suite(test_cmd, dut, test_cases, timestamp)
+
+    if os.path.exists(GRAPHS_PATH):
+        shutil.rmtree(GRAPHS_PATH)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/usertools/graph_crypto_perf_config.json b/usertools/graph_crypto_perf_config.json
new file mode 100644
index 0000000000..004ec3e84e
--- /dev/null
+++ b/usertools/graph_crypto_perf_config.json
@@ -0,0 +1,309 @@
+{
+	"crypto_aesni_mb_throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"log-level": "1",
+				"vdev": "crypto_aesni_mb"
+			},
+			"app": {
+				"csv-friendly": true,
+				"silent": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"ptest": "throughput",
+				"devtype": "crypto_aesni_mb"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"auth-op": "generate",
+			"auth-key-sz": "64",
+			"digest-sz": "20",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt"
+		},
+		"AES-GMAC 128 auth-only generate": {
+			"auth-algo": "aes-gmac",
+			"auth-key-sz": "16",
+			"auth-iv-sz": "12",
+			"auth-op": "generate",
+			"digest-sz": "16",
+			"optype": "auth-only",
+			"total-ops": "10000000"
+		}
+	},
+	"crypto_aesni_mb_latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"log-level": "1",
+				"vdev": "crypto_aesni_mb"
+			},
+			"app": {
+				"csv-friendly": true,
+				"silent": true,
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"ptest": "latency",
+				"devtype": "crypto_aesni_mb"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		}
+	},
+	"crypto_aesni_gcm_throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"log-level": "1",
+				"vdev": "crypto_aesni_gcm"
+			},
+			"app": {
+				"csv-friendly": true,
+				"silent": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"ptest": "throughput",
+				"devtype": "crypto_aesni_gcm"
+			}
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "16",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GMAC 128 auth-only generate": {
+			"auth-algo": "aes-gmac",
+			"auth-key-sz": "16",
+			"auth-iv-sz": "12",
+			"auth-op": "generate",
+			"digest-sz": "16",
+			"optype": "auth-only",
+			"total-ops": "10000000"
+		}
+	},
+	"crypto_aesni_gcm_latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"log-level": "1",
+				"vdev": "crypto_aesni_gcm"
+			},
+			"app": {
+				"csv-friendly": true,
+				"silent": true,
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"ptest": "latency",
+				"devtype": "crypto_aesni_gcm"
+			}
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "16",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead"
+		},
+		"AES-GCM-256 aead-op encrypt latency": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead"
+		}
+	},
+	"crypto_qat_throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"log-level": "1"
+			},
+			"app": {
+				"csv-friendly": true,
+				"silent": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"devtype": "crypto_qat",
+				"ptest": "throughput"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt"
+		}
+	},
+	"crypto_qat_latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"log-level": "1"
+			},
+			"app": {
+				"csv-friendly": true,
+				"silent": true,
+				"ptest": "latency",
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"devtype": "crypto_qat"
+			}
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "encrypt"
+		}
+	}
+}
\ No newline at end of file
-- 
2.25.1



* [dpdk-dev] [PATCH 4/4] maintainers: update crypto perf app maintainers
  2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
                   ` (2 preceding siblings ...)
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results Ciara Power
@ 2020-12-11 17:31 ` Ciara Power
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
  5 siblings, 0 replies; 27+ messages in thread
From: Ciara Power @ 2020-12-11 17:31 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, akhil.goyal, Ciara Power, Thomas Monjalon

This patch adds a maintainer for the crypto perf test application
to cover the new perf test graphing script.

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5e9dc1a1a7..9dd3cb9aef 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1585,6 +1585,7 @@ F: doc/guides/tools/comp_perf.rst
 
 Crypto performance test application
 M: Declan Doherty <declan.doherty@intel.com>
+M: Ciara Power <ciara.power@intel.com>
 T: git://dpdk.org/next/dpdk-next-crypto
 F: app/test-crypto-perf/
 F: doc/guides/tools/cryptoperf.rst
-- 
2.25.1



* Re: [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results Ciara Power
@ 2020-12-11 19:35   ` Stephen Hemminger
  2021-01-11 16:03   ` Doherty, Declan
  1 sibling, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2020-12-11 19:35 UTC (permalink / raw)
  To: Ciara Power; +Cc: dev, declan.doherty, akhil.goyal, Thomas Monjalon

On Fri, 11 Dec 2020 17:31:13 +0000
Ciara Power <ciara.power@intel.com> wrote:

> +	}
> +}
> \ No newline at end of file
> -- 

Fix your editor please

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 2/4] test/cryptodev: improve csv output for perf tests
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 2/4] test/cryptodev: improve csv output for perf tests Ciara Power
@ 2021-01-11 15:43   ` Doherty, Declan
  0 siblings, 0 replies; 27+ messages in thread
From: Doherty, Declan @ 2021-01-11 15:43 UTC (permalink / raw)
  To: Ciara Power, dev; +Cc: akhil.goyal



On 11/12/2020 5:31 PM, Ciara Power wrote:
> The csv outputs for performance tests were not easily consumed, due to
> unnecessary whitespaces and capitals. The delimiter is modified to now
> be "," instead of ";" which was present in some cases. Some unnecessary
> values were also removed from the output.
> 
> Signed-off-by: Ciara Power <ciara.power@intel.com>
> ---
>   app/test-crypto-perf/cperf_test_latency.c    | 13 +++++--------
>   app/test-crypto-perf/cperf_test_throughput.c | 12 ++++++------
>   2 files changed, 11 insertions(+), 14 deletions(-)
> 
> diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
> index c2590a4dcf..f3c09b8c1c 100644
> --- a/app/test-crypto-perf/cperf_test_latency.c
> +++ b/app/test-crypto-perf/cperf_test_latency.c
> @@ -309,18 +309,15 @@ cperf_latency_test_runner(void *arg)
>   
>   		if (ctx->options->csv) {
>   			if (rte_atomic16_test_and_set(&display_once))
> -				printf("\n# lcore, Buffer Size, Burst Size, Pakt Seq #, "
> -						"cycles, time (us)");
> +				printf("\n#buffer_size(b),burst_size,time(us)");
>   
>   			for (i = 0; i < ctx->options->total_ops; i++) {
>   
> -				printf("\n%u;%u;%u;%"PRIu64";%"PRIu64";%.3f",
> -					ctx->lcore_id, ctx->options->test_buffer_size,
> -					test_burst_size, i + 1,
> -					ctx->res[i].tsc_end - ctx->res[i].tsc_start,
> +				printf("\n%u,%u,%.3f",
> +					ctx->options->test_buffer_size,
> +					test_burst_size,
>   					tunit * (double) (ctx->res[i].tsc_end
> -							- ctx->res[i].tsc_start)
> -						/ tsc_hz);
> +					- ctx->res[i].tsc_start) / tsc_hz);
>   
>   			}
>   		} else {
> diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
> index f30f7d5c2c..a841a890b9 100644
> --- a/app/test-crypto-perf/cperf_test_throughput.c
> +++ b/app/test-crypto-perf/cperf_test_throughput.c
> @@ -294,13 +294,13 @@ cperf_throughput_test_runner(void *test_ctx)
>   					cycles_per_packet);
>   		} else {
>   			if (rte_atomic16_test_and_set(&display_once))
> -				printf("#lcore id,Buffer Size(B),"
> -					"Burst Size,Enqueued,Dequeued,Failed Enq,"
> -					"Failed Deq,Ops(Millions),Throughput(Gbps),"
> -					"Cycles/Buf\n\n");
> +				printf("#lcore_id,buffer_size(b),"
> +					"burst_size,enqueued,dequeued,failed_enq,"
> +					"failed_deq,ops(millions),throughput(gbps),"
> +					"cycles_per_buf\n\n");
>   
> -			printf("%u;%u;%u;%"PRIu64";%"PRIu64";%"PRIu64";%"PRIu64";"
> -					"%.3f;%.3f;%.3f\n",
> +			printf("%u,%u,%u,%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64","
> +					"%.3f,%.3f,%.3f\n",
>   					ctx->lcore_id,
>   					ctx->options->test_buffer_size,
>   					test_burst_size,
> 

I would suggest limiting the changes here to just fixing the delimiter
to commas instead of semicolons; that way the python script can be made
to support older versions of the output as well, which wouldn't be
possible if we change the column names.
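[To illustrate the point above: a consumer script can tolerate both the old semicolon-delimited output and the new comma-delimited output with a small amount of detection logic. This is a hypothetical sketch, not part of the patch; the `#`-prefixed header and digit-led data rows follow the CSV format shown in the quoted diff.]

```python
def parse_perf_csv(text):
    """Parse dpdk-test-crypto-perf CSV output, accepting either the
    old ';' delimiter or the new ',' delimiter."""
    columns = None
    rows = []
    for line in text.splitlines():
        if not line:
            continue
        # Guess the delimiter from whichever separator dominates the line.
        delim = ";" if line.count(";") > line.count(",") else ","
        if line.startswith("#"):
            # Header line, e.g. "#lcore_id,buffer_size(b),..."
            columns = [c.strip() for c in line[1:].split(delim)]
        elif line[0].isdigit():
            rows.append(line.split(delim))
    return columns, rows
```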

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results Ciara Power
  2020-12-11 19:35   ` Stephen Hemminger
@ 2021-01-11 16:03   ` Doherty, Declan
  1 sibling, 0 replies; 27+ messages in thread
From: Doherty, Declan @ 2021-01-11 16:03 UTC (permalink / raw)
  To: Ciara Power, dev; +Cc: akhil.goyal, Thomas Monjalon



On 11/12/2020 5:31 PM, Ciara Power wrote:
> The python script introduced in this patch runs the crypto performance
> test application for various test cases, and graphs the results.
> 
> Test cases are defined in the config JSON file, this is where parameters
> are specified for each test. Currently there are various test cases for
> devices crypto_qat, crypto_aesni_mb and crypto_gcm. Tests for the
> ptest types Throughput and Latency are supported for each.
> 
> The results of each test case are graphed and saved in PDFs (one PDF for
> each test suite, showing all test case graphs for that suite).
> The graphs output include various grouped barcharts for throughput
> tests, and histogram and boxplot graphs are used for latency tests.
> 
> Usage:
> The script uses the installed app by default (from ninja install).
> Alternatively we can pass path to app by
> 	"-f <rel_path>/<build_dir>/app/dpdk-test-crypto-perf"
> 
> All device test suites are run by default.
> Alternatively we can specify by adding arguments,
> 	"-t all" - to run all test suites
> 	"-t crypto_qat_latency" - to run QAT latency test suite only
> 	"-t crypto_aesni_mb_throughput crypto_aesni_gcm_latency"
> 		- to run both AESNI_MB throughput and AESNI_GCM latency
> 		test suites
> 
> Signed-off-by: Ciara Power <ciara.power@intel.com>
> ---
>   MAINTAINERS                             |   2 +
>   doc/guides/tools/cryptoperf.rst         |  93 +++++++
>   usertools/dpdk_graph_crypto_perf.py     | 249 +++++++++++++++++++
>   usertools/graph_crypto_perf_config.json | 309 ++++++++++++++++++++++++
>   4 files changed, 653 insertions(+)
>   create mode 100755 usertools/dpdk_graph_crypto_perf.py
>   create mode 100644 usertools/graph_crypto_perf_config.json
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index eafe9f8c46..5e9dc1a1a7 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1588,6 +1588,8 @@ M: Declan Doherty <declan.doherty@intel.com>
>   T: git://dpdk.org/next/dpdk-next-crypto
>   F: app/test-crypto-perf/
>   F: doc/guides/tools/cryptoperf.rst
> +F: usertools/dpdk_graph_crypto_perf.py
> +F: usertools/graph_crypto_perf_config.json
>   
>   Eventdev test application
>   M: Jerin Jacob <jerinj@marvell.com>
> diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
> index 79359fe894..63d97319a8 100644
> --- a/doc/guides/tools/cryptoperf.rst
> +++ b/doc/guides/tools/cryptoperf.rst
> @@ -453,3 +453,96 @@ Test vector file for cipher algorithm aes cbc 256 with authorization sha::
>      digest =
>      0x1C, 0xB2, 0x3D, 0xD1, 0xF9, 0xC7, 0x6C, 0x49, 0x2E, 0xDA, 0x94, 0x8B, 0xF1, 0xCF, 0x96, 0x43,
>      0x67, 0x50, 0x39, 0x76, 0xB5, 0xA1, 0xCE, 0xA1, 0xD7, 0x77, 0x10, 0x07, 0x43, 0x37, 0x05, 0xB4
> +
> +
> +Graph Crypto Perf Results
> +-------------------------
> +
> +The ``dpdk_graph_crypto_perf.py`` usertool is a simple script to automate
> +running crypto performance tests, and graphing the results.
> +The output graphs include various grouped barcharts for throughput
> +tests, and histogram and boxplot graphs for latency tests.
> +These are output to PDF files, with one PDF per test suite.
> +
> +

Add a section documenting the Python modules the script requires.
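[For context: the script's imports below pull in img2pdf, pandas, and plotly, none of which are in the standard library (plotly's `write_image` also needs a static-export engine such as kaleido). A docs section could suggest a quick availability check along these lines; this helper is hypothetical, not part of the patch.]

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Third-party modules dpdk_graph_crypto_perf.py depends on.
required = ["pandas", "plotly", "img2pdf"]
for mod in missing_modules(required):
    print("missing required module:", mod)
```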

> +Test Configuration
> +~~~~~~~~~~~~~~~~~~
> +
> +The test cases run by the script are outlined in the ``graph_crypto_perf_config.json`` file.
> +An example of this configuration is shown below for one test suite,
> +showing the default config for the test suite, and one test case.
> +The test case has additional app config that will be combined with
> +the default config when running the test case.
> +
> +.. code-block:: c
> +
> +   "crypto_aesni_mb_throughput": {
> +       "default": {
> +           "eal": {
> +               "l": "1,2",
> +               "log-level": "1",
> +               "vdev": "crypto_aesni_mb"
> +           },
> +           "app": {
> +               "csv-friendly": true,
> +               "silent": true,
> +               "buffer-sz": "64,128,256,512,768,1024,1408,2048",
> +               "burst-sz": "1,4,8,16,32",
> +               "ptest": "throughput",
> +               "devtype": "crypto_aesni_mb"
> +           }
> +        },
> +       "AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
> +               "cipher-algo": "aes-cbc",
> +               "cipher-key-sz": "16",
> +               "auth-algo": "sha1-hmac",
> +               "optype": "auth-then-cipher",
> +               "cipher-op": "decrypt"
> +        }
> +   }
> +
> +Currently, crypto_qat, crypto_aesni_mb, and crypto_aesni_gcm devices for
> +both throughput and latency ptests are supported.
> +
> +

The documentation should note that specific test cases only allow
modification of the application parameters, not the EAL parameters, and
that a default configuration is required to specify the EAL parameters.
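[The merge behaviour described above — EAL options taken from the suite's default section only, application options from the default section plus the per-test-case overrides — mirrors what `run_test_suite` in the patch does. A simplified, hypothetical sketch of that flattening:]

```python
def build_params(default, test_case):
    """Flatten a suite's default eal/app config plus one test case's
    extra app options into a dpdk-test-crypto-perf argument list."""
    params = []
    # EAL options come only from the suite's default section.
    for key, val in default["eal"].items():
        params.append(("-" + key + " " + val) if len(key) == 1
                      else ("--" + key + "=" + val))
    params.append("--")  # separator between EAL and app options
    # App options: defaults first, then the test case's additions.
    for key, val in default["app"].items():
        if isinstance(val, bool):
            if val:
                params.append("--" + key)
        else:
            params.append("--" + key + "=" + val)
    for key, val in test_case.items():
        params.append("--" + key + "=" + val)
    return params
```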

> +Usage
> +~~~~~
> +
> +.. code-block:: console
> +
> +   ./dpdk_graph_crypto_perf
> +
> +The following are the application command-line options:
> +
> +* ``-f file_path``
> +
> +  Provide path to ``dpdk-test-crypto-perf`` application.
> +  The script uses the installed app by default.
> +
> +  .. code-block:: console
> +
> +     ./dpdk_graph_crypto_perf -f <build_dir>/app/dpdk-test-crypto-perf
> +
> +
> +* ``-t test_suite_list``
> +
> +  Specify test suites to run. All test suites are run by default.
> +
> +  To run all test suites
> +
> +  .. code-block:: console
> +
> +     ./dpdk_graph_crypto_perf -t all
> +
> +  To run crypto_qat latency test suite only
> +
> +  .. code-block:: console
> +
> +     ./dpdk_graph_crypto_perf -t crypto_qat_latency
> +
> +  To run both crypto_aesni_mb throughput and crypto_aesni_gcm latency test suites
> +
> +  .. code-block:: console
> +
> +     ./dpdk_graph_crypto_perf -t crypto_aesni_mb_throughput \
> +         crypto_aesni_gcm_latency
> diff --git a/usertools/dpdk_graph_crypto_perf.py b/usertools/dpdk_graph_crypto_perf.py
> new file mode 100755
> index 0000000000..a1361fb625
> --- /dev/null
> +++ b/usertools/dpdk_graph_crypto_perf.py
> @@ -0,0 +1,249 @@
> +#! /usr/bin/env python3
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2020 Intel Corporation
> +
> +"""
> +Script to automate running crypto performance tests for a range of test
> +cases and devices as configured in the JSON file.
> +The results are processed and output into various graphs in PDF files.
> +Currently, throughput and latency tests are supported.
> +"""
> +
> +import glob
> +import json
> +import os
> +import shutil
> +import subprocess
> +from argparse import ArgumentParser
> +from datetime import datetime
> +import img2pdf
> +import pandas as pd
> +import plotly.express as px
> +
> +SCRIPT_PATH = os.path.dirname(__file__) + "/"
> +GRAPHS_PATH = SCRIPT_PATH + "graph_crypto_perf_graphs/"
> +PDFS_PATH = SCRIPT_PATH + "graph_crypto_perf_pdfs/"
> +
> +
> +class Grapher:
> +    """Grapher object containing all graphing functions. """
> +    def __init__(self, dev):
> +        self.graph_num = 0
> +        self.dev = dev
> +        self.test = ""
> +        self.ptest = ""
> +        self.data = pd.DataFrame()
> +        if not os.path.exists(GRAPHS_PATH):
> +            os.makedirs(GRAPHS_PATH)
> +
> +    def save_graph(self, fig):
> +        """
> +        Update figure layout to increase readability, output to JPG file.
> +        """
> +        fig.update_layout(font_size=30, title_x=0.5, title_font={"size": 30},
> +                          margin=dict(t=200, l=150, r=150, b=150))
> +        fig.write_image(GRAPHS_PATH + "%s_%d.jpg" % (self.dev,
> +                                                     self.graph_num))
> +
> +    def boxplot_graph(self, x_axis_label):
> +        """Plot a boxplot graph for the given parameters."""
> +        fig = px.box(self.data, x=x_axis_label,
> +                     title="Device: " + self.dev + "<br>" + self.test +
> +                     "<br>(Outliers Included)", height=1200, width=2400)
> +        self.save_graph(fig)
> +        self.graph_num += 1
> +
> +    def grouped_graph(self, y_axis_label, x_axis_label, color_label):
> +        """Plot a grouped barchart using the given parameters."""
> +        if (self.data[y_axis_label] == 0).all():
> +            return
> +        fig = px.bar(self.data, x=x_axis_label, color=color_label,
> +                     y=y_axis_label,
> +                     title="Device: " + self.dev + "<br>" + self.test + "<br>"
> +                     + y_axis_label + " for each " + x_axis_label +
> +                     "/" + color_label,
> +                     barmode="group",
> +                     height=1200,
> +                     width=2400)
> +        fig.update_xaxes(type='category')
> +        self.save_graph(fig)
> +        self.graph_num += 1
> +
> +    def histogram_graph(self, x_axis_label):
> +        """Plot a histogram graph using the given parameters."""
> +        quart1 = self.data[x_axis_label].quantile(0.25)
> +        quart3 = self.data[x_axis_label].quantile(0.75)
> +        inter_quart_range = quart3 - quart1
> +        dev_data_out = self.data[~((self.data[x_axis_label] <
> +                                    (quart1 - 1.5 * inter_quart_range)) |
> +                                   (self.data[x_axis_label] >
> +                                    (quart3 + 1.5 * inter_quart_range)))]
> +        fig = px.histogram(dev_data_out, x=x_axis_label,
> +                           title="Device: " + self.dev + "<br>" + self.test +
> +                           "<br>(Outliers removed using Interquartile Range)",
> +                           height=1200,
> +                           width=2400)
> +        max_val = dev_data_out[x_axis_label].max()
> +        min_val = dev_data_out[x_axis_label].min()
> +        fig.update_traces(xbins=dict(
> +            start=min_val,
> +            end=max_val,
> +            size=(max_val - min_val) / 200
> +        ))
> +        self.save_graph(fig)
> +        self.graph_num += 1
> +
> +
> +def cleanup_throughput_datatypes(data):
> +    """Cleanup data types of throughput test results dataframe. """
> +    data['burst_size'] = data['burst_size'].astype('int')
> +    data['buffer_size(b)'] = data['buffer_size(b)'].astype('int')
> +    data['burst_size'] = data['burst_size'].astype('category')
> +    data['buffer_size(b)'] = data['buffer_size(b)'].astype('category')
> +    data['failed_enq'] = data['failed_enq'].astype('int')
> +    data['throughput(gbps)'] = data['throughput(gbps)'].astype('float')
> +    data['ops(millions)'] = data['ops(millions)'].astype('float')
> +    data['cycles_per_buf'] = data['cycles_per_buf'].astype('float')
> +    return data
> +
> +
> +def process_test_results(grapher, data):
> +    """
> +    Process results from the test case,
> +    calling graph functions to output graph images.
> +    """
> +    print("\tProcessing Test Case Results: " + grapher.test)
> +    if grapher.ptest == "throughput":
> +        grapher.data = cleanup_throughput_datatypes(data)
> +        for y_label in ["throughput(gbps)", "ops(millions)",
> +                        "cycles_per_buf", "failed_enq"]:
> +            grapher.grouped_graph(y_label, "buffer_size(b)",
> +                                  "burst_size")
> +    elif grapher.ptest == "latency":
> +        data['time(us)'] = data['time(us)'].astype('float')
> +        grapher.data = data
> +        grapher.histogram_graph("time(us)")
> +        grapher.boxplot_graph("time(us)")
> +    else:
> +        print("Invalid ptest")
> +        return
> +
> +
> +def create_results_pdf(dev):
> +    """Output results graphs to one PDF."""
> +    if not os.path.exists(PDFS_PATH):
> +        os.makedirs(PDFS_PATH)
> +    dev_graphs = sorted(glob.glob(GRAPHS_PATH + "%s_*.jpg" % dev), key=(
> +        lambda x: int((x.rsplit('_', 1)[1]).split('.')[0])))
> +    if dev_graphs:
> +        with open(PDFS_PATH + "/%s_results.pdf" % dev, "wb") as pdf_file:
> +            pdf_file.write(img2pdf.convert(dev_graphs))
> +
> +
> +def run_test(test_cmd, test, grapher, timestamp, params):
> +    """Run performance test app for the given test case parameters."""
> +    print("\n\tRunning Test Case: " + test)
> +    try:
> +        process_out = subprocess.check_output([test_cmd] + params,
> +                                              universal_newlines=True,
> +                                              stderr=subprocess.STDOUT)
> +        rows = []
> +        for line in process_out.split('\n'):
> +            if not line:
> +                continue
> +            if line.startswith('#'):
> +                columns = line[1:].split(',')
> +            elif line[0].isdigit():
> +                rows.append(line.split(','))
> +            else:
> +                continue
> +        data = pd.DataFrame(rows, columns=columns)
> +        data['date'] = timestamp
> +        grapher.test = test
> +        process_test_results(grapher, data)
> +    except subprocess.CalledProcessError as err:
> +        print("\tCannot run performance test application for: " + str(err))
> +        return
> +
> +
> +def run_test_suite(test_cmd, dut, test_cases, timestamp):
> +    """Parse test cases for the test suite and run each test."""
> +    print("\nRunning Test Suite: " + dut)
> +    default_params = []
> +    grapher = Grapher(dut)
> +    for (key, val) in test_cases['default']['eal'].items():
> +        if len(key) == 1:
> +            default_params.append("-" + key + " " + val)
> +        else:
> +            default_params.append("--" + key + "=" + val)
> +
> +    default_params.append("--")
> +    for (key, val) in test_cases['default']['app'].items():
> +        if isinstance(val, bool):
> +            default_params.append("--" + key if val is True else "")
> +        else:
> +            default_params.append("--" + key + "=" + val)
> +
> +    if 'ptest' not in test_cases['default']['app']:
> +        print("Test Suite must contain default ptest value, skipping")
> +        return
> +    grapher.ptest = test_cases['default']['app']['ptest']
> +
> +    for (test, params) in {k: v for (k, v) in test_cases.items() if
> +                           k != "default"}.items():
> +        extra_params = []
> +        for (key, val) in params.items():
> +            extra_params.append("--" + key + "=" + val)
> +        run_test(test_cmd, test, grapher, timestamp,
> +                 default_params + extra_params)
> +
> +    create_results_pdf(dut)
> +
> +
> +def parse_args():
> +    """Parse command-line arguments passed to script."""
> +    parser = ArgumentParser()
> +    parser.add_argument('-f', '--file-path',
> +                        default=shutil.which('dpdk-test-crypto-perf'),
> +                        help="Path for test perf app")
> +    parser.add_argument('-t', '--test-suites', nargs='+', default=["all"],
> +                        help="List of device test suites to run")
> +    args = parser.parse_args()
> +    return args.file_path, args.test_suites
> +
> +

I think you should allow the user to specify paths for the configuration 
files and the output location.

You might also consider adding support for running a specific test case
within a suite.
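[A sketch of how `parse_args` could grow the extra options suggested above. The `-c`/`-o` flag names and defaults are hypothetical, not from the patch; the `-f`/`-t` options are reproduced from the quoted code.]

```python
import shutil
from argparse import ArgumentParser

def parse_args(argv=None):
    """Parse command-line arguments; -c and -o are proposed additions."""
    parser = ArgumentParser(description="Run and graph crypto perf tests")
    parser.add_argument('-f', '--file-path',
                        default=shutil.which('dpdk-test-crypto-perf'),
                        help="Path for test perf app")
    parser.add_argument('-t', '--test-suites', nargs='+', default=["all"],
                        help="List of device test suites to run")
    parser.add_argument('-c', '--config-file',
                        default='graph_crypto_perf_config.json',
                        help="Path to JSON configuration file")
    parser.add_argument('-o', '--output-path', default='.',
                        help="Directory for generated graphs and PDFs")
    return parser.parse_args(argv)
```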

> +def main():
> +    """
> +    Load JSON config and call relevant functions to run chosen test suites.
> +    """
> +    test_cmd, test_suites = parse_args()
> +    if not os.path.isfile(test_cmd):
> +        print("Invalid filepath!")
> +        return
> +    try:
> +        with open(SCRIPT_PATH + 'graph_crypto_perf_config.json') as conf:
> +            test_suite_options = json.load(conf)
> +    except json.decoder.JSONDecodeError as err:
> +        print("Error loading JSON config: " + err.msg)
> +        return
> +    timestamp = pd.Timestamp(datetime.now())
> +
> +    if test_suites != ["all"]:
> +        dev_list = []
> +        for (dut, test_cases) in {k: v for (k, v) in test_suite_options.items()
> +                                  if k in test_suites}.items():
> +            dev_list.append(dut)
> +            run_test_suite(test_cmd, dut, test_cases, timestamp)
> +        if not dev_list:
> +            print("No valid device test suites chosen!")
> +            return
> +    else:
> +        for (dut, test_cases) in test_suite_options.items():
> +            run_test_suite(test_cmd, dut, test_cases, timestamp)
> +
> +    if os.path.exists(GRAPHS_PATH):
> +        shutil.rmtree(GRAPHS_PATH)
> +
> +
> +if __name__ == "__main__":
> +    main()
> diff --git a/usertools/graph_crypto_perf_config.json b/usertools/graph_crypto_perf_config.json
> new file mode 100644
> index 0000000000..004ec3e84e
> --- /dev/null
> +++ b/usertools/graph_crypto_perf_config.json
> @@ -0,0 +1,309 @@
> +{
> +	"crypto_aesni_mb_throughput": {
> +		"default": {
> +			"eal": {
> +				"l": "1,2",
> +				"log-level": "1",
> +				"vdev": "crypto_aesni_mb"
> +			},
> +			"app": {
> +				"csv-friendly": true,
> +				"silent": true,
> +				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
> +				"burst-sz": "1,4,8,16,32",
> +				"ptest": "throughput",
> +				"devtype": "crypto_aesni_mb"
> +			}
> +		},
> +		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "16",
> +			"auth-algo": "sha1-hmac",
> +			"optype": "auth-then-cipher",
> +			"cipher-op": "decrypt"
> +		},
> +		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "16",
> +			"auth-algo": "sha1-hmac",
> +			"auth-op": "generate",
> +			"auth-key-sz": "64",
> +			"digest-sz": "20",
> +			"optype": "cipher-then-auth",
> +			"cipher-op": "encrypt"
> +		},
> +		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "32",
> +			"auth-algo": "sha2-256-hmac",
> +			"optype": "auth-then-cipher",
> +			"cipher-op": "decrypt"
> +		},
> +		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "32",
> +			"auth-algo": "sha2-256-hmac",
> +			"optype": "cipher-then-auth"
> +		},
> +		"AES-GCM-128 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-iv-sz": "12",
> +			"aead-op": "encrypt",
> +			"aead-aad-sz": "16",
> +			"digest-sz": "16",
> +			"optype": "aead",
> +			"total-ops": "10000000"
> +		},
> +		"AES-GCM-128 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-op": "decrypt"
> +		},
> +		"AES-GCM-256 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "encrypt"
> +		},
> +		"AES-GCM-256 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "decrypt"
> +		},
> +		"AES-GMAC 128 auth-only generate": {
> +			"auth-algo": "aes-gmac",
> +			"auth-key-sz": "16",
> +			"auth-iv-sz": "12",
> +			"auth-op": "generate",
> +			"digest-sz": "16",
> +			"optype": "auth-only",
> +			"total-ops": "10000000"
> +		}
> +	},
> +	"crypto_aesni_mb_latency": {
> +		"default": {
> +			"eal": {
> +				"l": "1,2",
> +				"log-level": "1",
> +				"vdev": "crypto_aesni_mb"
> +			},
> +			"app": {
> +				"csv-friendly": true,
> +				"silent": true,
> +				"buffer-sz": "1024",
> +				"burst-sz": "16",
> +				"ptest": "latency",
> +				"devtype": "crypto_aesni_mb"
> +			}
> +		},
> +		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "16",
> +			"auth-algo": "sha1-hmac",
> +			"optype": "auth-then-cipher",
> +			"cipher-op": "decrypt"
> +		},
> +		"AES-GCM-256 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "encrypt"
> +		}
> +	},
> +	"crypto_aesni_gcm_throughput": {
> +		"default": {
> +			"eal": {
> +				"l": "1,2",
> +				"log-level": "1",
> +				"vdev": "crypto_aesni_gcm"
> +			},
> +			"app": {
> +				"csv-friendly": true,
> +				"silent": true,
> +				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
> +				"burst-sz": "1,4,8,16,32",
> +				"ptest": "throughput",
> +				"devtype": "crypto_aesni_gcm"
> +			}
> +		},
> +		"AES-GCM-128 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-iv-sz": "12",
> +			"aead-op": "encrypt",
> +			"aead-aad-sz": "16",
> +			"digest-sz": "16",
> +			"optype": "aead",
> +			"total-ops": "10000000"
> +		},
> +		"AES-GCM-128 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-op": "decrypt",
> +			"aead-aad-sz": "16",
> +			"aead-iv-sz": "12",
> +			"digest-sz": "16",
> +			"optype": "aead",
> +			"total-ops": "10000000"
> +		},
> +		"AES-GCM-256 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "encrypt",
> +			"aead-aad-sz": "32",
> +			"aead-iv-sz": "12",
> +			"digest-sz": "16",
> +			"optype": "aead",
> +			"total-ops": "10000000"
> +		},
> +		"AES-GCM-256 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "decrypt",
> +			"aead-aad-sz": "32",
> +			"aead-iv-sz": "12",
> +			"digest-sz": "16",
> +			"optype": "aead",
> +			"total-ops": "10000000"
> +		},
> +		"AES-GMAC 128 auth-only generate": {
> +			"auth-algo": "aes-gmac",
> +			"auth-key-sz": "16",
> +			"auth-iv-sz": "12",
> +			"auth-op": "generate",
> +			"digest-sz": "16",
> +			"optype": "auth-only",
> +			"total-ops": "10000000"
> +		}
> +	},
> +	"crypto_aesni_gcm_latency": {
> +		"default": {
> +			"eal": {
> +				"l": "1,2",
> +				"log-level": "1",
> +				"vdev": "crypto_aesni_gcm"
> +			},
> +			"app": {
> +				"csv-friendly": true,
> +				"silent": true,
> +				"buffer-sz": "1024",
> +				"burst-sz": "16",
> +				"ptest": "latency",
> +				"devtype": "crypto_aesni_gcm"
> +			}
> +		},
> +		"AES-GCM-128 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-op": "decrypt",
> +			"aead-aad-sz": "16",
> +			"aead-iv-sz": "12",
> +			"digest-sz": "16",
> +			"optype": "aead"
> +		},
> +		"AES-GCM-256 aead-op encrypt latency": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "encrypt",
> +			"aead-aad-sz": "32",
> +			"aead-iv-sz": "12",
> +			"digest-sz": "16",
> +			"optype": "aead"
> +		}
> +	},
> +	"crypto_qat_throughput": {
> +		"default": {
> +			"eal": {
> +				"l": "1,2",
> +				"log-level": "1"
> +			},
> +			"app": {
> +				"csv-friendly": true,
> +				"silent": true,
> +				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
> +				"burst-sz": "1,4,8,16,32",
> +				"devtype": "crypto_qat",
> +				"ptest": "throughput"
> +			}
> +		},
> +		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "16",
> +			"auth-algo": "sha1-hmac",
> +			"optype": "auth-then-cipher",
> +			"cipher-op": "decrypt"
> +		},
> +		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "16",
> +			"auth-algo": "sha1-hmac",
> +			"optype": "cipher-then-auth",
> +			"cipher-op": "encrypt"
> +		},
> +		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "32",
> +			"auth-algo": "sha2-256-hmac",
> +			"optype": "auth-then-cipher",
> +			"cipher-op": "decrypt"
> +		},
> +		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "32",
> +			"auth-algo": "sha2-256-hmac",
> +			"optype": "cipher-then-auth",
> +			"cipher-op": "encrypt"
> +		},
> +		"AES-GCM-128 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-iv-sz": "12",
> +			"aead-op": "encrypt",
> +			"aead-aad-sz": "16",
> +			"digest-sz": "16",
> +			"optype": "aead"
> +		},
> +		"AES-GCM-128 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-op": "decrypt"
> +		},
> +		"AES-GCM-256 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "encrypt"
> +		},
> +		"AES-GCM-256 aead-op decrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "32",
> +			"aead-op": "decrypt"
> +		}
> +	},
> +	"crypto_qat_latency": {
> +		"default": {
> +			"eal": {
> +				"l": "1,2",
> +				"log-level": "1"
> +			},
> +			"app": {
> +				"csv-friendly": true,
> +				"silent": true,
> +				"ptest": "latency",
> +				"buffer-sz": "1024",
> +				"burst-sz": "16",
> +				"devtype": "crypto_qat"
> +			}
> +		},
> +		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
> +			"cipher-algo": "aes-cbc",
> +			"cipher-key-sz": "32",
> +			"auth-algo": "sha2-256-hmac",
> +			"optype": "cipher-then-auth",
> +			"cipher-op": "encrypt"
> +		},
> +		"AES-GCM-128 aead-op encrypt": {
> +			"aead-algo": "aes-gcm",
> +			"aead-key-sz": "16",
> +			"aead-op": "encrypt"
> +		}
> +	}
> +}
> \ No newline at end of file
> 

I would suggest splitting the JSON file into one configuration file per
device type.
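[If the config were split per device as suggested, the script's `main` could load and merge a set of config files instead of a single hard-coded one. A hypothetical sketch of that loading step:]

```python
import json

def load_suites(config_paths):
    """Merge several per-device JSON config files into one
    suite-name -> test-cases mapping."""
    suites = {}
    for path in config_paths:
        with open(path) as conf:
            # Later files win on duplicate suite names.
            suites.update(json.load(conf))
    return suites
```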

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script
  2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
                   ` (3 preceding siblings ...)
  2020-12-11 17:31 ` [dpdk-dev] [PATCH 4/4] maintainers: update crypto perf app maintainers Ciara Power
@ 2021-01-14 10:41 ` Ciara Power
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output Ciara Power
                     ` (4 more replies)
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
  5 siblings, 5 replies; 27+ messages in thread
From: Ciara Power @ 2021-01-14 10:41 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski, Ciara Power

This patchset introduces a python script to run various crypto performance
test cases, and graph the results in a consumable manner. The test suites
are configured via JSON file. Some config files are provided,
or the user may create one. Currently throughput and latency ptests for
devices crypto_qat, crypto_aesni_mb and crypto_aesni_gcm are supported.

The final collection of graphs are output in PDF format, with multiple PDFs
per test suite, one for each graph type.

Some fixes are included for the throughput performance test and latency
performance test csv outputs also.

v2:
  - Reduced changes to only fix csv format for all perf test types.
  - Added functionality for additional args such as config file,
    output directory and verbose.
  - Improved help text for script.
  - Improved script console output.
  - Added support for latency test cases with burst or buffer size lists.
  - Split config file into smaller config files, one for each device.
  - Split output PDFs into smaller files, based on test suite graph types.
  - Modified output directory naming and structure.
  - Made some general improvements to script.
  - Updated and improved documentation.

Ciara Power (4):
  test/cryptodev: fix latency test csv output
  test/cryptodev: fix csv output format
  usertools: add script to graph crypto perf results
  maintainers: update crypto perf app maintainers

 MAINTAINERS                                   |   3 +
 app/test-crypto-perf/cperf_test_latency.c     |   4 +-
 .../cperf_test_pmd_cyclecount.c               |   2 +-
 app/test-crypto-perf/cperf_test_throughput.c  |   4 +-
 app/test-crypto-perf/cperf_test_verify.c      |   2 +-
 doc/guides/tools/cryptoperf.rst               | 142 ++++++++
 usertools/configs/crypto-perf-aesni-gcm.json  |  99 ++++++
 usertools/configs/crypto-perf-aesni-mb.json   | 108 ++++++
 usertools/configs/crypto-perf-qat.json        |  94 ++++++
 usertools/dpdk-graph-crypto-perf.py           | 309 ++++++++++++++++++
 10 files changed, 761 insertions(+), 6 deletions(-)
 create mode 100644 usertools/configs/crypto-perf-aesni-gcm.json
 create mode 100644 usertools/configs/crypto-perf-aesni-mb.json
 create mode 100644 usertools/configs/crypto-perf-qat.json
 create mode 100755 usertools/dpdk-graph-crypto-perf.py

-- 
2.25.1


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
@ 2021-01-14 10:41   ` Ciara Power
  2021-01-15  9:42     ` Dybkowski, AdamX
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 2/4] test/cryptodev: fix csv output format Ciara Power
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 27+ messages in thread
From: Ciara Power @ 2021-01-14 10:41 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski,
	Ciara Power, pablo.de.lara.guarch, stable

The csv output for the latency performance test had an extra column
header, "Packet Size", which duplicated "Buffer Size" and had no
corresponding value in the output. This is now removed.
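The effect of the duplicate header can be illustrated by counting columns against a data row (the row values below are hypothetical; the header strings are those printed by the test app):

```python
# Old latency csv header had 7 column names, but each data row only
# carried 6 values, because "Packet Size" duplicated "Buffer Size"
# with no corresponding field in the output.
old_header = ("# lcore, Buffer Size, Burst Size, Pakt Seq #, "
              "Packet Size, cycles, time (us)")
row = "1,1024,16,1,2500,1.250"  # hypothetical data row

columns = [c.strip() for c in old_header.lstrip('# ').split(',')]
values = row.split(',')
print(len(columns), len(values))  # header/value count mismatch

# With the duplicate header removed, the counts line up.
fixed_header = ("# lcore, Buffer Size, Burst Size, Pakt Seq #, "
                "cycles, time (us)")
fixed_columns = [c.strip() for c in fixed_header.lstrip('# ').split(',')]
assert len(fixed_columns) == len(values)
```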

Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 app/test-crypto-perf/cperf_test_latency.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 0e4d0e1538..c2590a4dcf 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -310,7 +310,7 @@ cperf_latency_test_runner(void *arg)
 		if (ctx->options->csv) {
 			if (rte_atomic16_test_and_set(&display_once))
 				printf("\n# lcore, Buffer Size, Burst Size, Pakt Seq #, "
-						"Packet Size, cycles, time (us)");
+						"cycles, time (us)");
 
 			for (i = 0; i < ctx->options->total_ops; i++) {
 
-- 
2.25.1



* [dpdk-dev] [PATCH v2 2/4] test/cryptodev: fix csv output format
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output Ciara Power
@ 2021-01-14 10:41   ` Ciara Power
  2021-01-15  9:42     ` Dybkowski, AdamX
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 3/4] usertools: add script to graph crypto perf results Ciara Power
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 27+ messages in thread
From: Ciara Power @ 2021-01-14 10:41 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski,
	Ciara Power, anatoly.burakov, pablo.de.lara.guarch, stable

The csv output for each ptest type used ";" instead of ",".
This has now been fixed to match the comma-separated format used in the
csv headers.
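A minimal illustration of why the mismatch mattered: a csv reader keyed to the comma-separated header cannot split the old semicolon-separated rows (the row values here are hypothetical):

```python
import csv
import io

# Pre-fix: values were ';'-separated although the header used ','.
old_row = "1;1024;16;10000000\n"
# Post-fix: the row matches the comma-separated header.
new_row = "1,1024,16,10000000\n"

# Parsing the old output with the default comma delimiter collapses
# the whole row into a single field.
bad = next(csv.reader(io.StringIO(old_row)))
good = next(csv.reader(io.StringIO(new_row)))
print(len(bad), len(good))
```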

Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
Fixes: 96dfeb609be1 ("app/crypto-perf: add new PMD benchmarking mode")
Fixes: da40ebd6d383 ("app/crypto-perf: display results in test runner")
Cc: anatoly.burakov@intel.com
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>

---
v2:
  - Reduced changes to only fix csv format.
---
 app/test-crypto-perf/cperf_test_latency.c        | 2 +-
 app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 2 +-
 app/test-crypto-perf/cperf_test_throughput.c     | 4 ++--
 app/test-crypto-perf/cperf_test_verify.c         | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index c2590a4dcf..159fe8492b 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -314,7 +314,7 @@ cperf_latency_test_runner(void *arg)
 
 			for (i = 0; i < ctx->options->total_ops; i++) {
 
-				printf("\n%u;%u;%u;%"PRIu64";%"PRIu64";%.3f",
+				printf("\n%u,%u,%u,%"PRIu64",%"PRIu64",%.3f",
 					ctx->lcore_id, ctx->options->test_buffer_size,
 					test_burst_size, i + 1,
 					ctx->res[i].tsc_end - ctx->res[i].tsc_start,
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 4e67d3aebd..844659aeca 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -16,7 +16,7 @@
 #define PRETTY_HDR_FMT "%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s\n\n"
 #define PRETTY_LINE_FMT "%12u%12u%12u%12u%12u%12u%12u%12.0f%12.0f%12.0f\n"
 #define CSV_HDR_FMT "%s,%s,%s,%s,%s,%s,%s,%s,%s,%s\n"
-#define CSV_LINE_FMT "%10u;%10u;%u;%u;%u;%u;%u;%.3f;%.3f;%.3f\n"
+#define CSV_LINE_FMT "%10u,%10u,%u,%u,%u,%u,%u,%.3f,%.3f,%.3f\n"
 
 struct cperf_pmd_cyclecount_ctx {
 	uint8_t dev_id;
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index f30f7d5c2c..f6eb8cf259 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -299,8 +299,8 @@ cperf_throughput_test_runner(void *test_ctx)
 					"Failed Deq,Ops(Millions),Throughput(Gbps),"
 					"Cycles/Buf\n\n");
 
-			printf("%u;%u;%u;%"PRIu64";%"PRIu64";%"PRIu64";%"PRIu64";"
-					"%.3f;%.3f;%.3f\n",
+			printf("%u,%u,%u,%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64","
+					"%.3f,%.3f,%.3f\n",
 					ctx->lcore_id,
 					ctx->options->test_buffer_size,
 					test_burst_size,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 833bc9a552..2939aeaa93 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -406,7 +406,7 @@ cperf_verify_test_runner(void *test_ctx)
 				"Burst Size,Enqueued,Dequeued,Failed Enq,"
 				"Failed Deq,Failed Ops\n");
 
-		printf("%10u;%10u;%u;%"PRIu64";%"PRIu64";%"PRIu64";%"PRIu64";"
+		printf("%10u,%10u,%u,%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64","
 				"%"PRIu64"\n",
 				ctx->lcore_id,
 				ctx->options->max_buffer_size,
-- 
2.25.1



* [dpdk-dev] [PATCH v2 3/4] usertools: add script to graph crypto perf results
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output Ciara Power
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 2/4] test/cryptodev: fix csv output format Ciara Power
@ 2021-01-14 10:41   ` Ciara Power
  2021-01-15  9:43     ` Dybkowski, AdamX
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 4/4] maintainers: update crypto perf app maintainers Ciara Power
  2021-01-15  8:31   ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Doherty, Declan
  4 siblings, 1 reply; 27+ messages in thread
From: Ciara Power @ 2021-01-14 10:41 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski,
	Ciara Power, Thomas Monjalon

The python script introduced in this patch runs the crypto performance
test application for various test cases, and graphs the results.

Test cases are defined in JSON config files, where the parameters for
each test are specified. Currently there are various test cases for the
crypto_qat, crypto_aesni_mb and crypto_aesni_gcm devices, with the
throughput and latency ptest types supported for each.

The results of each test case are graphed and saved in PDFs (one PDF for
each test suite graph type, containing all test cases).
The output graphs include grouped barcharts for throughput tests, and
histogram and boxplot graphs for latency tests.
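Before binning the latency histograms, the script removes outliers using the interquartile range. A minimal sketch of that filtering in plain Python (the latency samples are hypothetical):

```python
from statistics import quantiles

# Hypothetical latency samples in microseconds; the real data comes
# from the dpdk-test-crypto-perf csv output.
samples = [1.1, 1.2, 1.3, 1.2, 1.25, 1.15, 9.9]  # 9.9 is an outlier

q1, _, q3 = quantiles(samples, n=4)  # first and third quartiles
iqr = q3 - q1
# Keep only values within 1.5 * IQR of the quartiles.
kept = [s for s in samples
        if q1 - 1.5 * iqr <= s <= q3 + 1.5 * iqr]
print(kept)  # the 9.9 outlier is filtered out
```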

Documentation is added to outline the configuration and usage for the
script.

Usage:
A JSON config file must be specified when running the script,
	"./dpdk-graph-crypto-perf <config_file>"

The script uses the installed app by default (from ninja install).
Alternatively, a path to the app can be passed with
	"-f <rel_path>/<build_dir>/app/dpdk-test-crypto-perf"

All device test suites are run by default.
Alternatively, specific suites can be selected by adding arguments,
	"-t latency" - to run latency test suite only
	"-t throughput latency"
		- to run both throughput and latency test suites

A directory can be specified for all output files,
or the script directory is used by default.
	"-o <output_dir>"

To see the output from the dpdk-test-crypto-perf app,
use the verbose option "-v".
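For reference, the script builds the app command line from the JSON config, handling short/long option names and boolean flags. A rough sketch of that conversion (config values taken from the example aesni_mb config):

```python
# Build EAL and app argument lists from a "default" test case dict.
# Single-character keys become short options ("-l 1,2"), others become
# long options ("--vdev=..."); boolean values are emitted as bare flags.
default = {
    "eal": {"l": "1,2", "vdev": "crypto_aesni_mb"},
    "app": {"csv-friendly": True, "ptest": "throughput",
            "devtype": "crypto_aesni_mb"},
}

params = []
for key, val in default["eal"].items():
    params.append("-" + key + " " + val if len(key) == 1
                  else "--" + key + "=" + val)
params.append("--")  # separator between EAL and app arguments
for key, val in default["app"].items():
    if isinstance(val, bool):
        if val:
            params.append("--" + key)  # boolean flags take no value
    else:
        params.append("--" + key + "=" + val)

print(params)
```

Skipping false boolean flags entirely (rather than appending an empty string) avoids passing empty arguments to the app.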

Signed-off-by: Ciara Power <ciara.power@intel.com>

---
v2:
  - Added functionality for additional args such as config file,
    output directory and verbose.
  - Improved help text for script.
  - Improved script console output.
  - Added support for latency test cases with burst or buffer size lists.
  - Split config file into smaller config files, one for each device.
  - Split output PDFs into smaller files, based on test suite graph types.
  - Modified output directory naming and structure.
  - Made some general improvements to script.
  - Updated and improved documentation.
  - Updated copyright year.
---
 MAINTAINERS                                  |   2 +
 doc/guides/tools/cryptoperf.rst              | 142 +++++++++
 usertools/configs/crypto-perf-aesni-gcm.json |  99 ++++++
 usertools/configs/crypto-perf-aesni-mb.json  | 108 +++++++
 usertools/configs/crypto-perf-qat.json       |  94 ++++++
 usertools/dpdk-graph-crypto-perf.py          | 309 +++++++++++++++++++
 6 files changed, 754 insertions(+)
 create mode 100644 usertools/configs/crypto-perf-aesni-gcm.json
 create mode 100644 usertools/configs/crypto-perf-aesni-mb.json
 create mode 100644 usertools/configs/crypto-perf-qat.json
 create mode 100755 usertools/dpdk-graph-crypto-perf.py

diff --git a/MAINTAINERS b/MAINTAINERS
index 6787b15dcc..52265e7b02 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1589,6 +1589,8 @@ M: Declan Doherty <declan.doherty@intel.com>
 T: git://dpdk.org/next/dpdk-next-crypto
 F: app/test-crypto-perf/
 F: doc/guides/tools/cryptoperf.rst
+F: usertools/dpdk-graph-crypto-perf.py
+F: usertools/configs
 
 Eventdev test application
 M: Jerin Jacob <jerinj@marvell.com>
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 79359fe894..ebdc8367f1 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -453,3 +453,145 @@ Test vector file for cipher algorithm aes cbc 256 with authorization sha::
    digest =
    0x1C, 0xB2, 0x3D, 0xD1, 0xF9, 0xC7, 0x6C, 0x49, 0x2E, 0xDA, 0x94, 0x8B, 0xF1, 0xCF, 0x96, 0x43,
    0x67, 0x50, 0x39, 0x76, 0xB5, 0xA1, 0xCE, 0xA1, 0xD7, 0x77, 0x10, 0x07, 0x43, 0x37, 0x05, 0xB4
+
+
+Graph Crypto Perf Results
+-------------------------
+
+The ``dpdk-graph-crypto-perf.py`` usertool is a simple script to automate
+running crypto performance tests and graphing the results.
+The output graphs include various grouped barcharts for throughput
+tests, and histogram and boxplot graphs for latency tests.
+These are output to PDF files, with one PDF per test suite graph type.
+
+
+Dependencies
+~~~~~~~~~~~~
+
+The following python modules must be installed to run the script:
+
+* img2pdf
+
+* plotly
+
+* pandas
+
+* glob
+
+
+Test Configuration
+~~~~~~~~~~~~~~~~~~
+
+The test cases run by the script are defined by a JSON config file.
+Some config files can be found in ``usertools/configs/``,
+or the user may create a new one following the same format as the config files provided.
+
+An example of this format is shown below for one test suite in the ``crypto-perf-aesni-mb.json`` file.
+This shows the required default config for the test suite, and one test case.
+The test case has additional app config that will be combined with
+the default config when running the test case.
+
+.. code-block:: json
+
+   "throughput": {
+       "default": {
+           "eal": {
+               "l": "1,2",
+               "vdev": "crypto_aesni_mb"
+           },
+           "app": {
+               "csv-friendly": true,
+               "buffer-sz": "64,128,256,512,768,1024,1408,2048",
+               "burst-sz": "1,4,8,16,32",
+               "ptest": "throughput",
+               "devtype": "crypto_aesni_mb"
+           }
+        },
+       "AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+               "cipher-algo": "aes-cbc",
+               "cipher-key-sz": "16",
+               "auth-algo": "sha1-hmac",
+               "optype": "auth-then-cipher",
+               "cipher-op": "decrypt"
+        }
+   }
+
+.. note::
+   The specific test cases only allow modification of app parameters,
+   and not EAL parameters.
+   The default case is required for each test suite in the config file,
+   to specify EAL parameters.
+
+Currently, both throughput and latency ptests are supported for the
+crypto_qat, crypto_aesni_mb, and crypto_aesni_gcm devices.
+
+
+Usage
+~~~~~
+
+.. code-block:: console
+
+   ./dpdk-graph-crypto-perf <config_file>
+
+The ``config_file`` positional argument is required to run the script.
+This points to a valid JSON config file containing test suites.
+
+.. code-block:: console
+
+   ./dpdk-graph-crypto-perf configs/crypto-perf-aesni-mb.json
+
+The application has the following optional command-line options:
+
+* ``-h, --help``
+
+    Display usage information and quit
+
+
+* ``-f <file_path>, --file-path <file_path>``
+
+  Provide path to ``dpdk-test-crypto-perf`` application.
+  The script uses the installed app by default.
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf -f <build_dir>/app/dpdk-test-crypto-perf
+
+
+* ``-t <test_suite_list>, --test-suites <test_suite_list>``
+
+  Specify test suites to run. All test suites are run by default.
+
+  To run crypto-perf-qat latency test suite only:
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf configs/crypto-perf-qat -t latency
+
+  To run both crypto-perf-aesni-mb throughput and latency test suites:
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf configs/crypto-perf-aesni-mb -t throughput latency
+
+
+* ``-o <output_path>, --output-path <output_path>``
+
+  Specify directory to use for output files.
+  The default is to use the script's directory.
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf <config_file> -o <output_dir>
+
+
+* ``-v, --verbose``
+
+  Enable verbose output. This displays ``dpdk-test-crypto-perf`` app output in real-time.
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf <config_file> -v
+
+  .. warning::
+     Latency performance tests have a large amount of output.
+     It is not recommended to use the verbose option for latency tests.
diff --git a/usertools/configs/crypto-perf-aesni-gcm.json b/usertools/configs/crypto-perf-aesni-gcm.json
new file mode 100644
index 0000000000..608a46e34f
--- /dev/null
+++ b/usertools/configs/crypto-perf-aesni-gcm.json
@@ -0,0 +1,99 @@
+{
+	"throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_gcm"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"ptest": "throughput",
+				"devtype": "crypto_aesni_gcm"
+			}
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "16",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GMAC 128 auth-only generate": {
+			"auth-algo": "aes-gmac",
+			"auth-key-sz": "16",
+			"auth-iv-sz": "12",
+			"auth-op": "generate",
+			"digest-sz": "16",
+			"optype": "auth-only",
+			"total-ops": "10000000"
+		}
+	},
+	"latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_gcm"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"ptest": "latency",
+				"devtype": "crypto_aesni_gcm"
+			}
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "16",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead"
+		},
+		"AES-GCM-256 aead-op encrypt latency": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead"
+		}
+	}
+}
diff --git a/usertools/configs/crypto-perf-aesni-mb.json b/usertools/configs/crypto-perf-aesni-mb.json
new file mode 100644
index 0000000000..d50e4af36c
--- /dev/null
+++ b/usertools/configs/crypto-perf-aesni-mb.json
@@ -0,0 +1,108 @@
+{
+	"throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_mb"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"ptest": "throughput",
+				"devtype": "crypto_aesni_mb"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"auth-op": "generate",
+			"auth-key-sz": "64",
+			"digest-sz": "20",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt"
+		},
+		"AES-GMAC 128 auth-only generate": {
+			"auth-algo": "aes-gmac",
+			"auth-key-sz": "16",
+			"auth-iv-sz": "12",
+			"auth-op": "generate",
+			"digest-sz": "16",
+			"optype": "auth-only",
+			"total-ops": "10000000"
+		}
+	},
+	"latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_mb"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"ptest": "latency",
+				"devtype": "crypto_aesni_mb"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		}
+	}
+}
diff --git a/usertools/configs/crypto-perf-qat.json b/usertools/configs/crypto-perf-qat.json
new file mode 100644
index 0000000000..0adb809e39
--- /dev/null
+++ b/usertools/configs/crypto-perf-qat.json
@@ -0,0 +1,94 @@
+{
+	"throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"devtype": "crypto_qat",
+				"ptest": "throughput"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt"
+		}
+	},
+	"latency": {
+		"default": {
+			"eal": {
+				"l": "1,2"
+			},
+			"app": {
+				"csv-friendly": true,
+				"ptest": "latency",
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"devtype": "crypto_qat"
+			}
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "encrypt"
+		}
+	}
+}
diff --git a/usertools/dpdk-graph-crypto-perf.py b/usertools/dpdk-graph-crypto-perf.py
new file mode 100755
index 0000000000..f4341ee718
--- /dev/null
+++ b/usertools/dpdk-graph-crypto-perf.py
@@ -0,0 +1,309 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+
+"""
+Script to automate running crypto performance tests for a range of test
+cases as configured in the JSON file specified by the user.
+The results are processed and output into various graphs in PDF files.
+Currently, throughput and latency tests are supported.
+"""
+
+import glob
+import json
+import os
+import shutil
+import subprocess
+from argparse import ArgumentParser
+from argparse import ArgumentDefaultsHelpFormatter
+import img2pdf
+import pandas as pd
+import plotly.express as px
+
+SCRIPT_PATH = os.path.dirname(__file__) + "/"
+GRAPH_DIR = "temp_graphs"
+
+
+class Grapher:
+    """Grapher object containing all graphing functions. """
+    def __init__(self, config, suite, graph_path):
+        self.graph_num = 0
+        self.graph_path = graph_path
+        self.suite = suite
+        self.config = config
+        self.test = ""
+        self.ptest = ""
+        self.data = pd.DataFrame()
+
+    def save_graph(self, fig, subdir):
+        """
+        Update figure layout to increase readability, output to JPG file.
+        """
+        path = os.path.join(self.graph_path, subdir, "")
+        if not os.path.exists(path):
+            os.makedirs(path)
+        fig.update_layout(font_size=30, title_x=0.5, title_font={"size": 25},
+                          margin={'t': 300, 'l': 150, 'r': 150, 'b': 150})
+        fig.write_image(path + "%d.jpg" % self.graph_num)
+
+    def boxplot_graph(self, x_axis_label, burst, buffer):
+        """Plot a boxplot graph for the given parameters."""
+        fig = px.box(self.data, x=x_axis_label,
+                     title="Config: " + self.config + "<br>Test Suite: " +
+                     self.suite + "<br>" + self.test +
+                     "<br>(Outliers Included)<br>Burst Size: " + burst +
+                     ", Buffer Size: " + buffer,
+                     height=1400, width=2400)
+        self.save_graph(fig, x_axis_label.replace(' ', '_'))
+        self.graph_num += 1
+
+    def grouped_graph(self, y_axis_label, x_axis_label, color_label):
+        """Plot a grouped barchart using the given parameters."""
+        if (self.data[y_axis_label] == 0).all():
+            return
+        fig = px.bar(self.data, x=x_axis_label, color=color_label,
+                     y=y_axis_label,
+                     title="Config: " + self.config + "<br>Test Suite: " +
+                     self.suite + "<br>" + self.test + "<br>"
+                     + y_axis_label + " for each " + x_axis_label +
+                     "/" + color_label, barmode="group", height=1400,
+                     width=2400)
+        fig.update_xaxes(type='category')
+        self.save_graph(fig, y_axis_label.replace(' ', '_'))
+        self.graph_num += 1
+
+    def histogram_graph(self, x_axis_label, burst, buffer):
+        """Plot a histogram graph using the given parameters."""
+        quart1 = self.data[x_axis_label].quantile(0.25)
+        quart3 = self.data[x_axis_label].quantile(0.75)
+        inter_quart_range = quart3 - quart1
+        data_out = self.data[~((self.data[x_axis_label] <
+                                (quart1 - 1.5 * inter_quart_range)) |
+                               (self.data[x_axis_label] >
+                                (quart3 + 1.5 * inter_quart_range)))]
+        fig = px.histogram(data_out, x=x_axis_label,
+                           title="Config: " + self.config + "<br>Test Suite: "
+                           + self.suite + "<br>" + self.test
+                           + "<br>(Outliers removed using Interquartile Range)"
+                           + "<br>Burst Size: " + burst + ", Buffer Size: " +
+                           buffer, height=1400, width=2400)
+        max_val = data_out[x_axis_label].max()
+        min_val = data_out[x_axis_label].min()
+        fig.update_traces(xbins=dict(
+            start=min_val,
+            end=max_val,
+            size=(max_val - min_val) / 200
+        ))
+        self.save_graph(fig, x_axis_label.replace(' ', '_'))
+        self.graph_num += 1
+
+
+def cleanup_throughput_datatypes(data):
+    """Cleanup data types of throughput test results dataframe. """
+    data.columns = data.columns.str.replace('/', ' ')
+    data.columns = data.columns.str.strip()
+    data['Burst Size'] = data['Burst Size'].astype('category')
+    data['Buffer Size(B)'] = data['Buffer Size(B)'].astype('category')
+    data['Failed Enq'] = data['Failed Enq'].astype('int')
+    data['Throughput(Gbps)'] = data['Throughput(Gbps)'].astype('float')
+    data['Ops(Millions)'] = data['Ops(Millions)'].astype('float')
+    data['Cycles Buf'] = data['Cycles Buf'].astype('float')
+    return data
+
+
+def cleanup_latency_datatypes(data):
+    """Cleanup data types of latency test results dataframe. """
+    data.columns = data.columns.str.strip()
+    data = data[['Burst Size', 'Buffer Size', 'time (us)']].copy()
+    data['Burst Size'] = data['Burst Size'].astype('category')
+    data['Buffer Size'] = data['Buffer Size'].astype('category')
+    data['time (us)'] = data['time (us)'].astype('float')
+    return data
+
+
+def process_test_results(grapher, data):
+    """
+    Process results from the test case,
+    calling graph functions to output graph images.
+    """
+    if grapher.ptest == "throughput":
+        grapher.data = cleanup_throughput_datatypes(data)
+        for y_label in ["Throughput(Gbps)", "Ops(Millions)",
+                        "Cycles Buf", "Failed Enq"]:
+            grapher.grouped_graph(y_label, "Buffer Size(B)",
+                                  "Burst Size")
+    elif grapher.ptest == "latency":
+        clean_data = cleanup_latency_datatypes(data)
+        for (burst, buffer), group in clean_data.groupby(['Burst Size',
+                                                          'Buffer Size']):
+            grapher.data = group
+            grapher.histogram_graph("time (us)", burst, buffer)
+            grapher.boxplot_graph("time (us)", burst, buffer)
+    else:
+        print("Invalid ptest")
+        return
+
+
+def create_results_pdf(graph_path, pdf_path):
+    """Output results graphs to PDFs."""
+    if not os.path.exists(pdf_path):
+        os.makedirs(pdf_path)
+    for _, dirs, _ in os.walk(graph_path):
+        for sub in dirs:
+            graphs = sorted(glob.glob(os.path.join(graph_path, sub, "*.jpg")),
+                            key=(lambda x: int((x.rsplit('/', 1)[1])
+                                               .split('.')[0])))
+            if graphs:
+                with open(pdf_path + "%s_results.pdf" % sub, "wb") as pdf_file:
+                    pdf_file.write(img2pdf.convert(graphs))
+
+
+def run_test(test_cmd, test, grapher, params, verbose):
+    """Run performance test app for the given test case parameters."""
+    process = subprocess.Popen(["stdbuf", "-oL", test_cmd] + params,
+                               universal_newlines=True,
+                               stdout=subprocess.PIPE,
+                               stderr=subprocess.STDOUT)
+    columns, rows = [], []
+    if verbose:
+        print("\n\tOutput for " + test + ":")
+    while process.poll() is None:
+        line = process.stdout.readline().strip()
+        if not line:
+            continue
+        if verbose:
+            print("\t\t>>" + line)
+
+        if line.replace(' ', '').startswith('#lcore'):
+            columns = line[1:].split(',')
+        elif line[0].isdigit():
+            line = line.replace(';', ',')
+            rows.append(line.split(','))
+        else:
+            continue
+
+    if process.poll() != 0 or not columns or not rows:
+        print("\n\t" + test + ": FAIL")
+        return
+    data = pd.DataFrame(rows, columns=columns)
+    grapher.test = test
+    process_test_results(grapher, data)
+    print("\n\t" + test + ": OK")
+    return
+
+
+def run_test_suite(test_cmd, suite_config, verbose):
+    """Parse test cases for the test suite and run each test."""
+    print("\nRunning Test Suite: " + suite_config['suite'])
+    default_params = []
+    graph_path = os.path.join(suite_config['output_path'], GRAPH_DIR,
+                              suite_config['suite'], "")
+    grapher = Grapher(suite_config['config_name'], suite_config['suite'],
+                      graph_path)
+    test_cases = suite_config['test_cases']
+    if 'default' not in test_cases:
+        print("Test Suite must contain default case, skipping")
+        return
+    for (key, val) in test_cases['default']['eal'].items():
+        if len(key) == 1:
+            default_params.append("-" + key + " " + val)
+        else:
+            default_params.append("--" + key + "=" + val)
+
+    default_params.append("--")
+    for (key, val) in test_cases['default']['app'].items():
+        if isinstance(val, bool):
+            default_params.append("--" + key if val is True else "")
+        else:
+            default_params.append("--" + key + "=" + val)
+
+    if 'ptest' not in test_cases['default']['app']:
+        print("Test Suite must contain default ptest value, skipping")
+        return
+    grapher.ptest = test_cases['default']['app']['ptest']
+
+    for (test, params) in {k: v for (k, v) in test_cases.items() if
+                           k != "default"}.items():
+        extra_params = []
+        for (key, val) in params.items():
+            if isinstance(val, bool):
+                extra_params.append("--" + key if val is True else "")
+            else:
+                extra_params.append("--" + key + "=" + val)
+
+        run_test(test_cmd, test, grapher, default_params + extra_params,
+                 verbose)
+
+    create_results_pdf(graph_path, os.path.join(suite_config['output_path'],
+                                                suite_config['suite'], ""))
+
+
+def parse_args():
+    """Parse command-line arguments passed to script."""
+    parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
+    parser.add_argument('config_path', type=str,
+                        help="Path to JSON configuration file")
+    parser.add_argument('-t', '--test-suites', nargs='+', default=["all"],
+                        help="List of test suites to run")
+    parser.add_argument('-v', '--verbose', action='store_true',
+                        help="""Display perf test app output.
+                        Not recommended for latency tests.""")
+    parser.add_argument('-f', '--file-path',
+                        default=shutil.which('dpdk-test-crypto-perf'),
+                        help="Path for perf test app")
+    parser.add_argument('-o', '--output-path', default=SCRIPT_PATH,
+                        help="Path to store output directories")
+    args = parser.parse_args()
+    return (args.file_path, args.test_suites, args.config_path,
+            args.output_path, args.verbose)
+
+
+def main():
+    """
+    Load JSON config and call relevant functions to run chosen test suites.
+    """
+    test_cmd, test_suites, config_file, output_path, verbose = parse_args()
+    if test_cmd is None or not os.path.isfile(test_cmd):
+        print("Invalid filepath for perf test app!")
+        return
+    try:
+        with open(config_file) as conf:
+            test_suite_ops = json.load(conf)
+            config_name = os.path.splitext(config_file)[0]
+            if '/' in config_name:
+                config_name = config_name.rsplit('/', 1)[1]
+            output_path = os.path.join(output_path, config_name, "")
+            print("Using config: " + config_file)
+    except OSError as err:
+        print("Error with JSON file path: " + err.strerror)
+        return
+    except json.decoder.JSONDecodeError as err:
+        print("Error loading JSON config: " + err.msg)
+        return
+
+    if test_suites != ["all"]:
+        suite_list = []
+        for (suite, test_cases) in {k: v for (k, v) in test_suite_ops.items()
+                                    if k in test_suites}.items():
+            suite_list.append(suite)
+            suite_config = {'config_name': config_name, 'suite': suite,
+                            'test_cases': test_cases,
+                            'output_path': output_path}
+            run_test_suite(test_cmd, suite_config, verbose)
+        if not suite_list:
+            print("No valid test suites chosen!")
+            return
+    else:
+        for (suite, test_cases) in test_suite_ops.items():
+            suite_config = {'config_name': config_name, 'suite': suite,
+                            'test_cases': test_cases,
+                            'output_path': output_path}
+            run_test_suite(test_cmd, suite_config, verbose)
+
+    graph_path = os.path.join(output_path, GRAPH_DIR, "")
+    if os.path.exists(graph_path):
+        shutil.rmtree(graph_path)
+
+
+if __name__ == "__main__":
+    main()
-- 
2.25.1
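The script tail above ends the patch's config handling. For readers following along, the JSON shape that `run_test_suite()` and `main()` expect can be reconstructed from the parsing code; the sketch below is illustrative only (the suite name, flags and values are invented, not one of the shipped config files) and mirrors the script's flag-building rules:

```python
import json

# Illustrative config (invented values, NOT a shipped config file): each
# suite maps case names to parameter dicts and must include a "default"
# case whose "app" section carries the mandatory "ptest" value.
CONFIG = json.loads("""
{
    "throughput": {
        "default": {
            "eal": {"l": "1,2", "vdev": "crypto_aesni_mb"},
            "app": {"ptest": "throughput", "csv-friendly": true}
        },
        "AES-CBC-128": {"cipher-algo": "aes-cbc", "cipher-key-sz": "16"}
    }
}
""")


def build_params(case):
    """Build CLI flags the same way run_test_suite() does."""
    params = []
    for key, val in case.get("eal", {}).items():
        # single-letter keys become "-k val", long keys "--key=val"
        params.append("-" + key + " " + val if len(key) == 1
                      else "--" + key + "=" + val)
    params.append("--")  # separates EAL args from app args
    for key, val in case.get("app", {}).items():
        if isinstance(val, bool):
            params.append("--" + key if val else "")
        else:
            params.append("--" + key + "=" + val)
    return params


default_case = CONFIG["throughput"]["default"]
assert "ptest" in default_case["app"]  # the script skips suites without it
print(build_params(default_case))
```

Note how booleans map to bare flags (`--csv-friendly`) while other values use `--key=value`, matching the two loops in the patch above.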


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH v2 4/4] maintainers: update crypto perf app maintainers
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
                     ` (2 preceding siblings ...)
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 3/4] usertools: add script to graph crypto perf results Ciara Power
@ 2021-01-14 10:41   ` Ciara Power
  2021-01-15 10:13     ` Dybkowski, AdamX
  2021-01-15  8:31   ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Doherty, Declan
  4 siblings, 1 reply; 27+ messages in thread
From: Ciara Power @ 2021-01-14 10:41 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski,
	Ciara Power, Thomas Monjalon

This patch adds a maintainer for the crypto perf test application,
to cover the new perf test graphing script.

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 52265e7b02..17b7ad176a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1586,6 +1586,7 @@ F: doc/guides/tools/comp_perf.rst
 
 Crypto performance test application
 M: Declan Doherty <declan.doherty@intel.com>
+M: Ciara Power <ciara.power@intel.com>
 T: git://dpdk.org/next/dpdk-next-crypto
 F: app/test-crypto-perf/
 F: doc/guides/tools/cryptoperf.rst
-- 
2.25.1



* Re: [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
                     ` (3 preceding siblings ...)
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 4/4] maintainers: update crypto perf app maintainers Ciara Power
@ 2021-01-15  8:31   ` Doherty, Declan
  2021-01-15 15:54     ` Akhil Goyal
  4 siblings, 1 reply; 27+ messages in thread
From: Doherty, Declan @ 2021-01-15  8:31 UTC (permalink / raw)
  To: Ciara Power, dev; +Cc: akhil.goyal, stephen, adamx.dybkowski



On 14/01/2021 10:41 AM, Ciara Power wrote:
> This patchset introduces a python script to run various crypto performance
> test cases, and graph the results in a consumable manner. The test suites
> are configured via JSON file. Some config files are provided,
> or the user may create one. Currently throughput and latency ptests for
> devices crypto_qat, crypto_aesni_mb and crypto_aesni_gcm are supported.
> 
> The final collection of graphs are output in PDF format, with multiple PDFs
> per test suite, one for each graph type.
> 
> Some fixes are included for the throughput performance test and latency
> performance test csv outputs also.
> 
> v2:
>    - Reduced changes to only fix csv format for all perf test types.
>    - Added functionality for additional args such as config file,
>      output directory and verbose.
>    - Improved help text for script.
>    - Improved script console output.
>    - Added support for latency test cases with burst or buffer size lists.
>    - Split config file into smaller config files, one for each device.
>    - Split output PDFs into smaller files, based on test suite graph types.
>    - Modified output directory naming and structure.
>    - Made some general improvements to script.
>    - Updated and improved documentation.
> 
> Ciara Power (4):
>    test/cryptodev: fix latency test csv output
>    test/cryptodev: fix csv output format
>    usertools: add script to graph crypto perf results
>    maintainers: update crypto perf app maintainers
> 
>   MAINTAINERS                                   |   3 +
>   app/test-crypto-perf/cperf_test_latency.c     |   4 +-
>   .../cperf_test_pmd_cyclecount.c               |   2 +-
>   app/test-crypto-perf/cperf_test_throughput.c  |   4 +-
>   app/test-crypto-perf/cperf_test_verify.c      |   2 +-
>   doc/guides/tools/cryptoperf.rst               | 142 ++++++++
>   usertools/configs/crypto-perf-aesni-gcm.json  |  99 ++++++
>   usertools/configs/crypto-perf-aesni-mb.json   | 108 ++++++
>   usertools/configs/crypto-perf-qat.json        |  94 ++++++
>   usertools/dpdk-graph-crypto-perf.py           | 309 ++++++++++++++++++
>   10 files changed, 761 insertions(+), 6 deletions(-)
>   create mode 100644 usertools/configs/crypto-perf-aesni-gcm.json
>   create mode 100644 usertools/configs/crypto-perf-aesni-mb.json
>   create mode 100644 usertools/configs/crypto-perf-qat.json
>   create mode 100755 usertools/dpdk-graph-crypto-perf.py
>

Series Acked-by: Declan Doherty <declan.doherty@intel.com>


* Re: [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output Ciara Power
@ 2021-01-15  9:42     ` Dybkowski, AdamX
  0 siblings, 0 replies; 27+ messages in thread
From: Dybkowski, AdamX @ 2021-01-15  9:42 UTC (permalink / raw)
  To: Power, Ciara, dev
  Cc: Doherty, Declan, akhil.goyal, stephen, De Lara Guarch, Pablo, stable

> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Thursday, 14 January, 2021 11:41
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; akhil.goyal@nxp.com;
> stephen@networkplumber.org; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Power, Ciara <ciara.power@intel.com>; De
> Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; stable@dpdk.org
> Subject: [PATCH v2 1/4] test/cryptodev: fix latency test csv output
> 
> The csv output for the latency performance test had an extra header, "Packet
> Size", which is a duplicate of "Buffer Size", and had no corresponding value in
> the output. This is now removed.
> 
> Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
> Cc: pablo.de.lara.guarch@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ciara Power <ciara.power@intel.com>

Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>



* Re: [dpdk-dev] [PATCH v2 2/4] test/cryptodev: fix csv output format
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 2/4] test/cryptodev: fix csv output format Ciara Power
@ 2021-01-15  9:42     ` Dybkowski, AdamX
  0 siblings, 0 replies; 27+ messages in thread
From: Dybkowski, AdamX @ 2021-01-15  9:42 UTC (permalink / raw)
  To: Power, Ciara, dev
  Cc: Doherty, Declan, akhil.goyal, stephen, Burakov, Anatoly,
	De Lara Guarch, Pablo, stable

> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Thursday, 14 January, 2021 11:41
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; akhil.goyal@nxp.com;
> stephen@networkplumber.org; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Power, Ciara <ciara.power@intel.com>;
> Burakov, Anatoly <anatoly.burakov@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; stable@dpdk.org
> Subject: [PATCH v2 2/4] test/cryptodev: fix csv output format
> 
> The csv output for each ptest type used ";" instead of ",".
> This has now been fixed to use the comma format that is used in the csv
> headers.
> 
> Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
> Fixes: 96dfeb609be1 ("app/crypto-perf: add new PMD benchmarking mode")
> Fixes: da40ebd6d383 ("app/crypto-perf: display results in test runner")
> Cc: anatoly.burakov@intel.com
> Cc: pablo.de.lara.guarch@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ciara Power <ciara.power@intel.com>

Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>



* Re: [dpdk-dev] [PATCH v2 3/4] usertools: add script to graph crypto perf results
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 3/4] usertools: add script to graph crypto perf results Ciara Power
@ 2021-01-15  9:43     ` Dybkowski, AdamX
  0 siblings, 0 replies; 27+ messages in thread
From: Dybkowski, AdamX @ 2021-01-15  9:43 UTC (permalink / raw)
  To: Power, Ciara, dev; +Cc: Doherty, Declan, akhil.goyal, stephen, Thomas Monjalon

> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Thursday, 14 January, 2021 11:41
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; akhil.goyal@nxp.com;
> stephen@networkplumber.org; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Power, Ciara <ciara.power@intel.com>;
> Thomas Monjalon <thomas@monjalon.net>
> Subject: [PATCH v2 3/4] usertools: add script to graph crypto perf results
> 
> The python script introduced in this patch runs the crypto performance test
> application for various test cases, and graphs the results.
> 
> Test cases are defined in config JSON files, this is where parameters are
> specified for each test. Currently there are various test cases for devices
> crypto_qat, crypto_aesni_mb and crypto_gcm. Tests for the ptest types
> Throughput and Latency are supported for each.
> 
> The results of each test case are graphed and saved in PDFs (one PDF for each
> test suite graph type, with all test cases).
> The graphs output include various grouped barcharts for throughput tests, and
> histogram and boxplot graphs are used for latency tests.
> 
> Documentation is added to outline the configuration and usage for the script.
> 
> Usage:
> A JSON config file must be specified when running the script,
> 	"./dpdk-graph-crypto-perf <config_file>"
> 
> The script uses the installed app by default (from ninja install).
> Alternatively we can pass path to app by
> 	"-f <rel_path>/<build_dir>/app/dpdk-test-crypto-perf"
> 
> All device test suites are run by default.
> Alternatively we can specify by adding arguments,
> 	"-t latency" - to run latency test suite only
> 	"-t throughput latency"
> 		- to run both throughput and latency test suites
> 
> A directory can be specified for all output files, or the script directory is used by
> default.
> 	"-o <output_dir>"
> 
> To see the output from the dpdk-test-crypto-perf app, use the verbose
> option "-v".
> 
> Signed-off-by: Ciara Power <ciara.power@intel.com>

Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>



* Re: [dpdk-dev] [PATCH v2 4/4] maintainers: update crypto perf app maintainers
  2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 4/4] maintainers: update crypto perf app maintainers Ciara Power
@ 2021-01-15 10:13     ` Dybkowski, AdamX
  0 siblings, 0 replies; 27+ messages in thread
From: Dybkowski, AdamX @ 2021-01-15 10:13 UTC (permalink / raw)
  To: Power, Ciara, dev; +Cc: Doherty, Declan, akhil.goyal, stephen, Thomas Monjalon

> -----Original Message-----
> From: Power, Ciara <ciara.power@intel.com>
> Sent: Thursday, 14 January, 2021 11:41
> To: dev@dpdk.org
> Cc: Doherty, Declan <declan.doherty@intel.com>; akhil.goyal@nxp.com;
> stephen@networkplumber.org; Dybkowski, AdamX
> <adamx.dybkowski@intel.com>; Power, Ciara <ciara.power@intel.com>;
> Thomas Monjalon <thomas@monjalon.net>
> Subject: [PATCH v2 4/4] maintainers: update crypto perf app maintainers
> 
> This patch adds a maintainer for the crypto perf test application, to cover the
> new perf test graphing script.
> 
> Signed-off-by: Ciara Power <ciara.power@intel.com>

Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>



* Re: [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script
  2021-01-15  8:31   ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Doherty, Declan
@ 2021-01-15 15:54     ` Akhil Goyal
  2021-01-19 17:31       ` Thomas Monjalon
  0 siblings, 1 reply; 27+ messages in thread
From: Akhil Goyal @ 2021-01-15 15:54 UTC (permalink / raw)
  To: Doherty, Declan, Ciara Power, dev; +Cc: stephen, adamx.dybkowski

> On 14/01/2021 10:41 AM, Ciara Power wrote:
> > This patchset introduces a python script to run various crypto performance
> > test cases, and graph the results in a consumable manner. The test suites
> > are configured via JSON file. Some config files are provided,
> > or the user may create one. Currently throughput and latency ptests for
> > devices crypto_qat, crypto_aesni_mb and crypto_aesni_gcm are supported.
> >
> > The final collection of graphs are output in PDF format, with multiple PDFs
> > per test suite, one for each graph type.
> >
> > Some fixes are included for the throughput performance test and latency
> > performance test csv outputs also.
> >
> > v2:
> >    - Reduced changes to only fix csv format for all perf test types.
> >    - Added functionality for additional args such as config file,
> >      output directory and verbose.
> >    - Improved help text for script.
> >    - Improved script console output.
> >    - Added support for latency test cases with burst or buffer size lists.
> >    - Split config file into smaller config files, one for each device.
> >    - Split output PDFs into smaller files, based on test suite graph types.
> >    - Modified output directory naming and structure.
> >    - Made some general improvements to script.
> >    - Updated and improved documentation.
> >
> > Ciara Power (4):
> >    test/cryptodev: fix latency test csv output
> >    test/cryptodev: fix csv output format
> >    usertools: add script to graph crypto perf results
> >    maintainers: update crypto perf app maintainers
> >
> >   MAINTAINERS                                   |   3 +
> >   app/test-crypto-perf/cperf_test_latency.c     |   4 +-
> >   .../cperf_test_pmd_cyclecount.c               |   2 +-
> >   app/test-crypto-perf/cperf_test_throughput.c  |   4 +-
> >   app/test-crypto-perf/cperf_test_verify.c      |   2 +-
> >   doc/guides/tools/cryptoperf.rst               | 142 ++++++++
> >   usertools/configs/crypto-perf-aesni-gcm.json  |  99 ++++++
> >   usertools/configs/crypto-perf-aesni-mb.json   | 108 ++++++
> >   usertools/configs/crypto-perf-qat.json        |  94 ++++++
> >   usertools/dpdk-graph-crypto-perf.py           | 309 ++++++++++++++++++
> >   10 files changed, 761 insertions(+), 6 deletions(-)
> >   create mode 100644 usertools/configs/crypto-perf-aesni-gcm.json
> >   create mode 100644 usertools/configs/crypto-perf-aesni-mb.json
> >   create mode 100644 usertools/configs/crypto-perf-qat.json
> >   create mode 100755 usertools/dpdk-graph-crypto-perf.py
> >
> 
> Series Acked-by: Declan Doherty <declan.doherty@intel.com>

Applied to dpdk-next-crypto

Thanks.


* Re: [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script
  2021-01-15 15:54     ` Akhil Goyal
@ 2021-01-19 17:31       ` Thomas Monjalon
  2021-01-19 17:34         ` Akhil Goyal
  0 siblings, 1 reply; 27+ messages in thread
From: Thomas Monjalon @ 2021-01-19 17:31 UTC (permalink / raw)
  To: Ciara Power, Akhil Goyal; +Cc: Doherty, Declan, dev, stephen, adamx.dybkowski

15/01/2021 16:54, Akhil Goyal:
> > On 14/01/2021 10:41 AM, Ciara Power wrote:
> > > Ciara Power (4):
> > >    test/cryptodev: fix latency test csv output
> > >    test/cryptodev: fix csv output format
> > >    usertools: add script to graph crypto perf results
> > >    maintainers: update crypto perf app maintainers
> > >
> > >   MAINTAINERS                                   |   3 +
> > >   app/test-crypto-perf/cperf_test_latency.c     |   4 +-
> > >   .../cperf_test_pmd_cyclecount.c               |   2 +-
> > >   app/test-crypto-perf/cperf_test_throughput.c  |   4 +-
> > >   app/test-crypto-perf/cperf_test_verify.c      |   2 +-
> > >   doc/guides/tools/cryptoperf.rst               | 142 ++++++++
> > >   usertools/configs/crypto-perf-aesni-gcm.json  |  99 ++++++
> > >   usertools/configs/crypto-perf-aesni-mb.json   | 108 ++++++
> > >   usertools/configs/crypto-perf-qat.json        |  94 ++++++
> > >   usertools/dpdk-graph-crypto-perf.py           | 309 ++++++++++++++++++
> > >   10 files changed, 761 insertions(+), 6 deletions(-)
> > >   create mode 100644 usertools/configs/crypto-perf-aesni-gcm.json
> > >   create mode 100644 usertools/configs/crypto-perf-aesni-mb.json
> > >   create mode 100644 usertools/configs/crypto-perf-qat.json
> > >   create mode 100755 usertools/dpdk-graph-crypto-perf.py
> > >
> > 
> > Series Acked-by: Declan Doherty <declan.doherty@intel.com>
> 
> Applied to dpdk-next-crypto

Sorry I missed this series and discovered it only when looking at the crypto tree.
I see that the crypto perf script and configs are located in usertools.
I think it should be with the app in app/test-crypto-perf/
The usertools directory is for tools used in production by end users.

Please consider changing the directory for the -rc2.
Thanks




* Re: [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script
  2021-01-19 17:31       ` Thomas Monjalon
@ 2021-01-19 17:34         ` Akhil Goyal
  0 siblings, 0 replies; 27+ messages in thread
From: Akhil Goyal @ 2021-01-19 17:34 UTC (permalink / raw)
  To: Thomas Monjalon, Ciara Power
  Cc: Doherty, Declan, dev, stephen, adamx.dybkowski

> 15/01/2021 16:54, Akhil Goyal:
> > > On 14/01/2021 10:41 AM, Ciara Power wrote:
> > > > Ciara Power (4):
> > > >    test/cryptodev: fix latency test csv output
> > > >    test/cryptodev: fix csv output format
> > > >    usertools: add script to graph crypto perf results
> > > >    maintainers: update crypto perf app maintainers
> > > >
> > > >   MAINTAINERS                                   |   3 +
> > > >   app/test-crypto-perf/cperf_test_latency.c     |   4 +-
> > > >   .../cperf_test_pmd_cyclecount.c               |   2 +-
> > > >   app/test-crypto-perf/cperf_test_throughput.c  |   4 +-
> > > >   app/test-crypto-perf/cperf_test_verify.c      |   2 +-
> > > >   doc/guides/tools/cryptoperf.rst               | 142 ++++++++
> > > >   usertools/configs/crypto-perf-aesni-gcm.json  |  99 ++++++
> > > >   usertools/configs/crypto-perf-aesni-mb.json   | 108 ++++++
> > > >   usertools/configs/crypto-perf-qat.json        |  94 ++++++
> > > >   usertools/dpdk-graph-crypto-perf.py           | 309 ++++++++++++++++++
> > > >   10 files changed, 761 insertions(+), 6 deletions(-)
> > > >   create mode 100644 usertools/configs/crypto-perf-aesni-gcm.json
> > > >   create mode 100644 usertools/configs/crypto-perf-aesni-mb.json
> > > >   create mode 100644 usertools/configs/crypto-perf-qat.json
> > > >   create mode 100755 usertools/dpdk-graph-crypto-perf.py
> > > >
> > >
> > > Series Acked-by: Declan Doherty <declan.doherty@intel.com>
> >
> > Applied to dpdk-next-crypto
> 
> Sorry I missed this series and I discover it when looking at the crypto tree.
> I see that the crypto perf script and configs are located in usertools.
> I think it should be with the app in app/test-crypto-perf/
> The usertools directory is for tools used in production by end users.
> 
> Please consider changing the directory for the -rc2.
> Thanks
> 
Removed from next-crypto for now.


* [dpdk-dev] [PATCH v3 0/4] add crypto perf test graphing script
  2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
                   ` (4 preceding siblings ...)
  2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
@ 2021-01-20 17:29 ` Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 1/4] test/cryptodev: fix latency test csv output Ciara Power
                     ` (4 more replies)
  5 siblings, 5 replies; 27+ messages in thread
From: Ciara Power @ 2021-01-20 17:29 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski, thomas,
	Ciara Power

This patchset introduces a python script to run various crypto performance
test cases, and graph the results in a consumable manner. The test suites
are configured via JSON file. Some config files are provided,
or the user may create one. Currently throughput and latency ptests for
devices crypto_qat, crypto_aesni_mb and crypto_aesni_gcm are supported.

The final collection of graphs are output in PDF format, with multiple PDFs
per test suite, one for each graph type.

Some fixes for the throughput performance test and latency performance
test csv outputs are also included.

v3:
  - Moved script and configs to app/test-crypto-perf directory.
  - Made changes to documentation and MAINTAINERS to reflect the above change.
v2:
  - Reduced changes to only fix csv format for all perf test types.
  - Added functionality for additional args such as config file,
    output directory and verbose.
  - Improved help text for script.
  - Improved script console output.
  - Added support for latency test cases with burst or buffer size lists.
  - Split config file into smaller config files, one for each device.
  - Split output PDFs into smaller files, based on test suite graph types.
  - Modified output directory naming and structure.
  - Made some general improvements to script.
  - Updated and improved documentation.

Ciara Power (4):
  test/cryptodev: fix latency test csv output
  test/cryptodev: fix csv output format
  test/cryptodev: add script to graph perf results
  maintainers: update crypto perf app maintainers

 MAINTAINERS                                   |   1 +
 .../configs/crypto-perf-aesni-gcm.json        |  99 ++++++
 .../configs/crypto-perf-aesni-mb.json         | 108 ++++++
 .../configs/crypto-perf-qat.json              |  94 ++++++
 app/test-crypto-perf/cperf_test_latency.c     |   4 +-
 .../cperf_test_pmd_cyclecount.c               |   2 +-
 app/test-crypto-perf/cperf_test_throughput.c  |   4 +-
 app/test-crypto-perf/cperf_test_verify.c      |   2 +-
 .../dpdk-graph-crypto-perf.py                 | 309 ++++++++++++++++++
 doc/guides/tools/cryptoperf.rst               | 143 ++++++++
 10 files changed, 760 insertions(+), 6 deletions(-)
 create mode 100644 app/test-crypto-perf/configs/crypto-perf-aesni-gcm.json
 create mode 100644 app/test-crypto-perf/configs/crypto-perf-aesni-mb.json
 create mode 100644 app/test-crypto-perf/configs/crypto-perf-qat.json
 create mode 100755 app/test-crypto-perf/dpdk-graph-crypto-perf.py

-- 
2.25.1



* [dpdk-dev] [PATCH v3 1/4] test/cryptodev: fix latency test csv output
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
@ 2021-01-20 17:29   ` Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 2/4] test/cryptodev: fix csv output format Ciara Power
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Ciara Power @ 2021-01-20 17:29 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski, thomas,
	Ciara Power, pablo.de.lara.guarch, stable

The csv output for the latency performance test had an extra header,
"Packet Size", which is a duplicate of "Buffer Size", and had no
corresponding value in the output. This is now removed.

Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
---
 app/test-crypto-perf/cperf_test_latency.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 0e4d0e1538..c2590a4dcf 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -310,7 +310,7 @@ cperf_latency_test_runner(void *arg)
 		if (ctx->options->csv) {
 			if (rte_atomic16_test_and_set(&display_once))
 				printf("\n# lcore, Buffer Size, Burst Size, Pakt Seq #, "
-						"Packet Size, cycles, time (us)");
+						"cycles, time (us)");
 
 			for (i = 0; i < ctx->options->total_ops; i++) {
 
-- 
2.25.1



* [dpdk-dev] [PATCH v3 2/4] test/cryptodev: fix csv output format
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 1/4] test/cryptodev: fix latency test csv output Ciara Power
@ 2021-01-20 17:29   ` Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 3/4] test/cryptodev: add script to graph perf results Ciara Power
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Ciara Power @ 2021-01-20 17:29 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski, thomas,
	Ciara Power, anatoly.burakov, pablo.de.lara.guarch, stable

The csv output for each ptest type used ";" instead of ",".
This has now been fixed to use the comma format that is used in the csv
headers.

Fixes: f6cefe253cc8 ("app/crypto-perf: add range/list of sizes")
Fixes: 96dfeb609be1 ("app/crypto-perf: add new PMD benchmarking mode")
Fixes: da40ebd6d383 ("app/crypto-perf: display results in test runner")
Cc: anatoly.burakov@intel.com
Cc: pablo.de.lara.guarch@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>

---
v2:
  - Reduced changes to only fix csv format.
---
 app/test-crypto-perf/cperf_test_latency.c        | 2 +-
 app/test-crypto-perf/cperf_test_pmd_cyclecount.c | 2 +-
 app/test-crypto-perf/cperf_test_throughput.c     | 4 ++--
 app/test-crypto-perf/cperf_test_verify.c         | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index c2590a4dcf..159fe8492b 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -314,7 +314,7 @@ cperf_latency_test_runner(void *arg)
 
 			for (i = 0; i < ctx->options->total_ops; i++) {
 
-				printf("\n%u;%u;%u;%"PRIu64";%"PRIu64";%.3f",
+				printf("\n%u,%u,%u,%"PRIu64",%"PRIu64",%.3f",
 					ctx->lcore_id, ctx->options->test_buffer_size,
 					test_burst_size, i + 1,
 					ctx->res[i].tsc_end - ctx->res[i].tsc_start,
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 4e67d3aebd..844659aeca 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -16,7 +16,7 @@
 #define PRETTY_HDR_FMT "%12s%12s%12s%12s%12s%12s%12s%12s%12s%12s\n\n"
 #define PRETTY_LINE_FMT "%12u%12u%12u%12u%12u%12u%12u%12.0f%12.0f%12.0f\n"
 #define CSV_HDR_FMT "%s,%s,%s,%s,%s,%s,%s,%s,%s,%s\n"
-#define CSV_LINE_FMT "%10u;%10u;%u;%u;%u;%u;%u;%.3f;%.3f;%.3f\n"
+#define CSV_LINE_FMT "%10u,%10u,%u,%u,%u,%u,%u,%.3f,%.3f,%.3f\n"
 
 struct cperf_pmd_cyclecount_ctx {
 	uint8_t dev_id;
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index f30f7d5c2c..f6eb8cf259 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -299,8 +299,8 @@ cperf_throughput_test_runner(void *test_ctx)
 					"Failed Deq,Ops(Millions),Throughput(Gbps),"
 					"Cycles/Buf\n\n");
 
-			printf("%u;%u;%u;%"PRIu64";%"PRIu64";%"PRIu64";%"PRIu64";"
-					"%.3f;%.3f;%.3f\n",
+			printf("%u,%u,%u,%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64","
+					"%.3f,%.3f,%.3f\n",
 					ctx->lcore_id,
 					ctx->options->test_buffer_size,
 					test_burst_size,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 833bc9a552..2939aeaa93 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -406,7 +406,7 @@ cperf_verify_test_runner(void *test_ctx)
 				"Burst Size,Enqueued,Dequeued,Failed Enq,"
 				"Failed Deq,Failed Ops\n");
 
-		printf("%10u;%10u;%u;%"PRIu64";%"PRIu64";%"PRIu64";%"PRIu64";"
+		printf("%10u,%10u,%u,%"PRIu64",%"PRIu64",%"PRIu64",%"PRIu64","
 				"%"PRIu64"\n",
 				ctx->lcore_id,
 				ctx->options->max_buffer_size,
-- 
2.25.1
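Since this patch changes the data-row separator from ";" to ",", the rows now parse directly against the comma-separated header. A hedged sketch with invented sample values (the column list is abridged from the throughput test's CSV header, which the diff only shows in part) illustrating the separator normalization that the companion graphing script applies so output from older binaries still parses:

```python
# Invented sample values; column names based on the throughput test's
# CSV header shown partially in this patch.
header = ("lcore,Buffer Size(B),Burst Size,Enqueued,Dequeued,"
          "Failed Enq,Failed Deq,Ops(Millions),Throughput(Gbps),Cycles/Buf")
old_row = "1;1024;32;1000000;1000000;0;0;1.000;8.192;500.0"  # pre-fix output
new_row = "1,1024,32,1000000,1000000,0,0,1.000,8.192,500.0"  # post-fix output

columns = header.split(",")
for row in (old_row, new_row):
    # Normalizing ";" to "," first accepts both formats, the same
    # workaround dpdk-graph-crypto-perf uses before building its DataFrame.
    values = row.replace(";", ",").split(",")
    assert len(values) == len(columns)

record = dict(zip(columns, new_row.split(",")))
print(record["Throughput(Gbps)"])
```

With the fix in place the `replace()` step becomes a no-op, and the rows line up with the header without any special casing.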



* [dpdk-dev] [PATCH v3 3/4] test/cryptodev: add script to graph perf results
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 1/4] test/cryptodev: fix latency test csv output Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 2/4] test/cryptodev: fix csv output format Ciara Power
@ 2021-01-20 17:29   ` Ciara Power
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 4/4] maintainers: update crypto perf app maintainers Ciara Power
  2021-01-25 18:28   ` [dpdk-dev] [PATCH v3 0/4] add crypto perf test graphing script Akhil Goyal
  4 siblings, 0 replies; 27+ messages in thread
From: Ciara Power @ 2021-01-20 17:29 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski, thomas,
	Ciara Power

The python script introduced in this patch runs the crypto performance
test application for various test cases, and graphs the results.

Test cases are defined in JSON config files, where the parameters
for each test are specified. Currently there are various test cases
for the crypto_qat, crypto_aesni_mb and crypto_aesni_gcm devices.
Throughput and latency ptest types are supported for each.
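For reference, the way a config entry is flattened into command-line
arguments for dpdk-test-crypto-perf can be sketched as below. This is a
slightly tidied version of the logic in the script's run_test_suite
(build_params is an illustrative helper name, not part of the patch;
boolean flags are only emitted when true):

```python
def build_params(eal, app):
    """Flatten EAL and app option dicts into a dpdk-test-crypto-perf
    command line: single-letter EAL options use "-l 1,2" style, long
    options use "--key=value", and "--" separates EAL from app args."""
    params = []
    for key, val in eal.items():
        if len(key) == 1:
            params.append("-" + key + " " + val)
        else:
            params.append("--" + key + "=" + val)
    params.append("--")  # end of EAL options, start of app options
    for key, val in app.items():
        if isinstance(val, bool):
            if val:
                params.append("--" + key)  # boolean flag, e.g. --csv-friendly
        else:
            params.append("--" + key + "=" + val)
    return params

print(build_params({"l": "1,2", "vdev": "crypto_aesni_mb"},
                   {"csv-friendly": True, "ptest": "throughput"}))
```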

The results of each test case are graphed and saved in PDFs (one PDF for
each test suite graph type, with all test cases).
The output graphs include grouped bar charts for throughput tests,
and histogram and boxplot graphs for latency tests.
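Before drawing the latency histogram, the script drops outliers using the
interquartile-range rule. A stdlib-only sketch of that rule follows (the
script itself uses pandas quantiles, so the exact cut points can differ
slightly; the sample latencies are made up for illustration):

```python
import statistics

def remove_outliers(samples):
    """Drop points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], the same
    interquartile-range rule used for the latency histogram."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [s for s in samples if lo <= s <= hi]

latencies = [10.1, 10.3, 9.8, 10.0, 10.2, 55.0]  # 55.0 is an outlier
print(remove_outliers(latencies))
```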

Documentation is added to outline the configuration and usage for the
script.

Usage:
A JSON config file must be specified when running the script,
	"./dpdk-graph-crypto-perf <config_file>"

The script uses the installed app by default (from ninja install).
Alternatively, a path to the app can be passed with
	"-f <rel_path>/<build_dir>/app/dpdk-test-crypto-perf"

All device test suites are run by default.
Alternatively, specific test suites can be selected with arguments,
	"-t latency" - to run the latency test suite only
	"-t throughput latency"
		- to run both the throughput and latency test suites

A directory can be specified for all output files,
or the script directory is used by default.
	"-o <output_dir>"

To see the output from the dpdk-test-crypto-perf app,
use the verbose option "-v".
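Internally, the script reads the app's csv-friendly output line by line:
the header line starts with '#lcore', data lines start with a digit, and
';' separators (from older builds) are normalised to ','. A minimal
stdlib-only sketch of that parsing, using made-up sample lines rather
than real app output:

```python
def parse_perf_output(lines):
    """Parse csv-friendly perf output into a list of row dicts,
    mirroring the header/data-line handling in the script's run_test."""
    columns, rows = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.replace(' ', '').startswith('#lcore'):
            columns = line[1:].split(',')   # strip leading '#'
        elif line[0].isdigit():
            rows.append(line.replace(';', ',').split(','))
    return [dict(zip(columns, row)) for row in rows]

sample = [
    "#lcore id,Buffer Size,Burst Size,Enqueued,Dequeued",
    "1,1024,16,1000000,1000000",
]
print(parse_perf_output(sample))
```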

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>

---
v3:
  - Moved script and configs to app/test-crypto-perf directory.
  - Made small changes to commit log, documentation and MAINTAINERS
    file to reflect the above change.
v2:
  - Added functionality for additional args such as config file,
    output directory and verbose.
  - Improved help text for script.
  - Improved script console output.
  - Added support for latency test cases with burst or buffer size lists.
  - Split config file into smaller config files, one for each device.
  - Split output PDFs into smaller files, based on test suite graph types.
  - Modified output directory naming and structure.
  - Made some general improvements to script.
  - Updated and improved documentation.
  - Updated copyright year.
---
 .../configs/crypto-perf-aesni-gcm.json        |  99 ++++++
 .../configs/crypto-perf-aesni-mb.json         | 108 ++++++
 .../configs/crypto-perf-qat.json              |  94 ++++++
 .../dpdk-graph-crypto-perf.py                 | 309 ++++++++++++++++++
 doc/guides/tools/cryptoperf.rst               | 143 ++++++++
 5 files changed, 753 insertions(+)
 create mode 100644 app/test-crypto-perf/configs/crypto-perf-aesni-gcm.json
 create mode 100644 app/test-crypto-perf/configs/crypto-perf-aesni-mb.json
 create mode 100644 app/test-crypto-perf/configs/crypto-perf-qat.json
 create mode 100755 app/test-crypto-perf/dpdk-graph-crypto-perf.py

diff --git a/app/test-crypto-perf/configs/crypto-perf-aesni-gcm.json b/app/test-crypto-perf/configs/crypto-perf-aesni-gcm.json
new file mode 100644
index 0000000000..608a46e34f
--- /dev/null
+++ b/app/test-crypto-perf/configs/crypto-perf-aesni-gcm.json
@@ -0,0 +1,99 @@
+{
+	"throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_gcm"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"ptest": "throughput",
+				"devtype": "crypto_aesni_gcm"
+			}
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "16",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GMAC 128 auth-only generate": {
+			"auth-algo": "aes-gmac",
+			"auth-key-sz": "16",
+			"auth-iv-sz": "12",
+			"auth-op": "generate",
+			"digest-sz": "16",
+			"optype": "auth-only",
+			"total-ops": "10000000"
+		}
+	},
+	"latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_gcm"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"ptest": "latency",
+				"devtype": "crypto_aesni_gcm"
+			}
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt",
+			"aead-aad-sz": "16",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead"
+		},
+		"AES-GCM-256 aead-op encrypt latency": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "32",
+			"aead-iv-sz": "12",
+			"digest-sz": "16",
+			"optype": "aead"
+		}
+	}
+}
diff --git a/app/test-crypto-perf/configs/crypto-perf-aesni-mb.json b/app/test-crypto-perf/configs/crypto-perf-aesni-mb.json
new file mode 100644
index 0000000000..d50e4af36c
--- /dev/null
+++ b/app/test-crypto-perf/configs/crypto-perf-aesni-mb.json
@@ -0,0 +1,108 @@
+{
+	"throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_mb"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"ptest": "throughput",
+				"devtype": "crypto_aesni_mb"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"auth-op": "generate",
+			"auth-key-sz": "64",
+			"digest-sz": "20",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead",
+			"total-ops": "10000000"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt"
+		},
+		"AES-GMAC 128 auth-only generate": {
+			"auth-algo": "aes-gmac",
+			"auth-key-sz": "16",
+			"auth-iv-sz": "12",
+			"auth-op": "generate",
+			"digest-sz": "16",
+			"optype": "auth-only",
+			"total-ops": "10000000"
+		}
+	},
+	"latency": {
+		"default": {
+			"eal": {
+				"l": "1,2",
+				"vdev": "crypto_aesni_mb"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"ptest": "latency",
+				"devtype": "crypto_aesni_mb"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		}
+	}
+}
diff --git a/app/test-crypto-perf/configs/crypto-perf-qat.json b/app/test-crypto-perf/configs/crypto-perf-qat.json
new file mode 100644
index 0000000000..0adb809e39
--- /dev/null
+++ b/app/test-crypto-perf/configs/crypto-perf-qat.json
@@ -0,0 +1,94 @@
+{
+	"throughput": {
+		"default": {
+			"eal": {
+				"l": "1,2"
+			},
+			"app": {
+				"csv-friendly": true,
+				"buffer-sz": "64,128,256,512,768,1024,1408,2048",
+				"burst-sz": "1,4,8,16,32",
+				"devtype": "crypto_qat",
+				"ptest": "throughput"
+			}
+		},
+		"AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-128 SHA1-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "16",
+			"auth-algo": "sha1-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC auth-then-cipher decrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "auth-then-cipher",
+			"cipher-op": "decrypt"
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-iv-sz": "12",
+			"aead-op": "encrypt",
+			"aead-aad-sz": "16",
+			"digest-sz": "16",
+			"optype": "aead"
+		},
+		"AES-GCM-128 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "decrypt"
+		},
+		"AES-GCM-256 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "encrypt"
+		},
+		"AES-GCM-256 aead-op decrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "32",
+			"aead-op": "decrypt"
+		}
+	},
+	"latency": {
+		"default": {
+			"eal": {
+				"l": "1,2"
+			},
+			"app": {
+				"csv-friendly": true,
+				"ptest": "latency",
+				"buffer-sz": "1024",
+				"burst-sz": "16",
+				"devtype": "crypto_qat"
+			}
+		},
+		"AES-CBC-256 SHA2-256-HMAC cipher-then-auth encrypt": {
+			"cipher-algo": "aes-cbc",
+			"cipher-key-sz": "32",
+			"auth-algo": "sha2-256-hmac",
+			"optype": "cipher-then-auth",
+			"cipher-op": "encrypt"
+		},
+		"AES-GCM-128 aead-op encrypt": {
+			"aead-algo": "aes-gcm",
+			"aead-key-sz": "16",
+			"aead-op": "encrypt"
+		}
+	}
+}
diff --git a/app/test-crypto-perf/dpdk-graph-crypto-perf.py b/app/test-crypto-perf/dpdk-graph-crypto-perf.py
new file mode 100755
index 0000000000..f4341ee718
--- /dev/null
+++ b/app/test-crypto-perf/dpdk-graph-crypto-perf.py
@@ -0,0 +1,309 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+
+"""
+Script to automate running crypto performance tests for a range of test
+cases as configured in the JSON file specified by the user.
+The results are processed and output into various graphs in PDF files.
+Currently, throughput and latency tests are supported.
+"""
+
+import glob
+import json
+import os
+import shutil
+import subprocess
+from argparse import ArgumentParser
+from argparse import ArgumentDefaultsHelpFormatter
+import img2pdf
+import pandas as pd
+import plotly.express as px
+
+SCRIPT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "")
+GRAPH_DIR = "temp_graphs"
+
+
+class Grapher:
+    """Grapher object containing all graphing functions. """
+    def __init__(self, config, suite, graph_path):
+        self.graph_num = 0
+        self.graph_path = graph_path
+        self.suite = suite
+        self.config = config
+        self.test = ""
+        self.ptest = ""
+        self.data = pd.DataFrame()
+
+    def save_graph(self, fig, subdir):
+        """
+        Update figure layout to increase readability, output to JPG file.
+        """
+        path = os.path.join(self.graph_path, subdir, "")
+        if not os.path.exists(path):
+            os.makedirs(path)
+        fig.update_layout(font_size=30, title_x=0.5, title_font={"size": 25},
+                          margin={'t': 300, 'l': 150, 'r': 150, 'b': 150})
+        fig.write_image(path + "%d.jpg" % self.graph_num)
+
+    def boxplot_graph(self, x_axis_label, burst, buffer):
+        """Plot a boxplot graph for the given parameters."""
+        fig = px.box(self.data, x=x_axis_label,
+                     title="Config: " + self.config + "<br>Test Suite: " +
+                     self.suite + "<br>" + self.test +
+                     "<br>(Outliers Included)<br>Burst Size: " + burst +
+                     ", Buffer Size: " + buffer,
+                     height=1400, width=2400)
+        self.save_graph(fig, x_axis_label.replace(' ', '_'))
+        self.graph_num += 1
+
+    def grouped_graph(self, y_axis_label, x_axis_label, color_label):
+        """Plot a grouped barchart using the given parameters."""
+        if (self.data[y_axis_label] == 0).all():
+            return
+        fig = px.bar(self.data, x=x_axis_label, color=color_label,
+                     y=y_axis_label,
+                     title="Config: " + self.config + "<br>Test Suite: " +
+                     self.suite + "<br>" + self.test + "<br>"
+                     + y_axis_label + " for each " + x_axis_label +
+                     "/" + color_label, barmode="group", height=1400,
+                     width=2400)
+        fig.update_xaxes(type='category')
+        self.save_graph(fig, y_axis_label.replace(' ', '_'))
+        self.graph_num += 1
+
+    def histogram_graph(self, x_axis_label, burst, buffer):
+        """Plot a histogram graph using the given parameters."""
+        quart1 = self.data[x_axis_label].quantile(0.25)
+        quart3 = self.data[x_axis_label].quantile(0.75)
+        inter_quart_range = quart3 - quart1
+        data_out = self.data[~((self.data[x_axis_label] <
+                                (quart1 - 1.5 * inter_quart_range)) |
+                               (self.data[x_axis_label] >
+                                (quart3 + 1.5 * inter_quart_range)))]
+        fig = px.histogram(data_out, x=x_axis_label,
+                           title="Config: " + self.config + "<br>Test Suite: "
+                           + self.suite + "<br>" + self.test
+                           + "<br>(Outliers removed using Interquartile Range)"
+                           + "<br>Burst Size: " + burst + ", Buffer Size: " +
+                           buffer, height=1400, width=2400)
+        max_val = data_out[x_axis_label].max()
+        min_val = data_out[x_axis_label].min()
+        fig.update_traces(xbins=dict(
+            start=min_val,
+            end=max_val,
+            size=(max_val - min_val) / 200
+        ))
+        self.save_graph(fig, x_axis_label.replace(' ', '_'))
+        self.graph_num += 1
+
+
+def cleanup_throughput_datatypes(data):
+    """Clean up data types of the throughput test results dataframe."""
+    data.columns = data.columns.str.replace('/', ' ')
+    data.columns = data.columns.str.strip()
+    data['Burst Size'] = data['Burst Size'].astype('category')
+    data['Buffer Size(B)'] = data['Buffer Size(B)'].astype('category')
+    data['Failed Enq'] = data['Failed Enq'].astype('int')
+    data['Throughput(Gbps)'] = data['Throughput(Gbps)'].astype('float')
+    data['Ops(Millions)'] = data['Ops(Millions)'].astype('float')
+    data['Cycles Buf'] = data['Cycles Buf'].astype('float')
+    return data
+
+
+def cleanup_latency_datatypes(data):
+    """Clean up data types of the latency test results dataframe."""
+    data.columns = data.columns.str.strip()
+    data = data[['Burst Size', 'Buffer Size', 'time (us)']].copy()
+    data['Burst Size'] = data['Burst Size'].astype('category')
+    data['Buffer Size'] = data['Buffer Size'].astype('category')
+    data['time (us)'] = data['time (us)'].astype('float')
+    return data
+
+
+def process_test_results(grapher, data):
+    """
+    Process results from the test case,
+    calling graph functions to output graph images.
+    """
+    if grapher.ptest == "throughput":
+        grapher.data = cleanup_throughput_datatypes(data)
+        for y_label in ["Throughput(Gbps)", "Ops(Millions)",
+                        "Cycles Buf", "Failed Enq"]:
+            grapher.grouped_graph(y_label, "Buffer Size(B)",
+                                  "Burst Size")
+    elif grapher.ptest == "latency":
+        clean_data = cleanup_latency_datatypes(data)
+        for (burst, buffer), group in clean_data.groupby(['Burst Size',
+                                                          'Buffer Size']):
+            grapher.data = group
+            grapher.histogram_graph("time (us)", burst, buffer)
+            grapher.boxplot_graph("time (us)", burst, buffer)
+    else:
+        print("Invalid ptest")
+        return
+
+
+def create_results_pdf(graph_path, pdf_path):
+    """Output results graphs to PDFs."""
+    if not os.path.exists(pdf_path):
+        os.makedirs(pdf_path)
+    for _, dirs, _ in os.walk(graph_path):
+        for sub in dirs:
+            graphs = sorted(glob.glob(os.path.join(graph_path, sub, "*.jpg")),
+                            key=(lambda x: int((x.rsplit('/', 1)[1])
+                                               .split('.')[0])))
+            if graphs:
+                with open(pdf_path + "%s_results.pdf" % sub, "wb") as pdf_file:
+                    pdf_file.write(img2pdf.convert(graphs))
+
+
+def run_test(test_cmd, test, grapher, params, verbose):
+    """Run performance test app for the given test case parameters."""
+    process = subprocess.Popen(["stdbuf", "-oL", test_cmd] + params,
+                               universal_newlines=True,
+                               stdout=subprocess.PIPE,
+                               stderr=subprocess.STDOUT)
+    rows = []
+    columns = []
+    if verbose:
+        print("\n\tOutput for " + test + ":")
+    while process.poll() is None:
+        line = process.stdout.readline().strip()
+        if not line:
+            continue
+        if verbose:
+            print("\t\t>>" + line)
+
+        if line.replace(' ', '').startswith('#lcore'):
+            columns = line[1:].split(',')
+        elif line[0].isdigit():
+            line = line.replace(';', ',')
+            rows.append(line.split(','))
+        else:
+            continue
+
+    if process.poll() != 0 or not columns or not rows:
+        print("\n\t" + test + ": FAIL")
+        return
+    data = pd.DataFrame(rows, columns=columns)
+    grapher.test = test
+    process_test_results(grapher, data)
+    print("\n\t" + test + ": OK")
+    return
+
+
+def run_test_suite(test_cmd, suite_config, verbose):
+    """Parse test cases for the test suite and run each test."""
+    print("\nRunning Test Suite: " + suite_config['suite'])
+    default_params = []
+    graph_path = os.path.join(suite_config['output_path'], GRAPH_DIR,
+                              suite_config['suite'], "")
+    grapher = Grapher(suite_config['config_name'], suite_config['suite'],
+                      graph_path)
+    test_cases = suite_config['test_cases']
+    if 'default' not in test_cases:
+        print("Test Suite must contain default case, skipping")
+        return
+    for (key, val) in test_cases['default']['eal'].items():
+        if len(key) == 1:
+            default_params.append("-" + key + " " + val)
+        else:
+            default_params.append("--" + key + "=" + val)
+
+    default_params.append("--")
+    for (key, val) in test_cases['default']['app'].items():
+        if isinstance(val, bool):
+            default_params.append("--" + key if val is True else "")
+        else:
+            default_params.append("--" + key + "=" + val)
+
+    if 'ptest' not in test_cases['default']['app']:
+        print("Test Suite must contain default ptest value, skipping")
+        return
+    grapher.ptest = test_cases['default']['app']['ptest']
+
+    for (test, params) in {k: v for (k, v) in test_cases.items() if
+                           k != "default"}.items():
+        extra_params = []
+        for (key, val) in params.items():
+            if isinstance(val, bool):
+                extra_params.append("--" + key if val is True else "")
+            else:
+                extra_params.append("--" + key + "=" + val)
+
+        run_test(test_cmd, test, grapher, default_params + extra_params,
+                 verbose)
+
+    create_results_pdf(graph_path, os.path.join(suite_config['output_path'],
+                                                suite_config['suite'], ""))
+
+
+def parse_args():
+    """Parse command-line arguments passed to script."""
+    parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
+    parser.add_argument('config_path', type=str,
+                        help="Path to JSON configuration file")
+    parser.add_argument('-t', '--test-suites', nargs='+', default=["all"],
+                        help="List of test suites to run")
+    parser.add_argument('-v', '--verbose', action='store_true',
+                        help="""Display perf test app output.
+                        Not recommended for latency tests.""")
+    parser.add_argument('-f', '--file-path',
+                        default=shutil.which('dpdk-test-crypto-perf'),
+                        help="Path for perf test app")
+    parser.add_argument('-o', '--output-path', default=SCRIPT_PATH,
+                        help="Path to store output directories")
+    args = parser.parse_args()
+    return (args.file_path, args.test_suites, args.config_path,
+            args.output_path, args.verbose)
+
+
+def main():
+    """
+    Load JSON config and call relevant functions to run chosen test suites.
+    """
+    test_cmd, test_suites, config_file, output_path, verbose = parse_args()
+    if test_cmd is None or not os.path.isfile(test_cmd):
+        print("Invalid filepath for perf test app!")
+        return
+    try:
+        with open(config_file) as conf:
+            test_suite_ops = json.load(conf)
+            config_name = os.path.splitext(config_file)[0]
+            if '/' in config_name:
+                config_name = config_name.rsplit('/', 1)[1]
+            output_path = os.path.join(output_path, config_name, "")
+            print("Using config: " + config_file)
+    except OSError as err:
+        print("Error with JSON file path: " + err.strerror)
+        return
+    except json.decoder.JSONDecodeError as err:
+        print("Error loading JSON config: " + err.msg)
+        return
+
+    if test_suites != ["all"]:
+        suite_list = []
+        for (suite, test_cases) in {k: v for (k, v) in test_suite_ops.items()
+                                    if k in test_suites}.items():
+            suite_list.append(suite)
+            suite_config = {'config_name': config_name, 'suite': suite,
+                            'test_cases': test_cases,
+                            'output_path': output_path}
+            run_test_suite(test_cmd, suite_config, verbose)
+        if not suite_list:
+            print("No valid test suites chosen!")
+            return
+    else:
+        for (suite, test_cases) in test_suite_ops.items():
+            suite_config = {'config_name': config_name, 'suite': suite,
+                            'test_cases': test_cases,
+                            'output_path': output_path}
+            run_test_suite(test_cmd, suite_config, verbose)
+
+    graph_path = os.path.join(output_path, GRAPH_DIR, "")
+    if os.path.exists(graph_path):
+        shutil.rmtree(graph_path)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 79359fe894..86c5a8aa16 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -453,3 +453,146 @@ Test vector file for cipher algorithm aes cbc 256 with authorization sha::
    digest =
    0x1C, 0xB2, 0x3D, 0xD1, 0xF9, 0xC7, 0x6C, 0x49, 0x2E, 0xDA, 0x94, 0x8B, 0xF1, 0xCF, 0x96, 0x43,
    0x67, 0x50, 0x39, 0x76, 0xB5, 0xA1, 0xCE, 0xA1, 0xD7, 0x77, 0x10, 0x07, 0x43, 0x37, 0x05, 0xB4
+
+
+Graph Crypto Perf Results
+-------------------------
+
+The ``dpdk-graph-crypto-perf.py`` tool is a simple script that automates
+running crypto performance tests and graphing the results.
+It can be found in the ``app/test-crypto-perf/`` directory.
+The output graphs include grouped bar charts for throughput
+tests, and histogram and boxplot graphs for latency tests.
+These are output to PDF files, with one PDF per test suite graph type.
+
+
+Dependencies
+~~~~~~~~~~~~
+
+The following python modules must be installed to run the script:
+
+* img2pdf
+
+* plotly
+
+* kaleido (required by plotly for static image export)
+
+* pandas
+
+
+Test Configuration
+~~~~~~~~~~~~~~~~~~
+
+The test cases run by the script are defined by a JSON config file.
+Some config files can be found in ``app/test-crypto-perf/configs/``,
+or the user may create a new one following the same format as the config files provided.
+
+An example of this format is shown below for one test suite in the ``crypto-perf-aesni-mb.json`` file.
+This shows the required default config for the test suite, and one test case.
+The test case has additional app config that will be combined with
+the default config when running the test case.
+
+.. code-block:: json
+
+   "throughput": {
+       "default": {
+           "eal": {
+               "l": "1,2",
+               "vdev": "crypto_aesni_mb"
+           },
+           "app": {
+               "csv-friendly": true,
+               "buffer-sz": "64,128,256,512,768,1024,1408,2048",
+               "burst-sz": "1,4,8,16,32",
+               "ptest": "throughput",
+               "devtype": "crypto_aesni_mb"
+           }
+        },
+       "AES-CBC-128 SHA1-HMAC auth-then-cipher decrypt": {
+               "cipher-algo": "aes-cbc",
+               "cipher-key-sz": "16",
+               "auth-algo": "sha1-hmac",
+               "optype": "auth-then-cipher",
+               "cipher-op": "decrypt"
+        }
+   }
+
+.. note::
+   The specific test cases only allow modification of app parameters,
+   and not EAL parameters.
+   The default case is required for each test suite in the config file,
+   to specify EAL parameters.
+
+Currently, both throughput and latency ptests are supported for the
+crypto_qat, crypto_aesni_mb and crypto_aesni_gcm devices.
+
+
+Usage
+~~~~~
+
+.. code-block:: console
+
+   ./dpdk-graph-crypto-perf <config_file>
+
+The ``config_file`` positional argument is required to run the script.
+This points to a valid JSON config file containing test suites.
+
+.. code-block:: console
+
+   ./dpdk-graph-crypto-perf configs/crypto-perf-aesni-mb.json
+
+The following are the application optional command-line options:
+
+* ``-h, --help``
+
+    Display usage information and quit
+
+
+* ``-f <file_path>, --file-path <file_path>``
+
+  Provide path to ``dpdk-test-crypto-perf`` application.
+  The script uses the installed app by default.
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf -f <build_dir>/app/dpdk-test-crypto-perf
+
+
+* ``-t <test_suite_list>, --test-suites <test_suite_list>``
+
+  Specify test suites to run. All test suites are run by default.
+
+  To run the crypto-perf-qat latency test suite only:
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf configs/crypto-perf-qat.json -t latency
+
+  To run both the crypto-perf-aesni-mb throughput and latency test suites:
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf configs/crypto-perf-aesni-mb.json -t throughput latency
+
+
+* ``-o <output_path>, --output-path <output_path>``
+
+  Specify directory to use for output files.
+  The default is to use the script's directory.
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf <config_file> -o <output_dir>
+
+
+* ``-v, --verbose``
+
+  Enable verbose output. This displays ``dpdk-test-crypto-perf`` app output in real-time.
+
+  .. code-block:: console
+
+     ./dpdk-graph-crypto-perf <config_file> -v
+
+  .. warning::
+     Latency performance tests have a large amount of output.
+     It is not recommended to use the verbose option for latency tests.
-- 
2.25.1



* [dpdk-dev] [PATCH v3 4/4] maintainers: update crypto perf app maintainers
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
                     ` (2 preceding siblings ...)
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 3/4] test/cryptodev: add script to graph perf results Ciara Power
@ 2021-01-20 17:29   ` Ciara Power
  2021-01-25 18:28   ` [dpdk-dev] [PATCH v3 0/4] add crypto perf test graphing script Akhil Goyal
  4 siblings, 0 replies; 27+ messages in thread
From: Ciara Power @ 2021-01-20 17:29 UTC (permalink / raw)
  To: dev
  Cc: declan.doherty, akhil.goyal, stephen, adamx.dybkowski, thomas,
	Ciara Power

This patch adds a maintainer for the crypto perf test application,
to cover the new perf test graphing script.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index aa973a3960..395f8ec384 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1585,6 +1585,7 @@ F: doc/guides/tools/comp_perf.rst
 
 Crypto performance test application
 M: Declan Doherty <declan.doherty@intel.com>
+M: Ciara Power <ciara.power@intel.com>
 T: git://dpdk.org/next/dpdk-next-crypto
 F: app/test-crypto-perf/
 F: doc/guides/tools/cryptoperf.rst
-- 
2.25.1



* Re: [dpdk-dev] [PATCH v3 0/4] add crypto perf test graphing script
  2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
                     ` (3 preceding siblings ...)
  2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 4/4] maintainers: update crypto perf app maintainers Ciara Power
@ 2021-01-25 18:28   ` Akhil Goyal
  4 siblings, 0 replies; 27+ messages in thread
From: Akhil Goyal @ 2021-01-25 18:28 UTC (permalink / raw)
  To: Ciara Power, dev; +Cc: declan.doherty, stephen, adamx.dybkowski, thomas

> This patchset introduces a python script to run various crypto performance
> test cases, and graph the results in a consumable manner. The test suites
> are configured via JSON file. Some config files are provided,
> or the user may create one. Currently throughput and latency ptests for
> devices crypto_qat, crypto_aesni_mb and crypto_aesni_gcm are supported.
> 
> The final collection of graphs are output in PDF format, with multiple PDFs
> per test suite, one for each graph type.
> 
> Some fixes are included for the throughput performance test and latency
> performance test csv outputs also.
> 
> v3:
>   - Moved script and configs to app/test-crypto-perf directory.
>   - Made changes to documentation and MAINTAINERS to reflect the above
> change.
Series applied to dpdk-next-crypto

Thanks.



end of thread, other threads:[~2021-01-25 18:28 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-11 17:31 [dpdk-dev] [PATCH 0/4] add crypto perf test graphing script Ciara Power
2020-12-11 17:31 ` [dpdk-dev] [PATCH 1/4] test/cryptodev: fix latency test csv output Ciara Power
2020-12-11 17:31 ` [dpdk-dev] [PATCH 2/4] test/cryptodev: improve csv output for perf tests Ciara Power
2021-01-11 15:43   ` Doherty, Declan
2020-12-11 17:31 ` [dpdk-dev] [PATCH 3/4] usertools: add script to graph crypto perf results Ciara Power
2020-12-11 19:35   ` Stephen Hemminger
2021-01-11 16:03   ` Doherty, Declan
2020-12-11 17:31 ` [dpdk-dev] [PATCH 4/4] maintainers: update crypto perf app maintainers Ciara Power
2021-01-14 10:41 ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Ciara Power
2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 1/4] test/cryptodev: fix latency test csv output Ciara Power
2021-01-15  9:42     ` Dybkowski, AdamX
2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 2/4] test/cryptodev: fix csv output format Ciara Power
2021-01-15  9:42     ` Dybkowski, AdamX
2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 3/4] usertools: add script to graph crypto perf results Ciara Power
2021-01-15  9:43     ` Dybkowski, AdamX
2021-01-14 10:41   ` [dpdk-dev] [PATCH v2 4/4] maintainers: update crypto perf app maintainers Ciara Power
2021-01-15 10:13     ` Dybkowski, AdamX
2021-01-15  8:31   ` [dpdk-dev] [PATCH v2 0/4] add crypto perf test graphing script Doherty, Declan
2021-01-15 15:54     ` Akhil Goyal
2021-01-19 17:31       ` Thomas Monjalon
2021-01-19 17:34         ` Akhil Goyal
2021-01-20 17:29 ` [dpdk-dev] [PATCH v3 " Ciara Power
2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 1/4] test/cryptodev: fix latency test csv output Ciara Power
2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 2/4] test/cryptodev: fix csv output format Ciara Power
2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 3/4] test/cryptodev: add script to graph perf results Ciara Power
2021-01-20 17:29   ` [dpdk-dev] [PATCH v3 4/4] maintainers: update crypto perf app maintainers Ciara Power
2021-01-25 18:28   ` [dpdk-dev] [PATCH v3 0/4] add crypto perf test graphing script Akhil Goyal
