DPDK patches and discussions
* [PATCH v1 0/4] [RFC] Testpmd RPC API
@ 2022-04-07 21:47 ohilyard
  2022-04-07 21:47 ` [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler ohilyard
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: ohilyard @ 2022-04-07 21:47 UTC (permalink / raw)
  To: dev; +Cc: Honnappa.Nagarahalli, thomas, Owen Hilyard

From: Owen Hilyard <ohilyard@iol.unh.edu>

    Currently, DTS uses Testpmd for most of its testing. This has been successful in reducing the need to create more test apps, but it has a few drawbacks. First, if some part of DPDK is not exposed via Testpmd or one of the example applications, it is not testable by DTS. This is a situation I’d like to avoid. However, adding new functionality to Testpmd is labor-intensive. Testpmd currently uses a hand-written LL(1) parser (https://en.wikipedia.org/wiki/LL_parser) to parse command-line input. This makes adding new functionality difficult, since the parser is stored as a series of several-thousand-line lookup tables. To look at it another way, 64% of the 52238 lines in Testpmd are related to command-line input in some way. The command-line interface of Testpmd also presents several challenges for the underlying implementation, since it requires that everything a user might want to reference be identified by something that is reasonable to ask a user to type. As of right now, this is handled via either strings or integers. This can be addressed by creating a global registry for objects, but that is extra work I think can be avoided, and it adds more places where things can go wrong.
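To make the maintenance burden concrete, here is a deliberately tiny Python sketch of table-driven command dispatch in the style described above (the real parser is C and vastly larger; the commands and handlers here are invented for illustration):

```python
# Invented sketch: every new command means new entries in lookup
# tables like COMMANDS, rather than just writing a new function.
COMMANDS = {
    ("set", "fwd"): lambda mode: f"fwd mode set to {mode}",
    ("show", "port", "stats"): lambda pid: f"stats for port {pid}",
}

def dispatch(line):
    # Match the longest-known token prefix, pass the rest as arguments.
    tokens = tuple(line.split())
    for prefix, handler in COMMANDS.items():
        n = len(prefix)
        if tokens[:n] == prefix:
            return handler(*tokens[n:])
    raise ValueError(f"unknown command: {line}")
```

Even in this toy form, extending the interface means editing tables rather than exposing a function, which is the core of the maintenance cost.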

This is what DTS running a single command in testpmd looks like right now:
https://drive.google.com/file/d/1hvTcjfVdh8-I3CUNoq6bx82EuNQSK6qW/view?usp=sharing

    This approach has a number of disadvantages. First, it requires assembling all commands as strings inside the test suite and sending them through a full round trip of SSH. This means that any non-trivial command, such as creating an RTE flow, involves a lot of string templating. That normally wouldn’t be a big issue, except that some of the test suites are designed to run hundreds of commands over the course of a test, paying the cost of an SSH round trip for each. Once Testpmd has the commands, it calls the appropriate functions inside DPDK and then prints all of the state to standard out. All of this is sent back to DTS, where the author of the test case then needs to handle all possible outputs of Testpmd, often by declaring the presence of a single word or short phrase in the output as meaning success or failure. In my opinion, this is perfectly fine for humans to interact with, but it causes a lot of issues for automation due to its inherent inflexibility and the less-than-ideal methods of information transfer. This is why I am proposing the creation of an automation-oriented pmd, with a focus on exposing as much of DPDK as possible.
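As a hedged illustration (the command text and output format here are invented stand-ins, not real testpmd syntax), the DTS side today amounts to string templating plus scraping free-form text:

```python
import re

def build_commands(port_id, mode):
    # String templating for every command sent over SSH.
    return f"set fwd {mode}\nstart\nshow port stats {port_id}\n"

def parse_rx_packets(output):
    # Success or failure hinges on finding one phrase in the text
    # that testpmd happened to print; any format change breaks this.
    match = re.search(r"RX-packets:\s*(\d+)", output)
    if match is None:
        raise ValueError("could not find RX-packets in output")
    return int(match.group(1))

fake_output = "  RX-packets: 128  RX-missed: 0  RX-bytes: 8192"
```

With an RPC interface, both the templating and the regex scraping disappear in favor of typed arguments and typed return values.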

https://drive.google.com/file/d/1wj4-RnFPVERCzM8b68VJswAOEI9cg-X8/view?usp=sharing 

	That diagram is a high-level overview of the design, which explicitly excludes implementation details. However, it already shows some benefits. First, making DPDK do something is a normal method call, instead of formatting things into a string. This provides a much better interface for people working in both DTS and DPDK. Second, the ability to return structured data means there won’t be parsers on both sides of the communication anymore. Structured data also allows much more verbosity, since it is no longer an interface designed for humans. If a test case author needs to return the bytes of every received packet back to DTS for comparison with the expected value, they can. If you need to return a pointer for DTS to use later, that becomes reasonable. Simply moving to shuffling structured data around over RPC already provides a lot of benefits.
	The next obvious question is what to use for the implementation. The initial attempt was made using Python on both sides and the standard library xmlrpc module. The RPC aspect of this approach worked very well, with the ability to send arbitrary Python objects back and forth between DTS and the app. However, having Python interact with DPDK has a few issues. First, DPDK is generally very multi-threaded, while the most common implementation of Python, CPython, has no true thread-level parallelism: its global interpreter lock is a global mutex, which makes it very difficult to interact with blocking, multi-threaded code. The other issue is that I was not able to find a binding generator that I felt would be sufficient for DPDK. Many generators assumed sizeof(int) == 4 or had other portability issues, such as assuming GCC or Clang as the C compiler. Others handled only a subset of C, meaning they would throw errors on alignment annotations.
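For reference, the abandoned Python prototype followed roughly this standard-library pattern (the registered function and addresses are illustrative, not from the actual prototype):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Register a plain Python function and call it remotely; basic values
# (ints, strings, lists, dicts) round-trip automatically over XML-RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with ServerProxy(f"http://127.0.0.1:{port}") as proxy:
    result = proxy.add(2, 3)
server.shutdown()
```

The RPC mechanics are this simple on both ends; the blocker was the GIL and the lack of a suitable C binding generator, not the RPC layer itself.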
    Given this, I decided to look for cross-language RPC libraries. Although libraries exist for performing xmlrpc in C, they generally appeared quite difficult to use and required a lot of manual work. The next best option was gRPC. gRPC uses a simple language, protobuf, with a language extension for RPC. It provides code generation that makes it easy to use multiple languages together, since it was developed to make polyglot microservice interaction easier. The only drawback is that it considers C++ good enough for C support. In this case, I was able to easily integrate DPDK with C++, so that isn’t much of a concern. I used C++17 in the attached patches, but the minimum requirement is C++11. If there is concern about modern C++ causing too much mental overhead, a “C with classes” subset of C++ could easily be used. I also added an on-by-default build option requiring a C++ compiler; anyone who does not have a C++ compiler available can turn it off, which disables everything that uses C++, including the application written for this RFC.
    One of the major benefits of gRPC is its asynchronous API, which allows streaming data on both sides of an RPC call. This makes it possible to stream logs back to DTS, to stream large amounts of data from low-memory systems back to DTS for processing, and to have DTS feed DPDK data incrementally, ending a test as soon as it fails. Currently, due to the overhead of sending data to Testpmd, it is common to just send all of the commands over and run everything, since that is much faster when the test passes, but it can cost a lot of time in the event of a failure. There are also optional security features for requiring authentication before allowing code execution. A discussion on whether DTS needs them is warranted, although I personally think they are not worth the effort given the type of environment this sample application is expected to run in.
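The early-termination benefit can be sketched independently of gRPC (all names here are invented; in a real client stream the items would be packets or commands fed to DPDK):

```python
def run_streaming_test(items, check):
    # With a stream, stop feeding data at the first failure instead
    # of paying to send the whole batch up front.
    sent = 0
    for item in items:
        sent += 1
        if not check(item):
            return "fail", sent
    return "pass", sent

# The check fails on the 8th item, so only 8 of 1000 are ever sent.
status, sent = run_streaming_test(range(1000), lambda p: p != 7)
```

This is the inverse of the batch-everything pattern described above: the passing case costs the same, while the failing case returns almost immediately.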
    For this RFC, I ported test-acl because it was mostly self-contained and was something I could run on my laptop. It should be fairly easy to see how this proof of concept could expand to cover more of DPDK, and I think most of the functions currently used in Testpmd could be ported to this approach, saving a lot of development time. However, I would like to see some more interest before I take on a task like that. This will require a lot of work on the DTS side to implement, but it will make it much easier to add new features to DTS.

Owen Hilyard (4):
  app/test-pmd-api: Add C++ Compiler
  app/test-pmd-api: Add POC with gRPC deps
  app/test-pmd-api: Add protobuf file
  app/test-pmd-api: Implementation files for the API

 app/meson.build              |   17 +
 app/test-pmd-api/api.proto   |   12 +
 app/test-pmd-api/api_impl.cc | 1160 ++++++++++++++++++++++++++++++++++
 app/test-pmd-api/api_impl.h  |   10 +
 app/test-pmd-api/main.c      |   11 +
 app/test-pmd-api/meson.build |   96 +++
 meson.build                  |    3 +
 meson_options.txt            |    2 +
 8 files changed, 1311 insertions(+)
 create mode 100644 app/test-pmd-api/api.proto
 create mode 100644 app/test-pmd-api/api_impl.cc
 create mode 100644 app/test-pmd-api/api_impl.h
 create mode 100644 app/test-pmd-api/main.c
 create mode 100644 app/test-pmd-api/meson.build

-- 
2.30.2


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler
  2022-04-07 21:47 [PATCH v1 0/4] [RFC] Testpmd RPC API ohilyard
@ 2022-04-07 21:47 ` ohilyard
  2023-10-02 18:33   ` Stephen Hemminger
  2022-04-07 21:47 ` [PATCH v1 2/4] app/test-pmd-api: Add POC with gRPC deps ohilyard
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: ohilyard @ 2022-04-07 21:47 UTC (permalink / raw)
  To: dev; +Cc: Honnappa.Nagarahalli, thomas, Owen Hilyard

From: Owen Hilyard <ohilyard@iol.unh.edu>

Adds a C++ compiler to the project, currently enabled by default for
ease of testing. Meson lacks a way to optionally probe for a compiler,
and failing to find a compiler for a language always causes a hard
error, so a boolean option is the only workable approach.

Signed-off-by: Owen Hilyard <ohilyard@iol.unh.edu>
---
 meson.build       | 3 +++
 meson_options.txt | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/meson.build b/meson.build
index 937f6110c0..01d47100f2 100644
--- a/meson.build
+++ b/meson.build
@@ -31,6 +31,9 @@ endif
 
 # set up some global vars for compiler, platform, configuration, etc.
 cc = meson.get_compiler('c')
+if get_option('use_cpp')
+    cxx = meson.get_compiler('cpp')
+endif
 dpdk_source_root = meson.current_source_dir()
 dpdk_build_root = meson.current_build_dir()
 dpdk_conf = configuration_data()
diff --git a/meson_options.txt b/meson_options.txt
index 7c220ad68d..9461d194a1 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -48,3 +48,5 @@ option('tests', type: 'boolean', value: true, description:
        'build unit tests')
 option('use_hpet', type: 'boolean', value: false, description:
        'use HPET timer in EAL')
+option('use_cpp', type: 'boolean', value: true, description:
+       'enable components requiring a C++ compiler.')
\ No newline at end of file
-- 
2.30.2



* [PATCH v1 2/4] app/test-pmd-api: Add POC with gRPC deps
  2022-04-07 21:47 [PATCH v1 0/4] [RFC] Testpmd RPC API ohilyard
  2022-04-07 21:47 ` [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler ohilyard
@ 2022-04-07 21:47 ` ohilyard
  2022-04-07 21:47 ` [PATCH v1 3/4] app/test-pmd-api: Add protobuf file ohilyard
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: ohilyard @ 2022-04-07 21:47 UTC (permalink / raw)
  To: dev; +Cc: Honnappa.Nagarahalli, thomas, Owen Hilyard

From: Owen Hilyard <ohilyard@iol.unh.edu>

The new app is disabled if the dependencies are not present, in order to
avoid breaking the build on any system that does not have gRPC
installed. The meson file for the app is heavily derived from
testpmd's.

Signed-off-by: Owen Hilyard <ohilyard@iol.unh.edu>
---
 app/meson.build              | 17 +++++++
 app/test-pmd-api/meson.build | 96 ++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)
 create mode 100644 app/test-pmd-api/meson.build

diff --git a/app/meson.build b/app/meson.build
index 93d8c15032..3dfd5c003e 100644
--- a/app/meson.build
+++ b/app/meson.build
@@ -20,6 +20,23 @@ apps = [
         'test-sad',
 ]
 
+if get_option('use_cpp')
+    protoc = find_program('protoc', required : false)
+    protobuf_dep = dependency('protobuf', required : false)
+    grpc_cpp_plugin = find_program('grpc_cpp_plugin', required: false)
+    grpc_python_plugin = find_program('grpc_python_plugin', required: false)
+    grpc_dep = dependency('grpc', required: false)
+    grpcpp_dep = dependency('grpc++', required: false)
+
+    if protoc.found() and protobuf_dep.found() and grpc_cpp_plugin.found() and grpc_python_plugin.found() and grpc_dep.found() and grpcpp_dep.found()
+        apps += [
+            'test-pmd-api'
+        ]
+    endif
+
+endif
+
+
 default_cflags = machine_args + ['-DALLOW_EXPERIMENTAL_API']
 default_ldflags = []
 if get_option('default_library') == 'static' and not is_windows
diff --git a/app/test-pmd-api/meson.build b/app/test-pmd-api/meson.build
new file mode 100644
index 0000000000..7438098e9d
--- /dev/null
+++ b/app/test-pmd-api/meson.build
@@ -0,0 +1,96 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+# override default name to drop the hyphen
+name = 'testpmd-api'
+cflags += [
+    '-Wno-deprecated-declarations'
+]
+sources += files(
+    'main.c',
+    'api_impl.cc'
+)
+
+ldflags += [
+    '-ldl',
+    '-lgrpc++_reflection',
+]
+
+ext_deps += [protobuf_dep, grpc_dep, grpcpp_dep, dependency('threads')]
+
+if dpdk_conf.has('RTE_HAS_JANSSON')
+    ext_deps += jansson_dep
+endif
+
+deps += ['ethdev', 'cmdline', 'bus_pci']
+if dpdk_conf.has('RTE_CRYPTO_SCHEDULER')
+    deps += 'crypto_scheduler'
+endif
+if dpdk_conf.has('RTE_LIB_BITRATESTATS')
+    deps += 'bitratestats'
+endif
+if dpdk_conf.has('RTE_LIB_BPF')
+    deps += 'bpf'
+endif
+if dpdk_conf.has('RTE_LIB_GRO')
+    deps += 'gro'
+endif
+if dpdk_conf.has('RTE_LIB_GSO')
+    deps += 'gso'
+endif
+if dpdk_conf.has('RTE_LIB_LATENCYSTATS')
+    deps += 'latencystats'
+endif
+if dpdk_conf.has('RTE_LIB_METRICS')
+    deps += 'metrics'
+endif
+if dpdk_conf.has('RTE_LIB_PDUMP')
+    deps += 'pdump'
+endif
+if dpdk_conf.has('RTE_NET_BOND')
+    deps += 'net_bond'
+endif
+if dpdk_conf.has('RTE_NET_BNXT')
+    deps += 'net_bnxt'
+endif
+if dpdk_conf.has('RTE_NET_I40E')
+    deps += 'net_i40e'
+endif
+if dpdk_conf.has('RTE_NET_IXGBE')
+    deps += 'net_ixgbe'
+endif
+if dpdk_conf.has('RTE_NET_DPAA')
+    deps += ['bus_dpaa', 'mempool_dpaa', 'net_dpaa']
+endif
+
+if meson.version().version_compare('>=0.55')
+    grpc_cpp_plugin_path = grpc_cpp_plugin.full_path()
+    grpc_python_plugin_path = grpc_python_plugin.full_path()
+else
+    grpc_cpp_plugin_path = grpc_cpp_plugin.path()
+    grpc_python_plugin_path = grpc_python_plugin.path()
+endif
+
+
+cpp_generator = generator(protoc, 
+                output    : ['@BASENAME@.pb.cc', '@BASENAME@.pb.h', '@BASENAME@.grpc.pb.cc', '@BASENAME@.grpc.pb.h'],
+                arguments : [
+                    '--proto_path=@CURRENT_SOURCE_DIR@',
+                    '--plugin=protoc-gen-grpc=@0@'.format(grpc_cpp_plugin_path), 
+                    '--cpp_out=@BUILD_DIR@',
+                    '--grpc_out=@BUILD_DIR@',
+                    '@INPUT@'
+                ])
+
+python_generator = generator(protoc, 
+                output    : ['@BASENAME@_pb2.py', '@BASENAME@_pb2_grpc.py'],
+                arguments : [
+                    '--proto_path=@CURRENT_SOURCE_DIR@',
+                    '--plugin=protoc-gen-grpc=@0@'.format(grpc_python_plugin_path), 
+                    '--python_out=@BUILD_DIR@',
+                    '--grpc_out=@BUILD_DIR@',
+                    '@INPUT@'
+                ])
+
+sources += cpp_generator.process('api.proto')
+sources += python_generator.process('api.proto')
\ No newline at end of file
-- 
2.30.2



* [PATCH v1 3/4] app/test-pmd-api: Add protobuf file
  2022-04-07 21:47 [PATCH v1 0/4] [RFC] Testpmd RPC API ohilyard
  2022-04-07 21:47 ` [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler ohilyard
  2022-04-07 21:47 ` [PATCH v1 2/4] app/test-pmd-api: Add POC with gRPC deps ohilyard
@ 2022-04-07 21:47 ` ohilyard
  2022-04-07 21:47 ` [PATCH v1 4/4] app/test-pmd-api: Implementation files for the API ohilyard
  2022-04-11 14:27 ` [PATCH v1 0/4] [RFC] Testpmd RPC API Jerin Jacob
  4 siblings, 0 replies; 12+ messages in thread
From: ohilyard @ 2022-04-07 21:47 UTC (permalink / raw)
  To: dev; +Cc: Honnappa.Nagarahalli, thomas, Owen Hilyard

From: Owen Hilyard <ohilyard@iol.unh.edu>

This file contains the gRPC service definitions for the API as it
currently stands.

Signed-off-by: Owen Hilyard <ohilyard@iol.unh.edu>
---
 app/test-pmd-api/api.proto | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 app/test-pmd-api/api.proto

diff --git a/app/test-pmd-api/api.proto b/app/test-pmd-api/api.proto
new file mode 100644
index 0000000000..ba52e379e9
--- /dev/null
+++ b/app/test-pmd-api/api.proto
@@ -0,0 +1,12 @@
+syntax = "proto3";
+import "google/protobuf/empty.proto";
+
+message AclSetupArgs {
+    repeated string args = 1;
+}
+
+service TestpmdAPI {
+    rpc acl_setup (AclSetupArgs) returns (google.protobuf.Empty);
+    rpc acl_search (google.protobuf.Empty) returns (google.protobuf.Empty);
+    rpc acl_cleanup_config (google.protobuf.Empty) returns (google.protobuf.Empty);
+}
\ No newline at end of file
-- 
2.30.2



* [PATCH v1 4/4] app/test-pmd-api: Implementation files for the API
  2022-04-07 21:47 [PATCH v1 0/4] [RFC] Testpmd RPC API ohilyard
                   ` (2 preceding siblings ...)
  2022-04-07 21:47 ` [PATCH v1 3/4] app/test-pmd-api: Add protobuf file ohilyard
@ 2022-04-07 21:47 ` ohilyard
  2022-04-11 14:27 ` [PATCH v1 0/4] [RFC] Testpmd RPC API Jerin Jacob
  4 siblings, 0 replies; 12+ messages in thread
From: ohilyard @ 2022-04-07 21:47 UTC (permalink / raw)
  To: dev; +Cc: Honnappa.Nagarahalli, thomas, Owen Hilyard

From: Owen Hilyard <ohilyard@iol.unh.edu>

As of right now, this is a fairly direct port. As such, most of the main
file from test-acl is present in api_impl.cc. If this proof of concept
expands into a usable application, the acl test helpers can be moved to
another file to keep the service definition file clean. The header file
must remain a C header so that it can be included from the main file.
At this point, the main file is just a stub that starts the RPC server,
but I have left it so that any extensions can be written in C and the
C++ parts of this app can be easily encapsulated.

Signed-off-by: Owen Hilyard <ohilyard@iol.unh.edu>
---
 app/test-pmd-api/api_impl.cc | 1160 ++++++++++++++++++++++++++++++++++
 app/test-pmd-api/api_impl.h  |   10 +
 app/test-pmd-api/main.c      |   11 +
 3 files changed, 1181 insertions(+)
 create mode 100644 app/test-pmd-api/api_impl.cc
 create mode 100644 app/test-pmd-api/api_impl.h
 create mode 100644 app/test-pmd-api/main.c

diff --git a/app/test-pmd-api/api_impl.cc b/app/test-pmd-api/api_impl.cc
new file mode 100644
index 0000000000..6972172598
--- /dev/null
+++ b/app/test-pmd-api/api_impl.cc
@@ -0,0 +1,1160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ * Copyright(c) 2022 University of New Hampshire
+ */
+
+#include <rte_string_fns.h>
+#include <rte_acl.h>
+#include <getopt.h>
+#include <string.h>
+
+#include <rte_cycles.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_ip.h>
+
+#include "api.pb.h"
+#include "api.grpc.pb.h"
+
+/*
+C++ includes
+*/
+#include <string>
+#include <vector>
+#include <grpcpp/ext/proto_server_reflection_plugin.h>
+#include <grpcpp/grpcpp.h>
+#include <grpcpp/health_check_service_interface.h>
+
+#define PRINT_USAGE_START "%s [EAL options] --\n"
+
+#define RTE_LOGTYPE_TESTACL RTE_LOGTYPE_USER1
+
+#define APP_NAME "TESTACL"
+
+#define GET_CB_FIELD(in, fd, base, lim, dlm)                                   \
+    do {                                                                   \
+        unsigned long val;                                             \
+        char *end_fld;                                                 \
+        errno = 0;                                                     \
+        val = strtoul((in), &end_fld, (base));                         \
+        if (errno != 0 || end_fld[0] != (dlm) || val > (lim))          \
+            return -EINVAL;                                        \
+        (fd) = (typeof(fd))val;                                        \
+        (in) = end_fld + 1;                                            \
+    } while (0)
+
+#define OPT_RULE_FILE "rulesf"
+#define OPT_TRACE_FILE "tracef"
+#define OPT_RULE_NUM "rulenum"
+#define OPT_TRACE_NUM "tracenum"
+#define OPT_TRACE_STEP "tracestep"
+#define OPT_SEARCH_ALG "alg"
+#define OPT_BLD_CATEGORIES "bldcat"
+#define OPT_RUN_CATEGORIES "runcat"
+#define OPT_MAX_SIZE "maxsize"
+#define OPT_ITER_NUM "iter"
+#define OPT_VERBOSE "verbose"
+#define OPT_IPV6 "ipv6"
+
+#define TRACE_DEFAULT_NUM 0x10000
+#define TRACE_STEP_MAX 0x1000
+#define TRACE_STEP_DEF 0x100
+
+#define RULE_NUM 0x10000
+
+#define COMMENT_LEAD_CHAR '#'
+
+enum {
+	DUMP_NONE, DUMP_SEARCH, DUMP_PKT, DUMP_MAX
+};
+
+struct acl_alg {
+	const char *name;
+	enum rte_acl_classify_alg alg;
+};
+
+static const struct acl_alg acl_alg[] = {
+		{
+				.name = "scalar",
+				.alg = RTE_ACL_CLASSIFY_SCALAR,
+		},
+		{
+				.name = "sse",
+				.alg = RTE_ACL_CLASSIFY_SSE,
+		},
+		{
+				.name = "avx2",
+				.alg = RTE_ACL_CLASSIFY_AVX2,
+		},
+		{
+				.name = "neon",
+				.alg = RTE_ACL_CLASSIFY_NEON,
+		},
+		{
+				.name = "altivec",
+				.alg = RTE_ACL_CLASSIFY_ALTIVEC,
+		},
+		{
+				.name = "avx512x16",
+				.alg = RTE_ACL_CLASSIFY_AVX512X16,
+		},
+		{
+				.name = "avx512x32",
+				.alg = RTE_ACL_CLASSIFY_AVX512X32,
+		},
+};
+
+static struct {
+	const char *prgname;
+	const char *rule_file;
+	const char *trace_file;
+	size_t max_size;
+	uint32_t bld_categories;
+	uint32_t run_categories;
+	uint32_t nb_rules;
+	uint32_t nb_traces;
+	uint32_t trace_step;
+	uint32_t trace_sz;
+	uint32_t iter_num;
+	uint32_t verbose;
+	uint32_t ipv6;
+	struct acl_alg alg;
+	uint32_t used_traces;
+	void *traces;
+	struct rte_acl_ctx *acx;
+} config = {
+		.prgname = NULL,
+		.rule_file = NULL,
+		.trace_file = NULL,
+		.max_size = 0,
+		.bld_categories = 3,
+		.run_categories = 1,
+		.nb_rules = RULE_NUM,
+		.nb_traces = TRACE_DEFAULT_NUM,
+		.trace_step = TRACE_STEP_DEF,
+		.trace_sz = 0,
+		.iter_num = 1,
+		.verbose = DUMP_MAX,
+		.ipv6 = 0,
+		.alg = {
+				.name = "default",
+				.alg = RTE_ACL_CLASSIFY_DEFAULT,
+		},
+		.used_traces = 0,
+		.traces = NULL,
+		.acx = NULL,
+};
+
+static struct rte_acl_param prm = {
+		.name = APP_NAME,
+		.socket_id = SOCKET_ID_ANY,
+		.rule_size = 0,
+		.max_rule_num = 0,
+};
+
+/*
+ * Rule and trace formats definitions.
+ */
+
+struct ipv4_5tuple {
+	uint8_t proto;
+	uint32_t ip_src;
+	uint32_t ip_dst;
+	uint16_t port_src;
+	uint16_t port_dst;
+};
+
+enum {
+	PROTO_FIELD_IPV4,
+	SRC_FIELD_IPV4,
+	DST_FIELD_IPV4,
+	SRCP_FIELD_IPV4,
+	DSTP_FIELD_IPV4,
+	NUM_FIELDS_IPV4
+};
+
+/*
+ * That effectively defines order of IPV4VLAN classifications:
+ *  - PROTO
+ *  - VLAN (TAG and DOMAIN)
+ *  - SRC IP ADDRESS
+ *  - DST IP ADDRESS
+ *  - PORTS (SRC and DST)
+ */
+enum {
+	RTE_ACL_IPV4VLAN_PROTO,
+	RTE_ACL_IPV4VLAN_VLAN,
+	RTE_ACL_IPV4VLAN_SRC,
+	RTE_ACL_IPV4VLAN_DST,
+	RTE_ACL_IPV4VLAN_PORTS,
+	RTE_ACL_IPV4VLAN_NUM
+};
+
+struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
+		{
+				.type = RTE_ACL_FIELD_TYPE_BITMASK,
+				.size = sizeof(uint8_t),
+				.field_index = PROTO_FIELD_IPV4,
+				.input_index = RTE_ACL_IPV4VLAN_PROTO,
+				.offset = offsetof(struct ipv4_5tuple, proto),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = SRC_FIELD_IPV4,
+				.input_index = RTE_ACL_IPV4VLAN_SRC,
+				.offset = offsetof(struct ipv4_5tuple, ip_src),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = DST_FIELD_IPV4,
+				.input_index = RTE_ACL_IPV4VLAN_DST,
+				.offset = offsetof(struct ipv4_5tuple, ip_dst),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_RANGE,
+				.size = sizeof(uint16_t),
+				.field_index = SRCP_FIELD_IPV4,
+				.input_index = RTE_ACL_IPV4VLAN_PORTS,
+				.offset = offsetof(struct ipv4_5tuple, port_src),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_RANGE,
+				.size = sizeof(uint16_t),
+				.field_index = DSTP_FIELD_IPV4,
+				.input_index = RTE_ACL_IPV4VLAN_PORTS,
+				.offset = offsetof(struct ipv4_5tuple, port_dst),
+		},
+};
+
+#define IPV6_ADDR_LEN 16
+#define IPV6_ADDR_U16 (IPV6_ADDR_LEN / sizeof(uint16_t))
+#define IPV6_ADDR_U32 (IPV6_ADDR_LEN / sizeof(uint32_t))
+
+struct ipv6_5tuple {
+	uint8_t proto;
+	uint32_t ip_src[IPV6_ADDR_U32];
+	uint32_t ip_dst[IPV6_ADDR_U32];
+	uint16_t port_src;
+	uint16_t port_dst;
+};
+
+enum {
+	PROTO_FIELD_IPV6,
+	SRC1_FIELD_IPV6,
+	SRC2_FIELD_IPV6,
+	SRC3_FIELD_IPV6,
+	SRC4_FIELD_IPV6,
+	DST1_FIELD_IPV6,
+	DST2_FIELD_IPV6,
+	DST3_FIELD_IPV6,
+	DST4_FIELD_IPV6,
+	SRCP_FIELD_IPV6,
+	DSTP_FIELD_IPV6,
+	NUM_FIELDS_IPV6
+};
+
+struct rte_acl_field_def ipv6_defs[NUM_FIELDS_IPV6] = {
+		{
+				.type = RTE_ACL_FIELD_TYPE_BITMASK,
+				.size = sizeof(uint8_t),
+				.field_index = PROTO_FIELD_IPV6,
+				.input_index = PROTO_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, proto),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = SRC1_FIELD_IPV6,
+				.input_index = SRC1_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_src[0]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = SRC2_FIELD_IPV6,
+				.input_index = SRC2_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_src[1]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = SRC3_FIELD_IPV6,
+				.input_index = SRC3_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_src[2]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = SRC4_FIELD_IPV6,
+				.input_index = SRC4_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_src[3]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = DST1_FIELD_IPV6,
+				.input_index = DST1_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_dst[0]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = DST2_FIELD_IPV6,
+				.input_index = DST2_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_dst[1]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = DST3_FIELD_IPV6,
+				.input_index = DST3_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_dst[2]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_MASK,
+				.size = sizeof(uint32_t),
+				.field_index = DST4_FIELD_IPV6,
+				.input_index = DST4_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, ip_dst[3]),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_RANGE,
+				.size = sizeof(uint16_t),
+				.field_index = SRCP_FIELD_IPV6,
+				.input_index = SRCP_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, port_src),
+		},
+		{
+				.type = RTE_ACL_FIELD_TYPE_RANGE,
+				.size = sizeof(uint16_t),
+				.field_index = DSTP_FIELD_IPV6,
+				.input_index = SRCP_FIELD_IPV6,
+				.offset = offsetof(struct ipv6_5tuple, port_dst),
+		},
+};
+
+enum {
+	CB_FLD_SRC_ADDR,
+	CB_FLD_DST_ADDR,
+	CB_FLD_SRC_PORT_LOW,
+	CB_FLD_SRC_PORT_DLM,
+	CB_FLD_SRC_PORT_HIGH,
+	CB_FLD_DST_PORT_LOW,
+	CB_FLD_DST_PORT_DLM,
+	CB_FLD_DST_PORT_HIGH,
+	CB_FLD_PROTO,
+	CB_FLD_NUM,
+};
+
+enum {
+	CB_TRC_SRC_ADDR,
+	CB_TRC_DST_ADDR,
+	CB_TRC_SRC_PORT,
+	CB_TRC_DST_PORT,
+	CB_TRC_PROTO,
+	CB_TRC_NUM,
+};
+
+RTE_ACL_RULE_DEF(acl_rule, RTE_ACL_MAX_FIELDS);
+
+static const char cb_port_delim[] = ":";
+
+static char line[LINE_MAX];
+
+#define dump_verbose(lvl, fh, fmt, args...)                                    \
+    do {                                                                   \
+        if ((lvl) <= (int32_t)config.verbose)                          \
+            fprintf(fh, fmt, ##args);                              \
+    } while (0)
+
+/*
+ * Parse ClassBench input trace (test vectors and expected results) file.
+ * Expected format:
+ * <src_ipv4_addr> <space> <dst_ipv4_addr> <space> \
+ * <src_port> <space> <dst_port> <space> <proto>
+ */
+static int parse_cb_ipv4_trace(char *str, struct ipv4_5tuple *v) {
+	int i;
+	char *s, *sp, *in[CB_TRC_NUM];
+	static const char *dlm = " \t\n";
+
+	s = str;
+	for (i = 0; i != RTE_DIM(in); i++) {
+		in[i] = strtok_r(s, dlm, &sp);
+		if (in[i] == NULL)
+			return -EINVAL;
+		s = NULL;
+	}
+
+	GET_CB_FIELD(in[CB_TRC_SRC_ADDR], v->ip_src, 0, UINT32_MAX, 0);
+	GET_CB_FIELD(in[CB_TRC_DST_ADDR], v->ip_dst, 0, UINT32_MAX, 0);
+	GET_CB_FIELD(in[CB_TRC_SRC_PORT], v->port_src, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_TRC_DST_PORT], v->port_dst, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_TRC_PROTO], v->proto, 0, UINT8_MAX, 0);
+
+	/* convert to network byte order. */
+	v->ip_src = rte_cpu_to_be_32(v->ip_src);
+	v->ip_dst = rte_cpu_to_be_32(v->ip_dst);
+	v->port_src = rte_cpu_to_be_16(v->port_src);
+	v->port_dst = rte_cpu_to_be_16(v->port_dst);
+
+	return 0;
+}
+
+/*
+ * Parse IPv6 address, expects the following format:
+ * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X is a hexadecimal digit).
+ */
+static int parse_ipv6_addr(const char *in, const char **end,
+                           uint32_t v[IPV6_ADDR_U32], char dlm) {
+	uint32_t addr[IPV6_ADDR_U16];
+
+	GET_CB_FIELD(in, addr[0], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[1], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[2], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[3], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[4], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[5], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[6], 16, UINT16_MAX, ':');
+	GET_CB_FIELD(in, addr[7], 16, UINT16_MAX, dlm);
+
+	*end = in;
+
+	v[0] = (addr[0] << 16) + addr[1];
+	v[1] = (addr[2] << 16) + addr[3];
+	v[2] = (addr[4] << 16) + addr[5];
+	v[3] = (addr[6] << 16) + addr[7];
+
+	return 0;
+}
+
+static int parse_cb_ipv6_addr_trace(const char *in, uint32_t v[IPV6_ADDR_U32]) {
+	int32_t rc;
+	const char *end;
+
+	rc = parse_ipv6_addr(in, &end, v, 0);
+	if (rc != 0)
+		return rc;
+
+	v[0] = rte_cpu_to_be_32(v[0]);
+	v[1] = rte_cpu_to_be_32(v[1]);
+	v[2] = rte_cpu_to_be_32(v[2]);
+	v[3] = rte_cpu_to_be_32(v[3]);
+
+	return 0;
+}
+
+/*
+ * Parse ClassBench input trace (test vectors and expected results) file.
+ * Expected format:
+ * <src_ipv6_addr> <space> <dst_ipv6_addr> <space> \
+ * <src_port> <space> <dst_port> <space> <proto>
+ */
+static int parse_cb_ipv6_trace(char *str, struct ipv6_5tuple *v) {
+	int32_t i, rc;
+	char *s, *sp, *in[CB_TRC_NUM];
+	static const char *dlm = " \t\n";
+
+	s = str;
+	for (i = 0; i != RTE_DIM(in); i++) {
+		in[i] = strtok_r(s, dlm, &sp);
+		if (in[i] == NULL)
+			return -EINVAL;
+		s = NULL;
+	}
+
+	/* get ip6 src address. */
+	rc = parse_cb_ipv6_addr_trace(in[CB_TRC_SRC_ADDR], v->ip_src);
+	if (rc != 0)
+		return rc;
+
+	/* get ip6 dst address. */
+	rc = parse_cb_ipv6_addr_trace(in[CB_TRC_DST_ADDR], v->ip_dst);
+	if (rc != 0)
+		return rc;
+
+	GET_CB_FIELD(in[CB_TRC_SRC_PORT], v->port_src, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_TRC_DST_PORT], v->port_dst, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_TRC_PROTO], v->proto, 0, UINT8_MAX, 0);
+
+	/* convert to network byte order. */
+	v->port_src = rte_cpu_to_be_16(v->port_src);
+	v->port_dst = rte_cpu_to_be_16(v->port_dst);
+
+	return 0;
+}
+
+/* Bypass comment and empty lines */
+static int skip_line(const char *buf) {
+	uint32_t i;
+
+	for (i = 0; isspace(buf[i]) != 0; i++);
+
+	if (buf[i] == 0 || buf[i] == COMMENT_LEAD_CHAR)
+		return 1;
+
+	return 0;
+}
+
+static void tracef_init(void) {
+	static const char name[] = APP_NAME;
+	FILE *f;
+	size_t sz;
+	uint32_t i, k, n;
+	struct ipv4_5tuple *v;
+	struct ipv6_5tuple *w;
+
+	sz = config.nb_traces * (config.ipv6 ? sizeof(*w) : sizeof(*v));
+	config.traces = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE,
+	                                   SOCKET_ID_ANY);
+	if (config.traces == NULL)
+		rte_exit(EXIT_FAILURE,
+		         "Cannot allocate %zu bytes for "
+		         "requested %u number of trace records\n",
+		         sz, config.nb_traces);
+
+	f = fopen(config.trace_file, "r");
+	if (f == NULL)
+		rte_exit(-EINVAL, "failed to open file: %s\n",
+		         config.trace_file);
+
+	v = (struct ipv4_5tuple *) config.traces;
+	w = (struct ipv6_5tuple *) config.traces;
+	k = 0;
+	n = 0;
+	for (i = 0; n != config.nb_traces; i++) {
+		if (fgets(line, sizeof(line), f) == NULL)
+			break;
+
+		if (skip_line(line) != 0) {
+			k++;
+			continue;
+		}
+
+		n = i - k;
+
+		if (config.ipv6) {
+			if (parse_cb_ipv6_trace(line, w + n) != 0)
+				rte_exit(EXIT_FAILURE,
+				         "%s: failed to parse ipv6 trace "
+				         "record at line %u\n",
+				         config.trace_file, i + 1);
+		} else {
+			if (parse_cb_ipv4_trace(line, v + n) != 0)
+				rte_exit(EXIT_FAILURE,
+				         "%s: failed to parse ipv4 trace "
+				         "record at line %u\n",
+				         config.trace_file, i + 1);
+		}
+	}
+
+	config.used_traces = i - k;
+	fclose(f);
+}
+
+static int parse_ipv6_net(const char *in, struct rte_acl_field field[4]) {
+	int32_t rc;
+	const char *mp;
+	uint32_t i, m, v[4];
+	const uint32_t nbu32 = sizeof(uint32_t) * CHAR_BIT;
+
+	/* get address. */
+	rc = parse_ipv6_addr(in, &mp, v, '/');
+	if (rc != 0)
+		return rc;
+
+	/* get mask. */
+	GET_CB_FIELD(mp, m, 0, CHAR_BIT * sizeof(v), 0);
+
+	/* put all together. */
+	for (i = 0; i != RTE_DIM(v); i++) {
+		if (m >= (i + 1) * nbu32)
+			field[i].mask_range.u32 = nbu32;
+		else
+			field[i].mask_range.u32 =
+					m > (i * nbu32) ? m - (i * nbu32) : 0;
+
+		field[i].value.u32 = v[i];
+	}
+
+	return 0;
+}
+
+static int parse_cb_ipv6_rule(char *str, struct acl_rule *v) {
+	int i, rc;
+	char *s, *sp, *in[CB_FLD_NUM];
+	static const char *dlm = " \t\n";
+
+	/*
+	 * Skip leading '@'
+	 */
+	if (strchr(str, '@') != str)
+		return -EINVAL;
+
+	s = str + 1;
+
+	for (i = 0; i != RTE_DIM(in); i++) {
+		in[i] = strtok_r(s, dlm, &sp);
+		if (in[i] == NULL)
+			return -EINVAL;
+		s = NULL;
+	}
+
+	rc = parse_ipv6_net(in[CB_FLD_SRC_ADDR], v->field + SRC1_FIELD_IPV6);
+	if (rc != 0) {
+		RTE_LOG(ERR, TESTACL,
+		        "failed to read source address/mask: %s\n",
+		        in[CB_FLD_SRC_ADDR]);
+		return rc;
+	}
+
+	rc = parse_ipv6_net(in[CB_FLD_DST_ADDR], v->field + DST1_FIELD_IPV6);
+	if (rc != 0) {
+		RTE_LOG(ERR, TESTACL,
+		        "failed to read destination address/mask: %s\n",
+		        in[CB_FLD_DST_ADDR]);
+		return rc;
+	}
+
+	/* source port. */
+	GET_CB_FIELD(in[CB_FLD_SRC_PORT_LOW],
+	             v->field[SRCP_FIELD_IPV6].value.u16, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_FLD_SRC_PORT_HIGH],
+	             v->field[SRCP_FIELD_IPV6].mask_range.u16, 0, UINT16_MAX,
+	             0);
+
+	if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim,
+	            sizeof(cb_port_delim)) != 0)
+		return -EINVAL;
+
+	/* destination port. */
+	GET_CB_FIELD(in[CB_FLD_DST_PORT_LOW],
+	             v->field[DSTP_FIELD_IPV6].value.u16, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_FLD_DST_PORT_HIGH],
+	             v->field[DSTP_FIELD_IPV6].mask_range.u16, 0, UINT16_MAX,
+	             0);
+
+	if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim,
+	            sizeof(cb_port_delim)) != 0)
+		return -EINVAL;
+
+	GET_CB_FIELD(in[CB_FLD_PROTO], v->field[PROTO_FIELD_IPV6].value.u8, 0,
+	             UINT8_MAX, '/');
+	GET_CB_FIELD(in[CB_FLD_PROTO], v->field[PROTO_FIELD_IPV6].mask_range.u8,
+	             0, UINT8_MAX, 0);
+
+	return 0;
+}
+
+static int parse_ipv4_net(const char *in, uint32_t *addr, uint32_t *mask_len) {
+	uint8_t a, b, c, d, m;
+
+	GET_CB_FIELD(in, a, 0, UINT8_MAX, '.');
+	GET_CB_FIELD(in, b, 0, UINT8_MAX, '.');
+	GET_CB_FIELD(in, c, 0, UINT8_MAX, '.');
+	GET_CB_FIELD(in, d, 0, UINT8_MAX, '/');
+	GET_CB_FIELD(in, m, 0, sizeof(uint32_t) * CHAR_BIT, 0);
+
+	addr[0] = RTE_IPV4(a, b, c, d);
+	mask_len[0] = m;
+
+	return 0;
+}
+
+/*
+ * Parse ClassBench rules file.
+ * Expected format:
+ * '@'<src_ipv4_addr>'/'<masklen> <space> \
+ * <dst_ipv4_addr>'/'<masklen> <space> \
+ * <src_port_low> <space> ":" <src_port_high> <space> \
+ * <dst_port_low> <space> ":" <dst_port_high> <space> \
+ * <proto>'/'<mask>
+ */
+static int parse_cb_ipv4_rule(char *str, struct acl_rule *v) {
+	int i, rc;
+	char *s, *sp, *in[CB_FLD_NUM];
+	static const char *dlm = " \t\n";
+
+	/*
+	 * Skip leading '@'
+	 */
+	if (strchr(str, '@') != str)
+		return -EINVAL;
+
+	s = str + 1;
+
+	for (i = 0; i != RTE_DIM(in); i++) {
+		in[i] = strtok_r(s, dlm, &sp);
+		if (in[i] == NULL)
+			return -EINVAL;
+		s = NULL;
+	}
+
+	rc = parse_ipv4_net(in[CB_FLD_SRC_ADDR],
+	                    &v->field[SRC_FIELD_IPV4].value.u32,
+	                    &v->field[SRC_FIELD_IPV4].mask_range.u32);
+	if (rc != 0) {
+		RTE_LOG(ERR, TESTACL,
+		        "failed to read source address/mask: %s\n",
+		        in[CB_FLD_SRC_ADDR]);
+		return rc;
+	}
+
+	rc = parse_ipv4_net(in[CB_FLD_DST_ADDR],
+	                    &v->field[DST_FIELD_IPV4].value.u32,
+	                    &v->field[DST_FIELD_IPV4].mask_range.u32);
+	if (rc != 0) {
+		RTE_LOG(ERR, TESTACL,
+		        "failed to read destination address/mask: %s\n",
+		        in[CB_FLD_DST_ADDR]);
+		return rc;
+	}
+
+	/* source port. */
+	GET_CB_FIELD(in[CB_FLD_SRC_PORT_LOW],
+	             v->field[SRCP_FIELD_IPV4].value.u16, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_FLD_SRC_PORT_HIGH],
+	             v->field[SRCP_FIELD_IPV4].mask_range.u16, 0, UINT16_MAX,
+	             0);
+
+	if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim,
+	            sizeof(cb_port_delim)) != 0)
+		return -EINVAL;
+
+	/* destination port. */
+	GET_CB_FIELD(in[CB_FLD_DST_PORT_LOW],
+	             v->field[DSTP_FIELD_IPV4].value.u16, 0, UINT16_MAX, 0);
+	GET_CB_FIELD(in[CB_FLD_DST_PORT_HIGH],
+	             v->field[DSTP_FIELD_IPV4].mask_range.u16, 0, UINT16_MAX,
+	             0);
+
+	if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim,
+	            sizeof(cb_port_delim)) != 0)
+		return -EINVAL;
+
+	GET_CB_FIELD(in[CB_FLD_PROTO], v->field[PROTO_FIELD_IPV4].value.u8, 0,
+	             UINT8_MAX, '/');
+	GET_CB_FIELD(in[CB_FLD_PROTO], v->field[PROTO_FIELD_IPV4].mask_range.u8,
+	             0, UINT8_MAX, 0);
+
+	return 0;
+}
+
+typedef int (*parse_5tuple)(char *text, struct acl_rule *rule);
+
+static int add_cb_rules(FILE *f, struct rte_acl_ctx *ctx) {
+	int rc;
+	uint32_t i, k, n;
+	struct acl_rule v;
+	parse_5tuple parser;
+
+	memset(&v, 0, sizeof(v));
+	parser = (config.ipv6 != 0) ? parse_cb_ipv6_rule : parse_cb_ipv4_rule;
+
+	k = 0;
+	for (i = 1; fgets(line, sizeof(line), f) != NULL; i++) {
+		if (skip_line(line) != 0) {
+			k++;
+			continue;
+		}
+
+		n = i - k;
+		rc = parser(line, &v);
+		if (rc != 0) {
+			RTE_LOG(ERR, TESTACL,
+			        "line %u: failed to parse rule, "
+			        "error code: %d (%s)\n",
+			        i, rc, strerror(-rc));
+			return rc;
+		}
+
+		v.data.category_mask = RTE_LEN2MASK(
+				RTE_ACL_MAX_CATEGORIES, typeof(v.data.category_mask));
+		v.data.priority = RTE_ACL_MAX_PRIORITY - n;
+		v.data.userdata = n;
+
+		rc = rte_acl_add_rules(ctx, (struct rte_acl_rule *) &v, 1);
+		if (rc != 0) {
+			RTE_LOG(ERR, TESTACL,
+			        "line %u: failed to add rules "
+			        "into ACL context, error code: %d (%s)\n",
+			        i, rc, strerror(-rc));
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
+static void acx_init(void) {
+	int ret;
+	FILE *f;
+	struct rte_acl_config cfg;
+
+	memset(&cfg, 0, sizeof(cfg));
+
+	/* setup ACL build config. */
+	if (config.ipv6) {
+		cfg.num_fields = RTE_DIM(ipv6_defs);
+		memcpy(&cfg.defs, ipv6_defs, sizeof(ipv6_defs));
+	} else {
+		cfg.num_fields = RTE_DIM(ipv4_defs);
+		memcpy(&cfg.defs, ipv4_defs, sizeof(ipv4_defs));
+	}
+	cfg.num_categories = config.bld_categories;
+	cfg.max_size = config.max_size;
+
+	/* setup ACL creation parameters. */
+	prm.rule_size = RTE_ACL_RULE_SZ(cfg.num_fields);
+	prm.max_rule_num = config.nb_rules;
+
+	config.acx = rte_acl_create(&prm);
+	if (config.acx == NULL)
+		rte_exit(rte_errno, "failed to create ACL context\n");
+
+	/* set default classify method for this context. */
+	if (config.alg.alg != RTE_ACL_CLASSIFY_DEFAULT) {
+		ret = rte_acl_set_ctx_classify(config.acx, config.alg.alg);
+		if (ret != 0)
+			rte_exit(ret,
+			         "failed to setup %s method "
+			         "for ACL context\n",
+			         config.alg.name);
+	}
+
+	/* add ACL rules. */
+	f = fopen(config.rule_file, "r");
+	if (f == NULL)
+		rte_exit(-EINVAL, "failed to open file %s\n", config.rule_file);
+
+	ret = add_cb_rules(f, config.acx);
+	if (ret != 0)
+		rte_exit(ret, "failed to add rules into ACL context\n");
+
+	fclose(f);
+
+	/* perform build. */
+	ret = rte_acl_build(config.acx, &cfg);
+
+	dump_verbose(DUMP_NONE, stdout, "rte_acl_build(%u) finished with %d\n",
+	             config.bld_categories, ret);
+
+	rte_acl_dump(config.acx);
+
+	if (ret != 0)
+		rte_exit(ret, "failed to build search context\n");
+}
+
+static uint32_t search_ip5tuples_once(uint32_t categories, uint32_t step,
+                                      const char *alg) {
+	int ret;
+	uint32_t i, j, k, n, r;
+	const uint8_t *data[step], *v;
+	uint32_t results[step * categories];
+
+	v = (const uint8_t *) config.traces;
+	for (i = 0; i != config.used_traces; i += n) {
+		n = RTE_MIN(step, config.used_traces - i);
+
+		for (j = 0; j != n; j++) {
+			data[j] = v;
+			v += config.trace_sz;
+		}
+
+		ret = rte_acl_classify(config.acx, data, results, n,
+		                       categories);
+
+		if (ret != 0)
+			rte_exit(ret, "classify for ipv%c_5tuples returns %d\n",
+			         config.ipv6 ? '6' : '4', ret);
+
+		for (r = 0, j = 0; j != n; j++) {
+			for (k = 0; k != categories; k++, r++) {
+				dump_verbose(DUMP_PKT, stdout,
+				             "ipv%c_5tuple: %u, category: %u, "
+				             "result: %u\n",
+				             config.ipv6 ? '6' : '4', i + j + 1,
+				             k, results[r] - 1);
+			}
+		}
+	}
+
+	dump_verbose(DUMP_SEARCH, stdout, "%s(%u, %u, %s) returns %u\n",
+	             __func__, categories, step, alg, i);
+	return i;
+}
+
+static int search_ip5tuples(__rte_unused void *arg) {
+	uint64_t pkt, start, tm;
+	uint32_t i, lcore;
+	long double st;
+
+	lcore = rte_lcore_id();
+	start = rte_rdtsc_precise();
+	pkt = 0;
+
+	for (i = 0; i != config.iter_num; i++) {
+		pkt += search_ip5tuples_once(config.run_categories,
+		                             config.trace_step,
+		                             config.alg.name);
+	}
+
+	tm = rte_rdtsc_precise() - start;
+
+	st = (long double) tm / rte_get_timer_hz();
+	dump_verbose(DUMP_NONE, stdout,
+	             "%s  @lcore %u: %" PRIu32 " iterations, %" PRIu64
+			             " pkts, %" PRIu32 " categories, %" PRIu64
+			             " cycles (%.2Lf sec), "
+			             "%.2Lf cycles/pkt, %.2Lf pkt/sec\n",
+	             __func__, lcore, i, pkt, config.run_categories, tm, st,
+	             (pkt == 0) ? 0 : (long double) tm / pkt, pkt / st);
+
+	return 0;
+}
+
+static unsigned long get_ulong_opt(const char *opt, const char *name,
+                                   size_t min, size_t max) {
+	unsigned long val;
+	char *end;
+
+	errno = 0;
+	val = strtoul(opt, &end, 0);
+	if (errno != 0 || end[0] != 0 || val > max || val < min)
+		rte_exit(-EINVAL, "invalid value: \"%s\" for option: %s\n", opt,
+		         name);
+	return val;
+}
+
+static void get_alg_opt(const char *opt, const char *name) {
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(acl_alg); i++) {
+		if (strcmp(opt, acl_alg[i].name) == 0) {
+			config.alg = acl_alg[i];
+			return;
+		}
+	}
+
+	rte_exit(-EINVAL, "invalid value: \"%s\" for option: %s\n", opt, name);
+}
+
+static void print_usage(const char *prgname) {
+	uint32_t i, n, rc;
+	char buf[PATH_MAX];
+
+	n = 0;
+	buf[0] = 0;
+
+	for (i = 0; i < RTE_DIM(acl_alg) - 1; i++) {
+		rc = snprintf(buf + n, sizeof(buf) - n, "%s|", acl_alg[i].name);
+		if (rc > sizeof(buf) - n)
+			break;
+		n += rc;
+	}
+
+	strlcpy(buf + n, acl_alg[i].name, sizeof(buf) - n);
+
+	fprintf(stdout,
+	        PRINT_USAGE_START
+	        "--" OPT_RULE_FILE "=<rules set file>\n"
+	        "[--" OPT_TRACE_FILE "=<input traces file>]\n"
+	        "[--" OPT_RULE_NUM
+	        "=<maximum number of rules for ACL context>]\n"
+	        "[--" OPT_TRACE_NUM
+	        "=<number of traces to read binary file in>]\n"
+	        "[--" OPT_TRACE_STEP
+	        "=<number of traces to classify per one call>]\n"
+	        "[--" OPT_BLD_CATEGORIES
+	        "=<number of categories to build with>]\n"
+	        "[--" OPT_RUN_CATEGORIES "=<number of categories to run with> "
+	        "should be either 1 or multiple of %zu, "
+	        "but not greater than %u]\n"
+	        "[--" OPT_MAX_SIZE
+	        "=<size limit (in bytes) for runtime ACL structures> "
+	        "leave 0 for default behaviour]\n"
+	        "[--" OPT_ITER_NUM "=<number of iterations to perform>]\n"
+	        "[--" OPT_VERBOSE "=<verbose level>]\n"
+	        "[--" OPT_SEARCH_ALG "=%s]\n"
+	        "[--" OPT_IPV6 "=<IPv6 rules and trace files>]\n",
+	        prgname, RTE_ACL_RESULTS_MULTIPLIER,
+	        (uint32_t) RTE_ACL_MAX_CATEGORIES, buf);
+}
+
+static void dump_config(FILE *f) {
+	fprintf(f, "%s:\n", __func__);
+	fprintf(f, "%s:%s\n", OPT_RULE_FILE, config.rule_file);
+	fprintf(f, "%s:%s\n", OPT_TRACE_FILE, config.trace_file);
+	fprintf(f, "%s:%u\n", OPT_RULE_NUM, config.nb_rules);
+	fprintf(f, "%s:%u\n", OPT_TRACE_NUM, config.nb_traces);
+	fprintf(f, "%s:%u\n", OPT_TRACE_STEP, config.trace_step);
+	fprintf(f, "%s:%u\n", OPT_BLD_CATEGORIES, config.bld_categories);
+	fprintf(f, "%s:%u\n", OPT_RUN_CATEGORIES, config.run_categories);
+	fprintf(f, "%s:%zu\n", OPT_MAX_SIZE, config.max_size);
+	fprintf(f, "%s:%u\n", OPT_ITER_NUM, config.iter_num);
+	fprintf(f, "%s:%u\n", OPT_VERBOSE, config.verbose);
+	fprintf(f, "%s:%u(%s)\n", OPT_SEARCH_ALG, config.alg.alg,
+	        config.alg.name);
+	fprintf(f, "%s:%u\n", OPT_IPV6, config.ipv6);
+}
+
+static void check_config(void) {
+	if (config.rule_file == NULL) {
+		print_usage(config.prgname);
+		rte_exit(-EINVAL, "mandatory option %s is not specified\n",
+		         OPT_RULE_FILE);
+	}
+}
+
+static void get_input_opts(int argc, char **argv) {
+	static struct option lgopts[] = {{OPT_RULE_FILE,      1, 0, 0},
+	                                 {OPT_TRACE_FILE,     1, 0, 0},
+	                                 {OPT_TRACE_NUM,      1, 0, 0},
+	                                 {OPT_RULE_NUM,       1, 0, 0},
+	                                 {OPT_MAX_SIZE,       1, 0, 0},
+	                                 {OPT_TRACE_STEP,     1, 0, 0},
+	                                 {OPT_BLD_CATEGORIES, 1, 0, 0},
+	                                 {OPT_RUN_CATEGORIES, 1, 0, 0},
+	                                 {OPT_ITER_NUM,       1, 0, 0},
+	                                 {OPT_VERBOSE,        1, 0, 0},
+	                                 {OPT_SEARCH_ALG,     1, 0, 0},
+	                                 {OPT_IPV6,           0, 0, 0},
+	                                 {NULL,               0, 0, 0}};
+
+	int opt, opt_idx;
+
+	while ((opt = getopt_long(argc, argv, "", lgopts, &opt_idx)) != EOF) {
+		if (opt != 0) {
+			print_usage(config.prgname);
+			rte_exit(-EINVAL, "unknown option: %c", opt);
+		}
+
+		if (strcmp(lgopts[opt_idx].name, OPT_RULE_FILE) == 0) {
+			config.rule_file = optarg;
+		} else if (strcmp(lgopts[opt_idx].name, OPT_TRACE_FILE) == 0) {
+			config.trace_file = optarg;
+		} else if (strcmp(lgopts[opt_idx].name, OPT_RULE_NUM) == 0) {
+			config.nb_rules =
+					get_ulong_opt(optarg, lgopts[opt_idx].name, 1,
+					              RTE_ACL_MAX_INDEX + 1);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_MAX_SIZE) == 0) {
+			config.max_size = get_ulong_opt(
+					optarg, lgopts[opt_idx].name, 0, SIZE_MAX);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_TRACE_NUM) == 0) {
+			config.nb_traces = get_ulong_opt(
+					optarg, lgopts[opt_idx].name, 1, UINT32_MAX);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_TRACE_STEP) == 0) {
+			config.trace_step =
+					get_ulong_opt(optarg, lgopts[opt_idx].name, 1,
+					              TRACE_STEP_MAX);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_BLD_CATEGORIES) ==
+		           0) {
+			config.bld_categories =
+					get_ulong_opt(optarg, lgopts[opt_idx].name, 1,
+					              RTE_ACL_MAX_CATEGORIES);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_RUN_CATEGORIES) ==
+		           0) {
+			config.run_categories =
+					get_ulong_opt(optarg, lgopts[opt_idx].name, 1,
+					              RTE_ACL_MAX_CATEGORIES);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_ITER_NUM) == 0) {
+			config.iter_num = get_ulong_opt(
+					optarg, lgopts[opt_idx].name, 1, INT32_MAX);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_VERBOSE) == 0) {
+			config.verbose =
+					get_ulong_opt(optarg, lgopts[opt_idx].name,
+					              DUMP_NONE, DUMP_MAX);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_SEARCH_ALG) == 0) {
+			get_alg_opt(optarg, lgopts[opt_idx].name);
+		} else if (strcmp(lgopts[opt_idx].name, OPT_IPV6) == 0) {
+			config.ipv6 = 1;
+		}
+	}
+	config.trace_sz = config.ipv6 ? sizeof(struct ipv6_5tuple) :
+	                  sizeof(struct ipv4_5tuple);
+}
+
+class TestpmdAPIImpl final : public ::TestpmdAPI::Service {
+public:
+	virtual ::grpc::Status acl_setup([[maybe_unused]] ::grpc::ServerContext *context,
+	                                 const ::AclSetupArgs *request,
+	                                 ::google::protobuf::Empty *response) {
+
+		/*
+		 * Copy the EAL/app arguments out of the request. EAL (and
+		 * config.prgname below) may keep pointers past this call, so
+		 * the strings must outlive the RPC; the vector and the
+		 * strdup()'d strings are intentionally never freed, since
+		 * setup runs once per process.
+		 */
+		auto *args = new std::vector<char *>();
+
+		for (const auto &arg : request->args())
+			args->push_back(strdup(arg.c_str()));
+
+		/* argv must be NULL-terminated. */
+		args->push_back(nullptr);
+
+		int ret = rte_eal_init(args->size() - 1, args->data());
+		if (ret < 0)
+			rte_panic("Cannot init EAL\n");
+
+		config.prgname = args->at(0);
+
+		get_input_opts(args->size() - ret - 1, &(*args)[ret]);
+		dump_config(stdout);
+		check_config();
+
+		acx_init();
+
+		if (config.trace_file != nullptr)
+			tracef_init();
+
+		*response = google::protobuf::Empty();
+
+		return grpc::Status::OK;
+	}
+
+	virtual ::grpc::Status
+	acl_search([[maybe_unused]] ::grpc::ServerContext *context,
+	           [[maybe_unused]] const ::google::protobuf::Empty *request,
+	           ::google::protobuf::Empty *response) {
+		uint32_t lcore;
+
+		RTE_LCORE_FOREACH_WORKER(lcore)
+			rte_eal_remote_launch(search_ip5tuples, NULL, lcore);
+
+		search_ip5tuples(NULL);
+
+		rte_eal_mp_wait_lcore();
+
+		*response = google::protobuf::Empty();
+
+		return grpc::Status::OK;
+	}
+
+	virtual ::grpc::Status
+	acl_cleanup_config([[maybe_unused]] ::grpc::ServerContext *context,
+	                   [[maybe_unused]] const ::google::protobuf::Empty *request,
+	                   ::google::protobuf::Empty *response) {
+		rte_acl_free(config.acx);
+
+		*response = google::protobuf::Empty();
+		return grpc::Status::OK;
+	}
+};
+
+extern "C" {
+void rte_testpmd_run() {
+#ifdef RTE_RPC_SERVER_PORT
+	std::string address = "0.0.0.0:" RTE_RPC_SERVER_PORT;
+#else
+	std::string address = "0.0.0.0:8000";
+#endif
+
+	grpc::EnableDefaultHealthCheckService(true);
+	grpc::reflection::InitProtoReflectionServerBuilderPlugin();
+
+	/* The service must outlive the server; Wait() blocks until shutdown. */
+	TestpmdAPIImpl service;
+
+	grpc::ServerBuilder builder;
+	builder.AddListeningPort(address, grpc::InsecureServerCredentials());
+	builder.RegisterService(&service);
+	RTE_LOG(INFO, PMD, "Starting server on %s\n", address.c_str());
+	std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
+	server->Wait();
+}
+}
diff --git a/app/test-pmd-api/api_impl.h b/app/test-pmd-api/api_impl.h
new file mode 100644
index 0000000000..5e81aa9804
--- /dev/null
+++ b/app/test-pmd-api/api_impl.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 University of New Hampshire
+ */
+
+/*
+ * Despite this header being implemented by a C++ file, all declarations
+ * must remain ISO C. In addition, any members exported from C++ must be
+ * in an extern "C" block.
+ */
+
+void rte_testpmd_run(void);
\ No newline at end of file
diff --git a/app/test-pmd-api/main.c b/app/test-pmd-api/main.c
new file mode 100644
index 0000000000..b69d2df814
--- /dev/null
+++ b/app/test-pmd-api/main.c
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 University of New Hampshire
+ */
+#include "api_impl.h"
+
+int
+main()
+{
+	rte_testpmd_run();
+	return 0;
+}
-- 
2.30.2


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
  2022-04-07 21:47 [PATCH v1 0/4] [RFC] Testpmd RPC API ohilyard
                   ` (3 preceding siblings ...)
  2022-04-07 21:47 ` [PATCH v1 4/4] app/test-pmd-api: Implementation files for the API ohilyard
@ 2022-04-11 14:27 ` Jerin Jacob
  2022-04-11 17:48   ` Owen Hilyard
  4 siblings, 1 reply; 12+ messages in thread
From: Jerin Jacob @ 2022-04-11 14:27 UTC (permalink / raw)
  To: ohilyard; +Cc: dpdk-dev, Honnappa Nagarahalli, Thomas Monjalon

On Fri, Apr 8, 2022 at 3:17 AM <ohilyard@iol.unh.edu> wrote:
>
> From: Owen Hilyard <ohilyard@iol.unh.edu>
>
>     Currently, DTS uses Testpmd for most of its testing. This has been successful in reducing the need to create more test apps, but it has a few drawbacks. First, if some part of DPDK is not exposed via Testpmd or one of the example applications, for the purposes of DTS it is not testable. This is a situation I’d like to avoid. However, adding new functionality to Testpmd is labor-intensive. Testpmd currently uses a hand-written LL(1) parser (https://en.wikipedia.org/wiki/LL_parser) to parse command line options. This makes adding new functionality difficult since the parser is stored as a series of several thousand line long lookup tables. To look at it another way, 64% of the 52238 lines in Testpmd are related to command line input in some way. The command line interface of testpmd also presents several challenges for the underlying implementation, since it requires that everything a user might want to reference is identified via something that is reasonable to ask a user to type. As of right now, this is handled via either strings or integers. This can be handled by creating a global registry for objects, but it is still extra work that I think can be avoided. In addition, this leads to more places where things can go wrong.
>
> This is what DTS running a single command in testpmd looks like right now:
> https://drive.google.com/file/d/1hvTcjfVdh8-I3CUNoq6bx82EuNQSK6qW/view?usp=sharing
>
>     This approach has a number of disadvantages. First, it requires assembling all commands as strings inside of the test suite and sending them through a full round trip of SSH. This means that any non-trivial command, such as creating an RTE flow, will involve a lot of string templating. This normally wouldn’t be a big issue, except that some of the test suites are designed to hundreds of commands over the course of a test, paying the cost of an SSH round trip for each. Once Testpmd has the commands, it will then call the appropriate functions inside of DPDK, and then print out all of the state to standard out. All of this is sent back to DTS, where the author of the test case then needs to handle all possible outputs of Trex, often by either declaring the presence of a single word or short phrase in the output as meaning success or failure. In my opinion, this is something that is perfectly fine for humans to interact with, but it causes a lot of issues with automation due to its inherent inflexibility and the less-than-ideal methods of information transfer. This is why I am proposing the creation of an automation-oriented pmd, with a focus on exposing as much.
>
> https://drive.google.com/file/d/1wj4-RnFPVERCzM8b68VJswAOEI9cg-X8/view?usp=sharing
>
>         That diagram is a high-level overview of the design, which explicitly excludes implementation details. However, it already has some benefits. First, making DPDK do something is a normal method call, instead of needing to format things into a string. This provides a much better interface for people working in both DTS and DPDK. Second, the ability to return structured data means that there won’t be parsers on both sides of communication anymore. Structured data also allows much more verbosity, since it is no longer an interface designed for humans. If a test case author needs to return the bytes of every received packet back to DTS for comparison with the expected value, they can. If you need to return a pointer for DTS to use later, that becomes reasonable. Simply moving to shuffling structured data around and using RPC already provides a lot of benefits.
>         The next obvious question would be what to use for the implementation. The initial attempt was made using Python on both sides and the standard library xmlrpc module. The RPC aspect of this approach worked very well, with the ability to send arbitrary python objects back and forth between DTS and app. However, having Python interacting with DPDK has a few issues. First, DPDK is generally very multi-threaded and the most common implementation of Python, CPython, does not have concurrency. It has something known as the global interpretr lock, which is a global mutex. This makes it very difficult to interact with blocking, multi-threaded code. The other issue is that I was not able to find a binding generator that I feel would be sufficient for DPDK. Many generators assumed sizeof(int) == 4 or had other portability issues such as assuming GCC or Clang as a C compiler. Others focused on some subset of C, meaning they would throw errors on alignment annotations.
>     Given this, I decided to look for cross-language RPC libraries. Although libraries exist for performing xmlrpc in C, they generally appeared quite difficult to use and required a lot of manual work. The next best option was gRPC. gRPC allows using a simple language, protobuf, with a language extension for rpc. It provides code generation to make it easy to use multiple languages together, since it was developed to make polyglot microservice interaction easier. The only drawback is that it considers C++ good enough for C support. In this case, I was able to easily integrate DPDK with C++, so that isn’t much of a concern. I used C++17 in the attached patches, but the minimum requirements are C++11. If there is concern about modern C++ causing too much mental overhead, a “C with classes” subset of C++ could easily be used. I also added an on-by-default option to use a C++ compiler, allowing anyone who does not have a C++ compiler available to them to turn off everything that uses C++. This disables the application written for this RFC.
>     One of the major benefits of gRPC is the asynchronous API. This allows streaming data on both sides of an RPC call. This allows streaming logs back to DTS, streaming large amounts of data from low-memory systems back to DTS for processing, and would allow DTS to easily feed DPDK data, ending the test quickly on a failure. Currently, due to the overhead of sending data to Testpmd, it is common to just send all of the commands over and run everything since that will be much faster when the test passes, but it can cost a lot of time in the event of a failure. There are also optional security features for requiring authentication before allowing code execution. I think that a discussion on whether using them for DTS is necessary is warranted, although I personally think that it’s not worth the effort given the type of environment this sample application is expected to run in.
>     For this RFC, I ported test-acl because it was mostly self-contained and was something I could run on my laptop. It should be fairly easy to see how you would expand this proof of concept to cover more of DPDK, and I think that most of the functions currently used in testpmd could be ported over to this approach, saving a lot of development time. However, I would like to see some more interest before I take on a task like that. This will require a lot of work on the DTS side to implement, but it will make it much easier to add new features to DTS.


Thanks, Owen, for the POC.

In my view, using this scheme is probably over-engineered. The
reasons for thinking so are:
- Now that the test code is also part of DPDK, exposing it as
services may not be required.
- DPDK already has two types of test cases to verify the API:
-- Noninteractive - these can simply be run over ssh with a bash
invocation from a remote PC, returning the result.
-- Interactive - the Testpmd ones; I believe feeding stdin
programmatically would suffice to test all the combinations.
- We would need to add all test cases in this model, and we would
need to maintain two sets of programs (traditional tests and gRPC
model-based tests).

Just my 2c.


>
> Owen Hilyard (4):
>   app/test-pmd-api: Add C++ Compiler
>   app/test-pmd-api: Add POC with gRPC deps
>   app/test-pmd-api: Add protobuf file
>   app/test-pmd-api: Implementation files for the API
>
>  app/meson.build              |   17 +
>  app/test-pmd-api/api.proto   |   12 +
>  app/test-pmd-api/api_impl.cc | 1160 ++++++++++++++++++++++++++++++++++
>  app/test-pmd-api/api_impl.h  |   10 +
>  app/test-pmd-api/main.c      |   11 +
>  app/test-pmd-api/meson.build |   96 +++
>  meson.build                  |    3 +
>  meson_options.txt            |    2 +
>  8 files changed, 1311 insertions(+)
>  create mode 100644 app/test-pmd-api/api.proto
>  create mode 100644 app/test-pmd-api/api_impl.cc
>  create mode 100644 app/test-pmd-api/api_impl.h
>  create mode 100644 app/test-pmd-api/main.c
>  create mode 100644 app/test-pmd-api/meson.build
>
> --
> 2.30.2
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
  2022-04-11 14:27 ` [PATCH v1 0/4] [RFC] Testpmd RPC API Jerin Jacob
@ 2022-04-11 17:48   ` Owen Hilyard
  2022-04-12  6:07     ` Jerin Jacob
  0 siblings, 1 reply; 12+ messages in thread
From: Owen Hilyard @ 2022-04-11 17:48 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dpdk-dev, Honnappa Nagarahalli, Thomas Monjalon

[-- Attachment #1: Type: text/plain, Size: 4673 bytes --]

>
> scheme is probably over-engineered


I tried my hardest to keep this as simple as possible. The requirements
imposed by DTS being a distributed system in Python restricted what I could
do a lot. Needing to be compatible with DPDK's license also got rid of a
lot of options. Binding generators are made for simple projects, and DPDK
is not a simple project. There were some other options related to choice in
the RPC framework, but very few RPC protocols seem to work well with C and
be usable from Python, which is why I ended up using C++ with gRPC. Most of
the code in api_impl.cc is taken from /app/test-acl/main.c, and the new
part is mostly the C++ class at the bottom. Overall, this proposal comes
out to ~100 lines of new C++, 9 lines of C, 12 lines of gRPC Protobuf and
100 lines of Meson. gRPC may be able to do a lot more than I need it to for
the proof of concept, but many of the features that are not used, like
bi-directional streaming, become very useful in writing more complicated
tests. Overall, this solution is probably more capable than we need it to
be, but I think that those extra capabilities don't come at a very large
cost.
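
For a sense of scale, the whole service surface of this POC is the three
RPCs implemented in api_impl.cc. The proto looks roughly like the
following (a reconstruction from the C++ side, not the literal contents
of api.proto; the field numbering is assumed):

```protobuf
syntax = "proto3";

import "google/protobuf/empty.proto";

// argv-style list: EAL arguments followed by the test-acl style options.
message AclSetupArgs {
  repeated string args = 1;
}

service TestpmdAPI {
  rpc acl_setup(AclSetupArgs) returns (google.protobuf.Empty);
  rpc acl_search(google.protobuf.Empty) returns (google.protobuf.Empty);
  rpc acl_cleanup_config(google.protobuf.Empty) returns (google.protobuf.Empty);
}
```

Everything beyond that (code generation for C++ and Python, streaming,
reflection) comes from gRPC itself rather than from code we maintain.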


> Now that the test code is also part of DPDK.
>

DTS is pure python. I tried to use FFI to call directly into DPDK from
Python and then use xmlrpc from the python standard library. As mentioned
in the writeup, I couldn't find a binding generator that would properly
handle DPDK's allocators, which made it so that anything passed to DPDK
using python was allocated using the system malloc. I don't think it is
wise to attempt to programmatically re-write the generated code to allow
for custom allocators. The original reason for needing to have DTS and DPDK
in the same repository was so that tests could be committed and run
alongside the feature patch.

> Interactive - the Testpmd ones; I believe feeding stdin programmatically
> would suffice to test all the combinations.
>

One of the issues this is trying to address is that human-readable strings
are a poor way to pass complex information between two programs. DTS is a
distributed system, and it can have up to 3 physical servers involved in
any given test. This means that it's not stdin via a pipe, it's an entire
SSH session. This adds a noticeable amount of overhead when trying to send
and verify the result of sending 1,000+ packets, since the lack of
structured output means each packet must be checked before the next can be
sent. This might be solvable by adding a structured output mode to testpmd,
but that would involve committing to writing output twice for every
function in testpmd forever.
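The difference between scraping human-readable output and consuming structured output can be shown in a few lines. Both the testpmd-style line and the JSON record below are invented for illustration, not actual testpmd output.

```python
import json
import re

# Hypothetical testpmd-style human-readable output (illustrative only).
text_output = "Port 0: RX-packets: 1000  TX-packets: 1000  RX-errors: 0"

# Scraping: the test must pattern-match free-form text and breaks if
# the wording or spacing ever changes.
match = re.search(r"RX-packets:\s+(\d+)", text_output)
rx_scraped = int(match.group(1))

# Structured mode: the same information as a machine-readable record,
# no pattern matching required.
structured = '{"port": 0, "rx_packets": 1000, "tx_packets": 1000, "rx_errors": 0}'
rx_structured = json.loads(structured)["rx_packets"]

assert rx_scraped == rx_structured == 1000
```

The scraping path is what DTS does today over SSH; the structured path is what an RPC-based interface would provide natively.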

We need to add all test cases in this model and we need to maintain two
> sets of programs.(Traditional tests and gRPC model-based tests).
>

Assuming by traditional tests you mean the unit tests run by Meson, I would
argue that we are already maintaining 2 kinds of tests. The unit tests, and
the python-based DTS tests. My intention is to create a thin wrapper around
DPDK that would be exposed via gRPC, like you see here, and use that as
midware. Then, we would have two front-ends. Testpmd, which takes text and
then calls midware as it does now, and the gRPC frontend, which parses
messages from the RPC server and runs the midware. This would enable
testpmd to still be used to sanity check a DPDK installation, but we would
not need to continually expand Testpmd. The primary issue is that, right
now, anything not included in Testpmd is not really testable by DTS. This
includes portions of the RTE Flow API, which was part of my reason for
proposing this. The RTE Flow API would, in my estimation, if fully
implemented into Testpmd, probably add at least another 10,000 lines of
code. As mentioned in my proposal, Testpmd already does more parsing and
lexing than it does interaction with DPDK by line count. Also, since I am
proposing making this a separate application, we would be able to gradually
migrate the tests inside of DTS. This would have no effect on anything
except for Testpmd, the new application and the addition of a flag to
toggle the use of a C++ compiler.
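The midware split described above can be sketched as follows. This is an illustrative Python model only; in the proposal the midware and CLI front-end are the existing testpmd C code, and all function names here (`start_forwarding`, `handle_cli`, `handle_rpc`) are hypothetical.

```python
def start_forwarding(port: int) -> str:
    # Midware: the shared layer both front-ends call. In the proposal
    # this is the existing testpmd/DPDK logic.
    return f"forwarding started on port {port}"

def handle_cli(line: str) -> str:
    # Text front-end: parses a human-typed command, then calls the
    # midware. This is the role today's testpmd parser plays.
    cmd, arg = line.split()
    assert cmd == "start"
    return start_forwarding(int(arg))

def handle_rpc(request: dict) -> dict:
    # RPC front-end: arguments arrive already structured, so there is
    # nothing to parse before calling the midware.
    return {"result": start_forwarding(request["port"])}

print(handle_cli("start 0"))
print(handle_rpc({"port": 0}))
```

Both front-ends reach the same midware, so new DPDK functionality only has to be exposed once; only the text front-end needs lexer/parser work.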

I'm not sure exactly what you mean by gRPC model-based tests. gRPC uses
classes to model services, but for this use case we are essentially using it
to transfer function arguments across the internet and then pass the return
value back. Any RPC framework would function similarly if I ignored the
restrictions of which languages to use, and the choice is not important to
how tests are conducted. Put another way, how you write a test for DTS will
not change much if you are using this or testpmd, it's just how you
transfer data and get it back that I want to change.

[-- Attachment #2: Type: text/html, Size: 5463 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
  2022-04-11 17:48   ` Owen Hilyard
@ 2022-04-12  6:07     ` Jerin Jacob
  2022-04-13 12:47       ` Owen Hilyard
  0 siblings, 1 reply; 12+ messages in thread
From: Jerin Jacob @ 2022-04-12  6:07 UTC (permalink / raw)
  To: Owen Hilyard; +Cc: dpdk-dev, Honnappa Nagarahalli, Thomas Monjalon

On Mon, Apr 11, 2022 at 11:19 PM Owen Hilyard <ohilyard@iol.unh.edu> wrote:
>>
>> scheme is probably over-engineered
>
>
> I tried my hardest to keep this as simple as possible. The requirements imposed by DTS being a distributed system in Python restricted what I could do a lot. Needing to be compatible with DPDK's license also got rid of a lot of options. Binding generators are made for simple projects, and DPDK is not a simple project. There were some other options related to choice in the RPC framework, but very few RPC protocols seem to work well with C and be usable from Python, which is why I ended up using C++ with gRPC. Most of the code in api_impl.cc is taken from /app/test-acl/main.c, and the new part is mostly the C++ class at the bottom. Overall, this proposal comes out to ~100 lines of new C++, 9 lines of C, 12 lines of gRPC Protobuf and 100 lines of Meson. gRPC may be able to do a lot more than I need it to for the proof of concept, but many of the features that are not used, like bi-directional streaming, become very useful in writing more complicated tests. Overall, this solution is probably more capable than we need it to be, but I think that those extra capabilities don't come at a very large cost.


Now it is clear. I was carried away by the POC test application and
was not aware that the existing DTS tests are based on Python.

Is the below a fair summary?

1) DPDK has interactive test cases and non-interactive test cases.

For an interactive test case like testpmd, I agree that we can enable an
RPC service via a gRPC server in C++ and a client in Python, along the
lines of exposing the existing test-pmd command-line functions as a
service, to avoid command-line parsing and reuse the existing Python test
suite.

If so, I think the gRPC service would sit alongside the existing testpmd
functions, like start_packet_forwarding(). Also, we don't need to rewrite
the existing testpmd. Instead, we can add the RPC service in the existing
app/test-pmd/ and hook it to the existing core testpmd functions,
bypassing the command-line parsing in C and controlling testpmd from a
Python client as needed.

Also, I agree that pulling in gRPC C++ server boilerplate and hooking
to C functions is a good idea as it is the best C-based RPC scheme
available today.

2) I think DPDK has only one interactive test case, which is testpmd.
The remaining test cases are non-interactive, and non-interactive test
cases can simply run over SSH with passwordless login. Right?
Do we need gRPC for that? Will the following scheme suffice? If not,
how are you planning to do the non-interactive test cases?
i.e.
a) Copy the test to the target
b) result=`ssh username@IP /path/to/testapp/in/target`
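The two-step scheme above can be sketched with Python's subprocess module. To keep the snippet self-contained, the ssh invocation is replaced by a local command; on a real target the command list would be something like ["ssh", "username@IP", "/path/to/testapp/in/target"].

```python
import subprocess
import sys

# Stand-in for the remote test binary: a local command that prints a
# result and exits 0. Replace with the ssh invocation on a real setup.
cmd = [sys.executable, "-c", "print('Test OK')"]

# Run the "remote" test and capture its output and exit status, which
# is all a non-interactive test case needs to report.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip(), result.returncode)
```

The exit code and captured stdout are the entire interface, which is why this scheme works for non-interactive tests but not for interactive ones like testpmd.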

I think the key should be leveraging existing test cases as much as
possible and making it easy for developers to add new test cases.


>>
>> Now that, Test code is also part of DPDK.
>
>
> DTS is pure python. I tried to use FFI to call directly into DPDK from Python and then use xmlrpc from the python standard library. As mentioned in the writeup, I couldn't find a binding generator that would properly handle DPDK's allocators, which made it so that anything passed to DPDK using python was allocated using the system malloc. I don't think it is wise to attempt to programmatically re-write the generated code to allow for custom allocators. The original reason for needing to have DTS and DPDK in the same repository was so that tests could be committed and run alongside the feature patch.
>
>> Interactive - Testpmd one, I believe, Feeding stdin programmatically would suffice to test all the combinations.
>
>
> One of the issues this is trying to address is that human-readable strings are a poor way to pass complex information between two programs. DTS is a distributed system, and it can have up to 3 physical servers involved in any given test. This means that it's not stdin via a pipe, it's an entire SSH session. This adds a noticeable amount of overhead when trying to send and verify the result of sending 1,000+ packets, since the lack of structured output means each packet must be checked before the next can be sent. This might be solvable by adding a structured output mode to testpmd, but that would involve committing to writing output twice for every function in testpmd forever.
>
>> We need to add all test cases in this model and we need to maintain two sets of programs.(Traditional tests and gRPC model-based tests).
>
>
> Assuming by traditional tests you mean the unit tests run by Meson, I would argue that we are already maintaining 2 kinds of tests. The unit tests, and the python-based DTS tests. My intention is to create a thin wrapper around DPDK that would be exposed via gRPC, like you see here, and use that as midware. Then, we would have two front-ends. Testpmd, which takes text and then calls midware as it does now, and the gRPC frontend, which parses messages from the RPC server and runs the midware. This would enable testpmd to still be used to sanity check a DPDK installation, but we would not need to continually expand Testpmd. The primary issue is that, right now, anything not included in Testpmd is not really testable by DTS. This includes portions of the RTE Flow API, which was part of my reason for proposing this. The RTE Flow API would, in my estimation, if fully implemented into Testpmd, probably add at least another 10,000 lines of code. As mentioned in my proposal, Testpmd already does more parsing and lexing than it does interaction with DPDK by line count. Also, since I am proposing making this a separate application, we would be able to gradually migrate the tests inside of DTS. This would have no effect on anything except for Testpmd, the new application and the addition of a flag to toggle the use of a C++ compiler.
>
> I'm not sure exactly what you mean by gRPC model-based tests. gRPC uses classes to model services, but for this usecase we are essentially using it to transfer function arguments across the internet and then pass the return value back. Any RPC framework would function similarly if I ignored the restrictions of which languages to use, and the choice is not important to how tests are conducted. Put another way, how you write a test for DTS will not change much if you are using this or testpmd, it's just how you transfer data and get it back that I want to change.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
  2022-04-12  6:07     ` Jerin Jacob
@ 2022-04-13 12:47       ` Owen Hilyard
  2022-04-14 12:07         ` Ananyev, Konstantin
  0 siblings, 1 reply; 12+ messages in thread
From: Owen Hilyard @ 2022-04-13 12:47 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dpdk-dev, Honnappa Nagarahalli, Thomas Monjalon

[-- Attachment #1: Type: text/plain, Size: 9536 bytes --]

>
> If so, I think, gRPC service would be along with existing
> testpmd functions, like start_packet_forwarding().


It was my intention to re-use existing functions. I used the ACL tests as
an example because they are more self-contained than Testpmd, which made
creating the proof of concept much easier.

Also, We don't need to rewrite the existing testpmd, Instead, RPC service,
> we can add in existing app/test-pmd/
>

The reason that I split out the services is that there doesn't seem to be a
way to produce multiple binaries without re-writing that section of the
build system. I wanted to avoid the hard requirement of having a C++
compiler available in order to be able to use testpmd, since that may
affect what platforms Testpmd can run on and I want to avoid this being any
kind of breaking change. If we decide to go the route of putting it all in
a single application, we would need to conditionally enable the gRPC
service at build time. Meson's current lack of support for conditionally
detecting compilers causes issues here.

I think, DPDK has only one interactive test case which is testpmd,
>

Could you point me to that test case? Either invocation or source is ok. I
can't see anything that would lead me to assume use of testpmd in "meson
test --list". To my knowledge, all of the test cases that use testpmd are
in DTS. If there is a test that uses testpmd but is not part of DTS, I
think it would be a candidate for moving into DTS assuming it's not a unit
test.

How you are planning to do noninteractive test cases?


I'm not planning to make any change to unit testing, you can read more
about how testing is currently conducted here:
https://www.dpdk.org/blog/2021/07/05/dpdk-testing-approaches/

If there is a unit test that involves testpmd, there are two possibilities.
1. If we are making a separate application for Testpmd with the gRPC api,
then nothing changes except for possibly changing where some of the testpmd
source lives in order to enable code reuse between the two applications.
2. If gRPC is being added to Testpmd, then the unit test should still
function as it previously did, provided I do any necessary refactoring
correctly.

I think, key should be leveraging existing test cases as much as possible
> and make easy for developers to add new test cases.


That is part of the reason why I want to be able to do this. Adding a new
test in DTS is very easy if the functionality needed already exists in
Testpmd. If the functionality does not exist, then adding the test becomes
difficult, due to the required modifications to the Testpmd lexer and
parser to accommodate the new command. My plan is to leave unit testing in
C, but help make it easier to expose C functions to Python for integration
testing. This gives us the best of both worlds in terms of access to DPDK
and the ability to use a high-level language to write the tests.

On Tue, Apr 12, 2022 at 2:07 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:

> On Mon, Apr 11, 2022 at 11:19 PM Owen Hilyard <ohilyard@iol.unh.edu>
> wrote:
> >>
> >> scheme is probably over-engineered
> >
> >
> > I tried my hardest to keep this as simple as possible. The requirements
> imposed by DTS being a distributed system in Python restricted what I could
> do a lot. Needing to be compatible with DPDK's license also got rid of a
> lot of options. Binding generators are made for simple projects, and DPDK
> is not a simple project. There were some other options related to choice in
> the RPC framework, but very few RPC protocols seem to work well with C and
> be usable from Python, which is why I ended up using C++ with gRPC. Most of
> the code in api_impl.cc is taken from /app/test-acl/main.c, and the new
> part is mostly the C++ class at the bottom. Overall, this proposal comes
> out to ~100 lines of new C++, 9 lines of C, 12 lines of gRPC Protobuf and
> 100 lines of Meson. gRPC may be able to do a lot more than I need it to for
> the proof of concept, but many of the features that are not used, like
> bi-directional streaming, become very useful in writing more complicated
> tests. Overall, this solution is probably more capable than we need it to
> be, but I think that those extra capabilities don't come at a very large
> cost.
>
>
> Now it is clear, I was carried away with the POC test application and
> I was not knowing existing DTS tests are based on python
>
> Is below a fair summary?
>
> 1) DPDK has interactive test cases and no interactive test cases.
>
> For The interactive test case like testpmd, I agree that we can enable
> RPC service via gRPC server in C++ as  and client in Python, and
> something along the lines of exposing the existing test-pmd command
> line function as service
> to avoid command line parsing and reuse the existing python test suite.
>
> If so, I think, gRPC service would be along with existing testpmd
> functions, like start_packet_forwarding(). Also, We don't need to
> rewrite the existing testpmd,
> Instead, RPC service, we can add in existing app/test-pmd/ and hook to
> existing core testpmd functions to bypass the command-line parsing in
> C and control from python client as needed as service.
>
> Also, I agree that pulling in gRPC C++ server boilerplate and hooking
> to C functions is a good idea as it is the best C-based RPC scheme
> available today.
>
> 2)I think, DPDK has only one interactive test case which is testpmd,
> Remaining test cases are non-interactive, non-interactive test cases
> can simply run over ssh with passwordless login. Right?
> Do we need gRPC for that? Will the following scheme suffice? If not,
> How you are planning to do noninteractive test cases?
> i.e
> a)Copy test to target
> b) result=`ssh username@IP /path/to/testapp/in/target`
>
> I think, key should be leveraging existing test cases as much as
> possible and make easy for developers to add new test cases.
>
>
> >>
> >> Now that, Test code is also part of DPDK.
> >
> >
> > DTS is pure python. I tried to use FFI to call directly into DPDK from
> Python and then use xmlrpc from the python standard library. As mentioned
> in the writeup, I couldn't find a binding generator that would properly
> handle DPDK's allocators, which made it so that anything passed to DPDK
> using python was allocated using the system malloc. I don't think it is
> wise to attempt to programmatically re-write the generated code to allow
> for custom allocators. The original reason for needing to have DTS and DPDK
> in the same repository was so that tests could be committed and run
> alongside the feature patch.
> >
> >> Interactive - Testpmd one, I believe, Feeding stdin programmatically
> would suffice to test all the combinations.
> >
> >
> > One of the issues this is trying to address is that human-readable
> strings are a poor way to pass complex information between two programs.
> DTS is a distributed system, and it can have up to 3 physical servers
> involved in any given test. This means that it's not stdin via a pipe, it's
> an entire SSH session. This adds a noticeable amount of overhead when
> trying to send and verify the result of sending 1,000+ packets, since the
> lack of structured output means each packet must be checked before the next
> can be sent. This might be solvable by adding a structured output mode to
> testpmd, but that would involve committing to writing output twice for
> every function in testpmd forever.
> >
> >> We need to add all test cases in this model and we need to maintain two
> sets of programs.(Traditional tests and gRPC model-based tests).
> >
> >
> > Assuming by traditional tests you mean the unit tests run by Meson, I
> would argue that we are already maintaining 2 kinds of tests. The unit
> tests, and the python-based DTS tests. My intention is to create a thin
> wrapper around DPDK that would be exposed via gRPC, like you see here, and
> use that as midware. Then, we would have two front-ends. Testpmd, which
> takes text and then calls midware as it does now, and the gRPC frontend,
> which parses messages from the RPC server and runs the midware. This would
> enable testpmd to still be used to sanity check a DPDK installation, but we
> would not need to continually expand Testpmd. The primary issue is that,
> right now, anything not included in Testpmd is not really testable by DTS.
> This includes portions of the RTE Flow API, which was part of my reason for
> proposing this. The RTE Flow API would, in my estimation, if fully
> implemented into Testpmd, probably add at least another 10,000 lines of
> code. As mentioned in my proposal, Testpmd already does more parsing and
> lexing than it does interaction with DPDK by line count. Also, since I am
> proposing making this a separate application, we would be able to gradually
> migrate the tests inside of DTS. This would have no effect on anything
> except for Testpmd, the new application and the addition of a flag to
> toggle the use of a C++ compiler.
> >
> > I'm not sure exactly what you mean by gRPC model-based tests. gRPC uses
> classes to model services, but for this usecase we are essentially using it
> to transfer function arguments across the internet and then pass the return
> value back. Any RPC framework would function similarly if I ignored the
> restrictions of which languages to use, and the choice is not important to
> how tests are conducted. Put another way, how you write a test for DTS will
> not change much if you are using this or testpmd, it's just how you
> transfer data and get it back that I want to change.
>

[-- Attachment #2: Type: text/html, Size: 11088 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v1 0/4] [RFC] Testpmd RPC API
  2022-04-13 12:47       ` Owen Hilyard
@ 2022-04-14 12:07         ` Ananyev, Konstantin
  2022-04-14 20:09           ` Owen Hilyard
  0 siblings, 1 reply; 12+ messages in thread
From: Ananyev, Konstantin @ 2022-04-14 12:07 UTC (permalink / raw)
  To: Owen Hilyard, Jerin Jacob; +Cc: dpdk-dev, Honnappa Nagarahalli, Thomas Monjalon


Hi everyone,

First of all thanks Owen for stepping forward with this RFC.
Few thoughts on this subject below.
Konstantin   

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Thursday, April 14, 2022 12:59 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Subject: FW: [PATCH v1 0/4] [RFC] Testpmd RPC API
> 
> 
> 
> From: Owen Hilyard <ohilyard@iol.unh.edu>
> Sent: Wednesday, April 13, 2022 1:47 PM
> To: Jerin Jacob <jerinjacobk@gmail.com>
> Cc: dpdk-dev <dev@dpdk.org>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
> 
> If so, I think, gRPC service would be along with existing testpmd functions, like start_packet_forwarding().
> 
> It was my intention to re-use existing functions. I used the ACL tests as an example because they are more self-contained then Testpmd,
> which made creating the proof of concept much easier.
> 
> Also, We don't need to rewrite the existing testpmd, Instead, RPC service, we can add in existing app/test-pmd/
> 
> The reason that I split out the services is that there doesn't seem to be a way to produce multiple binaries without re-writing that section of
> the build system. I wanted to avoid the hard requirement of having a C++ compiler available in order to be able to use testpmd, since that
> may affect what platforms Testpmd can run on and I want to avoid this being any kind of breaking change. If we decide to go the route of
> putting it all in a single application, we would need to conditionally enable the gRPC service at build time. Meson's current lack of support
> for conditionally detecting compilers causes issues here.
> 
> I think, DPDK has only one interactive test case which is testpmd,
> 
> Could you point me to that test case? Either invocation or source is ok. I can't see anything that would lead me to assume use of testpmd in
> "meson test --list". To my knowledge, all of the test cases that use testpmd are in DTS. If there is a test that uses testpmd but is not part of
> DTS, I think it would be a candidate for moving into DTS assuming it's not a unit test.
> 
> How you are planning to do noninteractive test cases?
> 
> I'm not planning to make any change to unit testing, you can read more about how testing is currently conducted
> here: https://www.dpdk.org/blog/2021/07/05/dpdk-testing-approaches/
> 
> If there is a unit test that involves testpmd, there are two possibilities.
> 1. If we are making a separate application for Testpmd with the gRPC api, then nothing changes except for possibly changing where some
> of the testpmd source lives in order to enable code reuse between the two applications.
> 2. If gRPC is being added to Testpmd, then the unit test should still function as it previously did if I do any necessary refactoring as correctly.
> 
> I think, key should be leveraging existing test cases as much as possible and make easy for developers to add new test cases.
> 
> That is part of the reason why I want to be able to do this. Adding a new test in DTS is very easy if the functionality needed already exists in
> Testpmd. If the functionality does not exist, then adding the test becomes difficult, due to the required modifications to the Testpmd lexer
> and parser to accommodate the new command. My plan is to leave unit testing in C, but help make it easier to expose C functions to Python
> for integration testing. This gives us the best of both worlds in terms of access to DPDK and the ability to use a high-level language to write
> the tests.
> 
> On Tue, Apr 12, 2022 at 2:07 AM Jerin Jacob <mailto:jerinjacobk@gmail.com> wrote:
> On Mon, Apr 11, 2022 at 11:19 PM Owen Hilyard <mailto:ohilyard@iol.unh.edu> wrote:
> >>
> >> scheme is probably over-engineered
> >
> >
> > I tried my hardest to keep this as simple as possible. The requirements imposed by DTS being a distributed system in Python restricted
> what I could do a lot. Needing to be compatible with DPDK's license also got rid of a lot of options. Binding generators are made for simple
> projects, and DPDK is not a simple project. There were some other options related to choice in the RPC framework, but very few RPC
> protocols seem to work well with C and be usable from Python, which is why I ended up using C++ with gRPC. Most of the code in
> api_impl.cc is taken from /app/test-acl/main.c, and the new part is mostly the C++ class at the bottom. Overall, this proposal comes out to
> ~100 lines of new C++, 9 lines of C, 12 lines of gRPC Protobuf and 100 lines of Meson. gRPC may be able to do a lot more than I need it to
> for the proof of concept, but many of the features that are not used, like bi-directional streaming, become very useful in writing more
> complicated tests. Overall, this solution is probably more capable than we need it to be, but I think that those extra capabilities don't come
> at a very large cost.
> 
> 
> Now it is clear, I was carried away with the POC test application and
> I was not knowing existing DTS tests are based on python
> 
> Is below a fair summary?
> 
> 1) DPDK has interactive test cases and no interactive test cases.
> 
> For The interactive test case like testpmd, I agree that we can enable
> RPC service via gRPC server in C++ as  and client in Python, and
> something along the lines of exposing the existing test-pmd command
> line function as service
> to avoid command line parsing and reuse the existing python test suite.
> 
> If so, I think, gRPC service would be along with existing testpmd
> functions, like start_packet_forwarding(). Also, We don't need to
> rewrite the existing testpmd,
> Instead, RPC service, we can add in existing app/test-pmd/ and hook to
> existing core testpmd functions to bypass the command-line parsing in
> C and control from python client as needed as service.
> 
> Also, I agree that pulling in gRPC C++ server boilerplate and hooking
> to C functions is a good idea as it is the best C-based RPC scheme
> available today.
> 
> 2)I think, DPDK has only one interactive test case which is testpmd,
> Remaining test cases are non-interactive, non-interactive test cases
> can simply run over ssh with passwordless login. Right?
> Do we need gRPC for that? Will the following scheme suffice? If not,
> How you are planning to do noninteractive test cases?
> i.e
> a)Copy test to target
> b) result=`ssh username@IP /path/to/testapp/in/target`
> 
> I think, key should be leveraging existing test cases as much as
> possible and make easy for developers to add new test cases.
> 
> 
> >>
> >> Now that, Test code is also part of DPDK.
> >
> >
> > DTS is pure python. I tried to use FFI to call directly into DPDK from Python and then use xmlrpc from the python standard library. As
> mentioned in the writeup, I couldn't find a binding generator that would properly handle DPDK's allocators, which made it so that anything
> passed to DPDK using python was allocated using the system malloc. I don't think it is wise to attempt to programmatically re-write the
> generated code to allow for custom allocators. The original reason for needing to have DTS and DPDK in the same repository was so that
> tests could be committed and run alongside the feature patch.
> >
> >> Interactive - Testpmd one, I believe, Feeding stdin programmatically would suffice to test all the combinations.
> >
> >
> > One of the issues this is trying to address is that human-readable strings are a poor way to pass complex information between two
> programs. DTS is a distributed system, and it can have up to 3 physical servers involved in any given test. This means that it's not stdin via a
> pipe, it's an entire SSH session. This adds a noticeable amount of overhead when trying to send and verify the result of sending 1,000+
> packets, since the lack of structured output means each packet must be checked before the next can be sent. This might be solvable by
> adding a structured output mode to testpmd, but that would involve committing to writing output twice for every function in testpmd
> forever.
> >
> >> We need to add all test cases in this model and we need to maintain two sets of programs.(Traditional tests and gRPC model-based
> tests).
> >
> >
> > Assuming by traditional tests you mean the unit tests run by Meson, I would argue that we are already maintaining 2 kinds of tests. The
> unit tests, and the python-based DTS tests. My intention is to create a thin wrapper around DPDK that would be exposed via gRPC, like you
> see here, and use that as midware. Then, we would have two front-ends. Testpmd, which takes text and then calls midware as it does now,
> and the gRPC frontend, which parses messages from the RPC server and runs the midware. This would enable testpmd to still be used to
> sanity check a DPDK installation, but we would not need to continually expand Testpmd. The primary issue is that, right now, anything not
> included in Testpmd is not really testable by DTS. This includes portions of the RTE Flow API, which was part of my reason for proposing this.
> The RTE Flow API would, in my estimation, if fully implemented into Testpmd, probably add at least another 10,000 lines of code. As
> mentioned in my proposal, Testpmd already does more parsing and lexing than it does interaction with DPDK by line count. Also, since I am
> proposing making this a separate application, we would be able to gradually migrate the tests inside of DTS. This would have no effect on
> anything except for Testpmd, the new application and the addition of a flag to toggle the use of a C++ compiler.
> >
> > I'm not sure exactly what you mean by gRPC model-based tests. gRPC uses classes to model services, but for this usecase we are
> essentially using it to transfer function arguments across the internet and then pass the return value back. Any RPC framework would
> function similarly if I ignored the restrictions of which languages to use, and the choice is not important to how tests are conducted. Put
> another way, how you write a test for DTS will not change much if you are using this or testpmd, it's just how you transfer data and get it
> back that I want to change.

- In general I think it is a good idea to add a gRPC binding to testpmd to expose/improve testing automation.
  Though I think we shouldn't remove the existing CLI interface.
  Ideally I'd like to have both CLI and gRPC for all commands.
  I don't know how realistic that is, but at least for the major commands: port/queue configure, start/stop, etc.
- Conditional compilation (a new meson flag or so) is probably good enough for this case.
- About the RFC itself: I understand that you chose test-acl for simplicity, but in fact it is a standalone application
  that has not much in common with testpmd itself or the problems that you mentioned:
  interactive commands, parameter and result parsing, etc.
  Would it be possible to try to implement something more realistic with testpmd itself,
  like a simple test-pmd port/queue configure, start, and result collection?
  To get a better idea of how it is going to work and how complicated it would be.



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
  2022-04-14 12:07         ` Ananyev, Konstantin
@ 2022-04-14 20:09           ` Owen Hilyard
  0 siblings, 0 replies; 12+ messages in thread
From: Owen Hilyard @ 2022-04-14 20:09 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Jerin Jacob, dpdk-dev, Honnappa Nagarahalli, Thomas Monjalon

[-- Attachment #1: Type: text/plain, Size: 13019 bytes --]

>
>   Though I think we shouldn’t remove existing CLI interface.
>

I agree, it's a very useful debugging tool for validating environments. I
think having two "frontends", the CLI and API, which both consume one
"backend" testpmd library would be the easiest way to go about doing that
while keeping long-term maintenance low.

Conditional compilation (new meson flag or so) is probably good enough for
> this case.
>

One of the changes I made was an on-by-default meson flag to enable C++
compilation. If that flag is on, and all dependencies are present, then the
application will be built.

Would it be possible to try implement something more realistic with testpmd
> itself


I would consider it a "phase 2" version of this RFC. The hard part was
getting gRPC working inside of Meson, which is why I picked a simple app to
port. If this RFC moves forward, I can look at porting the functionality
needed for the nic single core performance test (
http://git.dpdk.org/tools/dts/tree/test_plans/nic_single_core_perf_test_plan.rst
).
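To give a concrete feel for that "phase 2", here is a rough Python sketch of what the DTS side might look like. The service, method, and message names (FakeTestpmdStub, StartForwarding, ForwardingStats) are invented for illustration — a real version would use stubs generated by gRPC from the .proto file — and a trivial fake stub stands in so the control flow runs:

```python
from dataclasses import dataclass

# Hypothetical reply message; a generated protobuf class in a real build.
@dataclass
class ForwardingStats:
    rx_packets: int
    tx_packets: int

class FakeTestpmdStub:
    """Stand-in for a grpc-generated testpmd stub; returns canned stats."""
    def StartForwarding(self, port_id: int) -> None:
        self._running = True
    def StopForwarding(self, port_id: int) -> ForwardingStats:
        self._running = False
        return ForwardingStats(rx_packets=1000, tx_packets=1000)

def run_single_core_perf(stub) -> bool:
    # Structured calls replace "start"/"stop" strings sent over SSH, and
    # the stats come back as typed fields instead of scraped console text.
    stub.StartForwarding(port_id=0)
    stats = stub.StopForwarding(port_id=0)
    return stats.tx_packets >= stats.rx_packets

assert run_single_core_perf(FakeTestpmdStub())
```

The point is not the fake numbers but the shape: the test asserts on typed fields rather than pattern-matching a word in testpmd's stdout.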

On Thu, Apr 14, 2022 at 8:08 AM Ananyev, Konstantin <
konstantin.ananyev@intel.com> wrote:

>
> Hi everyone,
>
> First of all thanks Owen for stepping forward with this RFC.
> Few thoughts on this subject below.
> Konstantin
>
> > -----Original Message-----
> > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Sent: Thursday, April 14, 2022 12:59 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Subject: FW: [PATCH v1 0/4] [RFC] Testpmd RPC API
> >
> >
> >
> > From: Owen Hilyard <ohilyard@iol.unh.edu>
> > Sent: Wednesday, April 13, 2022 1:47 PM
> > To: Jerin Jacob <jerinjacobk@gmail.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Honnappa Nagarahalli <
> Honnappa.Nagarahalli@arm.com>; Thomas Monjalon <thomas@monjalon.net>
> > Subject: Re: [PATCH v1 0/4] [RFC] Testpmd RPC API
> >
> > If so, I think, gRPC service would be along with existing testpmd
> functions, like start_packet_forwarding().
> >
> > It was my intention to re-use existing functions. I used the ACL tests
> as an example because they are more self-contained than Testpmd,
> > which made creating the proof of concept much easier.
> >
> > Also, We don't need to rewrite the existing testpmd, Instead, RPC
> service, we can add in existing app/test-pmd/
> >
> > The reason that I split out the services is that there doesn't seem to
> be a way to produce multiple binaries without re-writing that section of
> > the build system. I wanted to avoid the hard requirement of having a C++
> compiler available in order to be able to use testpmd, since that
> > may affect what platforms Testpmd can run on and I want to avoid this
> being any kind of breaking change. If we decide to go the route of
> > putting it all in a single application, we would need to conditionally
> enable the gRPC service at build time. Meson's current lack of support
> > for conditionally detecting compilers causes issues here.
> >
> > I think, DPDK has only one interactive test case which is testpmd,
> >
> > Could you point me to that test case? Either invocation or source is ok.
> I can't see anything that would lead me to assume use of testpmd in
> > "meson test --list". To my knowledge, all of the test cases that use
> testpmd are in DTS. If there is a test that uses testpmd but is not part of
> > DTS, I think it would be a candidate for moving into DTS assuming it's
> not a unit test.
> >
> > How you are planning to do noninteractive test cases?
> >
> > I'm not planning to make any change to unit testing, you can read more
> about how testing is currently conducted
> > here: https://www.dpdk.org/blog/2021/07/05/dpdk-testing-approaches/
> >
> > If there is a unit test that involves testpmd, there are two
> possibilities.
> > 1. If we are making a separate application for Testpmd with the gRPC
> api, then nothing changes except for possibly changing where some
> > of the testpmd source lives in order to enable code reuse between the
> two applications.
> > 2. If gRPC is being added to Testpmd, then the unit test should still
> function as it previously did, provided I do any necessary refactoring
> correctly.
> >
> > I think the key should be leveraging existing test cases as much as
> possible and making it easy for developers to add new test cases.
> >
> > That is part of the reason why I want to be able to do this. Adding a
> new test in DTS is very easy if the functionality needed already exists in
> > Testpmd. If the functionality does not exist, then adding the test
> becomes difficult, due to the required modifications to the Testpmd lexer
> > and parser to accommodate the new command. My plan is to leave unit
> testing in C, but help make it easier to expose C functions to Python
> > for integration testing. This gives us the best of both worlds in terms
> of access to DPDK and the ability to use a high-level language to write
> > the tests.
> >
> > On Tue, Apr 12, 2022 at 2:07 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Mon, Apr 11, 2022 at 11:19 PM Owen Hilyard <ohilyard@iol.unh.edu> wrote:
> > >>
> > >> scheme is probably over-engineered
> > >
> > >
> > > I tried my hardest to keep this as simple as possible. The
> requirements imposed by DTS being a distributed system in Python restricted
> > what I could do a lot. Needing to be compatible with DPDK's license also
> got rid of a lot of options. Binding generators are made for simple
> > projects, and DPDK is not a simple project. There were some other
> options related to choice in the RPC framework, but very few RPC
> > protocols seem to work well with C and be usable from Python, which is
> why I ended up using C++ with gRPC. Most of the code in
> > api_impl.cc is taken from /app/test-acl/main.c, and the new part is
> mostly the C++ class at the bottom. Overall, this proposal comes out to
> > ~100 lines of new C++, 9 lines of C, 12 lines of gRPC Protobuf and 100
> lines of Meson. gRPC may be able to do a lot more than I need it to
> > for the proof of concept, but many of the features that are not used,
> like bi-directional streaming, become very useful in writing more
> > complicated tests. Overall, this solution is probably more capable than
> we need it to be, but I think that those extra capabilities don't come
> > at a very large cost.
> >
> >
> > Now it is clear; I was carried away with the POC test application and
> > did not know the existing DTS tests are based on Python.
> >
> > Is below a fair summary?
> >
> > 1) DPDK has interactive test cases and non-interactive test cases.
> >
> > For the interactive test case like testpmd, I agree that we can enable
> > an RPC service via a gRPC server in C++ and a client in Python,
> > something along the lines of exposing the existing test-pmd command-line
> > functions as a service to avoid command-line parsing and reuse the
> > existing Python test suite.
> >
> > If so, I think the gRPC service would sit alongside existing testpmd
> > functions, like start_packet_forwarding(). Also, we don't need to
> > rewrite the existing testpmd; instead, we can add the RPC service in the
> > existing app/test-pmd/ and hook it to the existing core testpmd
> > functions to bypass the command-line parsing in C and control it from
> > the Python client as needed.
> >
> > Also, I agree that pulling in gRPC C++ server boilerplate and hooking
> > to C functions is a good idea as it is the best C-based RPC scheme
> > available today.
> >
> > 2) I think DPDK has only one interactive test case, which is testpmd.
> > The remaining test cases are non-interactive, and non-interactive test
> > cases can simply run over ssh with passwordless login. Right?
> > Do we need gRPC for that? Will the following scheme suffice? If not,
> > how are you planning to do the non-interactive test cases?
> > i.e
> > a)Copy test to target
> > b) result=`ssh username@IP /path/to/testapp/in/target`
> >
> > I think the key should be leveraging existing test cases as much as
> > possible and making it easy for developers to add new test cases.
> >
> >
> > >>
> > >> Now that, Test code is also part of DPDK.
> > >
> > >
> > > DTS is pure python. I tried to use FFI to call directly into DPDK from
> Python and then use xmlrpc from the python standard library. As
> > mentioned in the writeup, I couldn't find a binding generator that would
> properly handle DPDK's allocators, which made it so that anything
> > passed to DPDK using python was allocated using the system malloc. I
> don't think it is wise to attempt to programmatically re-write the
> > generated code to allow for custom allocators. The original reason for
> needing to have DTS and DPDK in the same repository was so that
> > tests could be committed and run alongside the feature patch.
> > >
> > >> Interactive - Testpmd one, I believe, Feeding stdin programmatically
> would suffice to test all the combinations.
> > >
> > >
> > > One of the issues this is trying to address is that human-readable
> strings are a poor way to pass complex information between two
> > programs. DTS is a distributed system, and it can have up to 3 physical
> servers involved in any given test. This means that it's not stdin via a
> > pipe, it's an entire SSH session. This adds a noticeable amount of
> overhead when trying to send and verify the result of sending 1,000+
> > packets, since the lack of structured output means each packet must be
> checked before the next can be sent. This might be solvable by
> > adding a structured output mode to testpmd, but that would involve
> committing to writing output twice for every function in testpmd
> > forever.
> > >
> > >> We need to add all test cases in this model and we need to maintain
> two sets of programs.(Traditional tests and gRPC model-based
> > tests).
> > >
> > >
> > > Assuming by traditional tests you mean the unit tests run by Meson, I
> would argue that we are already maintaining 2 kinds of tests. The
> > unit tests, and the python-based DTS tests. My intention is to create a
> thin wrapper around DPDK that would be exposed via gRPC, like you
> > see here, and use that as midware. Then, we would have two front-ends.
> Testpmd, which takes text and then calls midware as it does now,
> > and the gRPC frontend, which parses messages from the RPC server and
> runs the midware. This would enable testpmd to still be used to
> > sanity check a DPDK installation, but we would not need to continually
> expand Testpmd. The primary issue is that, right now, anything not
> > included in Testpmd is not really testable by DTS. This includes
> portions of the RTE Flow API, which was part of my reason for proposing
> this.
> > The RTE Flow API would, in my estimation, if fully implemented into
> Testpmd, probably add at least another 10,000 lines of code. As
> > mentioned in my proposal, Testpmd already does more parsing and lexing
> than it does interaction with DPDK by line count. Also, since I am
> > proposing making this a separate application, we would be able to
> gradually migrate the tests inside of DTS. This would have no effect on
> > anything except for Testpmd, the new application and the addition of a
> flag to toggle the use of a C++ compiler.
> > >
> > > I'm not sure exactly what you mean by gRPC model-based tests. gRPC
> uses classes to model services, but for this use case we are
> > essentially using it to transfer function arguments across the internet
> and then pass the return value back. Any RPC framework would
> > function similarly if I ignored the restrictions of which languages to
> use, and the choice is not important to how tests are conducted. Put
> > another way, how you write a test for DTS will not change much if you
> are using this or testpmd, it's just how you transfer data and get it
> > back that I want to change.
>
> - In general I think it is a good idea to add a gRPC binding to testpmd
> to expose/improve testing automation.
>   Though I think we shouldn’t remove existing CLI interface.
>   Ideally I’d like to have both – CLI and gRPC for all commands.
> Don’t know how realistic that is, but at least for the major commands -
> port/queue configure, start/stop, etc.
> - Conditional compilation (new meson flag or so) is probably good enough
> for this case.
> - About the RFC itself - I understand that you chose test-acl for simplicity,
> but in fact it is a standalone application
>   that has not much in common with testpmd itself and the problems that you
> mentioned:
>   interactive commands, parameter and results parsing, etc.
>   Would it be possible to try to implement something more realistic with
> testpmd itself,
>   like a simple test-pmd port/queue configure, start, result collection,
> etc.?
>   To get a better idea how it is going to work and how complicated it
> would be.
>
>
>

[-- Attachment #2: Type: text/html, Size: 15381 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler
  2022-04-07 21:47 ` [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler ohilyard
@ 2023-10-02 18:33   ` Stephen Hemminger
  0 siblings, 0 replies; 12+ messages in thread
From: Stephen Hemminger @ 2023-10-02 18:33 UTC (permalink / raw)
  To: ohilyard; +Cc: dev, Honnappa.Nagarahalli, thomas

On Thu,  7 Apr 2022 17:47:05 -0400
ohilyard@iol.unh.edu wrote:

> From: Owen Hilyard <ohilyard@iol.unh.edu>
> 
> Adds a C++ compiler to the project, which is currently enabled by
> default for ease of testing. Meson currently lacks a way to try to get a
> compiler, and failing to find a compiler for a language always causes a
> hard error, so this is the only workable approach.
> 
> Signed-off-by: Owen Hilyard <ohilyard@iol.unh.edu>

This patch has a problem.
Whatever editor you used failed to add an end-of-line character (newline)
on the last line of the file. Git accepts this but complains; other tools
do not handle it well.

Please rebase and fix the series.

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-10-02 18:33 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-07 21:47 [PATCH v1 0/4] [RFC] Testpmd RPC API ohilyard
2022-04-07 21:47 ` [PATCH v1 1/4] app/test-pmd-api: Add C++ Compiler ohilyard
2023-10-02 18:33   ` Stephen Hemminger
2022-04-07 21:47 ` [PATCH v1 2/4] app/test-pmd-api: Add POC with gRPC deps ohilyard
2022-04-07 21:47 ` [PATCH v1 3/4] app/test-pmd-api: Add protobuf file ohilyard
2022-04-07 21:47 ` [PATCH v1 4/4] app/test-pmd-api: Implementation files for the API ohilyard
2022-04-11 14:27 ` [PATCH v1 0/4] [RFC] Testpmd RPC API Jerin Jacob
2022-04-11 17:48   ` Owen Hilyard
2022-04-12  6:07     ` Jerin Jacob
2022-04-13 12:47       ` Owen Hilyard
2022-04-14 12:07         ` Ananyev, Konstantin
2022-04-14 20:09           ` Owen Hilyard

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).