* [RFC PATCH v1 00/10] dts: add hello world testcase
@ 2022-08-24 16:24 Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 01/10] dts: hello world config options Juraj Linkeš
` (10 more replies)
0 siblings, 11 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
The series is built on top of the existing SSH connection patch submitted
earlier and contains the rest of the code to run the most basic
testcase, hello world, which just runs the hello world application and
verifies the output.
The code added is mostly about setting up the testbed, building DPDK on
the system under test and then running the testcases defined in one
testsuite.
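In simplified pseudocode, the flow the series implements looks roughly
like this (a sketch only; run_suite stands in for the runner added in
patch 8):

    config = load_config()
    for execution in config.executions:
        sut_node = SutNode(execution.system_under_test)
        sut_node.prerequisites()  # copy, extract and verify the DPDK tarball
        for target in execution.target_descriptions:
            sut_node.set_target(str(target))  # build and install DPDK
            for test_suite in execution.test_suites:
                run_suite(sut_node, test_suite)  # e.g. hello_world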
There are three areas I'm planning to improve:
1. Revisit how the methods in Node, SutNode and TrafficGeneratorNode can
be redistributed so that we have a clean split between code that's common
to both node types and code that's specific to each.
2. Add abstractions that handle OS differences.
3. A better split of the code into commits.
A new version with the above changes will be submitted in around
two to three weeks' time.
Juraj Linkeš (10):
dts: hello world config options
dts: hello world cli parameters and env vars
dts: ssh connection additions for hello world
dts: add basic node management methods
dts: add system under test node
dts: add traffic generator node
dts: add testcase and basic test results
dts: add test runner and statistics collector
dts: add hello world testplan
dts: add hello world testsuite
dts/conf.yaml | 20 +-
dts/framework/config/__init__.py | 141 ++++-
dts/framework/config/conf_yaml_schema.json | 139 ++++-
dts/framework/dts.py | 174 +++++-
dts/framework/exception.py | 15 +
dts/framework/logger.py | 9 +-
dts/framework/node.py | 395 +++++++++++++-
dts/framework/settings.py | 96 +++-
dts/framework/ssh_connection.py | 19 +
dts/framework/ssh_pexpect.py | 61 ++-
dts/framework/stats_reporter.py | 70 +++
dts/framework/sut_node.py | 603 +++++++++++++++++++++
dts/framework/test_case.py | 274 ++++++++++
dts/framework/test_result.py | 218 ++++++++
dts/framework/tg_node.py | 78 +++
dts/framework/utils.py | 14 +
dts/test_plans/hello_world_test_plan.rst | 68 +++
dts/tests/TestSuite_hello_world.py | 80 +++
18 files changed, 2432 insertions(+), 42 deletions(-)
create mode 100644 dts/framework/stats_reporter.py
create mode 100644 dts/framework/sut_node.py
create mode 100644 dts/framework/test_case.py
create mode 100644 dts/framework/test_result.py
create mode 100644 dts/framework/tg_node.py
create mode 100644 dts/test_plans/hello_world_test_plan.rst
create mode 100644 dts/tests/TestSuite_hello_world.py
--
2.30.2
* [RFC PATCH v1 01/10] dts: hello world config options
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 02/10] dts: hello world cli parameters and env vars Juraj Linkeš
` (9 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
There are two categories of new config options: DPDK build/config
options and testing options.
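To illustrate (a sketch; the values match the sample conf.yaml below, the
test case name is hypothetical):

    from framework.config import TargetDescription, TestSuiteConfig

    # a target description maps to one DPDK build target string
    target = TargetDescription.from_dict(
        {"cpu": "native", "compiler": "gcc", "arch": "x86_64", "os": "linux"}
    )
    str(target)  # 'x86_64-linux-native-gcc'

    # test suites may be listed as a bare name (run all test cases)
    # or as a mapping that selects specific test cases
    TestSuiteConfig.from_dict("hello_world")
    TestSuiteConfig.from_dict(
        {"suite": "hello_world", "cases": ["hello_world_single_core"]}
    )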
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 20 ++-
dts/framework/config/__init__.py | 141 ++++++++++++++++++++-
dts/framework/config/conf_yaml_schema.json | 139 +++++++++++++++++++-
3 files changed, 289 insertions(+), 11 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index cb12ea3d0f..36399c6e74 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -1,7 +1,21 @@
 executions:
-  - system_under_test: "SUT 1"
+  - target_descriptions:
+      - cpu: native
+        compiler: gcc
+        arch: x86_64
+        os: linux
+    perf: false
+    func: true
+    test_suites:
+      - hello_world
+    system_under_test: "SUT 1"
 nodes:
   - name: "SUT 1"
-    hostname: "SUT IP address or hostname"
+    hostname: sut1.change.me.localhost
+    os: linux
     user: root
-    password: "Leave blank to use SSH keys"
+    password: a
+    arch: x86_64
+    bypass_core0: true
+    cores: "1"
+    memory_channels: 4
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index a0fdffcd77..158baac143 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -6,11 +6,14 @@
"""
Generic port and topology nodes configuration file load function
"""
+import abc
import json
import os.path
import pathlib
+from abc import abstractmethod
from dataclasses import dataclass
-from typing import Any, Optional
+from enum import Enum, auto, unique
+from typing import Any, Optional, TypedDict, Union
import warlock
import yaml
@@ -18,6 +21,53 @@
from framework.settings import SETTINGS
+# Subclassing str as well makes enum members compare and format as plain
+# strings (e.g. OS.linux == "linux"), which the framework relies on.
+class StrEnum(str, Enum):
+    @staticmethod
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
+        return name
+
+    def __str__(self) -> str:
+        return self.value
+
+
+@unique
+class OS(StrEnum):
+ linux = auto()
+ freebsd = auto()
+ windows = auto()
+
+
+@unique
+class Architecture(StrEnum):
+ i686 = auto()
+ x86_64 = auto()
+ x86_32 = auto()
+ arm64 = auto()
+ ppc64le = auto()
+
+
+@unique
+class Compiler(StrEnum):
+ gcc = auto()
+ clang = auto()
+ icc = auto()
+ msvc = auto()
+
+
+@unique
+class CPU(StrEnum):
+ native = auto()
+ armv8a = auto()
+ dpaa2 = auto()
+ thunderx = auto()
+ xgene1 = auto()
+
+
+@unique
+class NodeType(StrEnum):
+ physical = auto()
+ virtual = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -28,7 +78,12 @@ class NodeConfiguration:
name: str
hostname: str
user: str
+ os: OS
+ arch: Architecture
password: Optional[str]
+ bypass_core0: bool
+ cores: str
+ memory_channels: int
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -36,20 +91,101 @@ def from_dict(d: dict) -> "NodeConfiguration":
name=d["name"],
hostname=d["hostname"],
user=d["user"],
+ os=OS(d["os"]),
+ arch=Architecture(d["arch"]),
password=d.get("password"),
+ bypass_core0=d.get("bypass_core0", False),
+ cores=d["cores"],
+ memory_channels=d["memory_channels"],
)
+@dataclass(slots=True, frozen=True)
+class TargetDescription:
+ cpu: CPU
+ compiler: Compiler
+ arch: Architecture
+ os: OS
+
+ @staticmethod
+ def from_dict(d: dict) -> "TargetDescription":
+ return TargetDescription(
+ cpu=CPU(d["cpu"]),
+ compiler=Compiler(d["compiler"]),
+ arch=Architecture(d["arch"]),
+ os=OS(d["os"]),
+ )
+
+ def __str__(self):
+ return f"{self.arch}-{self.os}-{self.cpu}-{self.compiler}"
+
+
+class TestSuiteConfigDict(TypedDict):
+ suite: str
+ cases: list[str]
+
+
+# https://github.com/python/mypy/issues/5374
+@dataclass(slots=True, frozen=True) # type: ignore
+class TestSuiteConfig(abc.ABC):
+ test_suite: str
+
+ @staticmethod
+ def from_dict(
+ entry: str | TestSuiteConfigDict,
+ ) -> Union["AllTestCasesTestSuiteConfig", "SelectedTestCasesTestSuiteConfig"]:
+ if isinstance(entry, str):
+ return AllTestCasesTestSuiteConfig(test_suite=entry)
+ elif isinstance(entry, dict):
+ return SelectedTestCasesTestSuiteConfig(
+ test_suite=entry["suite"], test_cases=entry["cases"]
+ )
+ else:
+ raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+ @abstractmethod
+ def get_requested_test_cases(self) -> Optional[list[str]]:
+ raise NotImplementedError()
+
+
+@dataclass(slots=True, frozen=True)
+class AllTestCasesTestSuiteConfig(TestSuiteConfig):
+ def get_requested_test_cases(self) -> Optional[list[str]]:
+ return None
+
+
+@dataclass(slots=True, frozen=True)
+class SelectedTestCasesTestSuiteConfig(TestSuiteConfig):
+ test_cases: list[str]
+
+ def get_requested_test_cases(self) -> Optional[list[str]]:
+ return self.test_cases
+
+
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
+ target_descriptions: list[TargetDescription]
+ perf: bool
+ func: bool
+ test_suites: list[TestSuiteConfig]
system_under_test: NodeConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ target_descriptions: list[TargetDescription] = list(
+ map(TargetDescription.from_dict, d["target_descriptions"])
+ )
+ test_suites: list[TestSuiteConfig] = list(
+ map(TestSuiteConfig.from_dict, d["test_suites"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
+ target_descriptions=target_descriptions,
+ perf=d["perf"],
+ func=d["func"],
+ test_suites=test_suites,
system_under_test=node_map[sut_name],
)
@@ -57,6 +193,7 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
@dataclass(slots=True, frozen=True)
class Configuration:
executions: list[ExecutionConfiguration]
+ nodes: list[NodeConfiguration]
@staticmethod
def from_dict(d: dict) -> "Configuration":
@@ -74,7 +211,7 @@ def from_dict(d: dict) -> "Configuration":
)
)
- return Configuration(executions=executions)
+ return Configuration(executions=executions, nodes=nodes)
def load_config() -> Configuration:
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 04b2bec3a5..d1cc990fd5 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,13 +6,88 @@
"type": "string",
"description": "A unique identifier for a node"
},
- "node_role": {
+ "OS": {
"type": "string",
- "description": "The role a node plays in DTS",
"enum": [
- "system_under_test",
- "traffic_generator"
+ "linux"
]
+ },
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64"
+ ]
+ },
+ "compiler": {
+ "type": "string",
+ "enum": [
+ "gcc",
+ "clang",
+ "icc",
+        "msvc"
+ ]
+ },
+ "cpu": {
+ "type": "string",
+ "description": "Native should be the default on x86",
+ "enum": [
+ "native",
+ "armv8a",
+ "dpaa2",
+ "thunderx",
+ "xgene1"
+ ]
+ },
+ "target": {
+ "type": "object",
+ "description": "Targets supported by DTS",
+ "properties": {
+ "arch": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "x86_64",
+ "arm64",
+ "ppc64le",
+ "other"
+ ]
+ },
+ "cpu": {
+ "$ref": "#/definitions/cpu"
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "compiler": {
+ "$ref": "#/definitions/compiler"
+ }
+ },
+ "additionalProperties": false
+ },
+ "test_suite": {
+ "type": "string",
+ "enum": [
+ "hello_world"
+ ]
+ },
+ "test_target": {
+ "type": "object",
+ "properties": {
+ "suite": {
+ "$ref": "#/definitions/test_suite"
+ },
+ "cases": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+          "minItems": 1
+ }
+ },
+ "required": [
+ "suite"
+ ],
+ "additionalProperties": false
}
},
"type": "object",
@@ -34,16 +109,36 @@
"type": "string",
"description": "The user to access this node with."
},
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "arch": {
+ "$ref": "#/definitions/ARCH"
+ },
"password": {
"type": "string",
"description": "The password to use on this node. SSH keys are preferred."
+ },
+ "bypass_core0": {
+ "type": "boolean",
+ "description": "Indicate whether DPDK should omit using the first core or not."
+ },
+ "cores": {
+ "type": "string",
+ "description": "Comma-separated list of cores to use, e.g.: 1,2,3,4,5,18-22"
+ },
+ "memory_channels": {
+ "type": "integer",
+ "description": "How many memory channels to use."
}
},
"additionalProperties": false,
"required": [
"name",
+ "os",
+ "user",
"hostname",
- "user"
+ "arch"
]
},
"minimum": 1
@@ -55,11 +150,43 @@
"properties": {
"system_under_test": {
"$ref": "#/definitions/node_name"
+ },
+ "target_descriptions": {
+ "type": "array",
+ "items": {
+ "$ref": "#/definitions/target"
+ },
+        "minItems": 1
+ },
+ "test_suites": {
+ "type": "array",
+ "items": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/test_suite"
+ },
+ {
+ "$ref": "#/definitions/test_target"
+ }
+ ]
+ }
+ },
+ "perf": {
+ "type": "boolean",
+ "description": "Enable performance testing"
+ },
+ "func": {
+ "type": "boolean",
+ "description": "Enable functional testing"
}
},
"additionalProperties": false,
"required": [
- "system_under_test"
+ "system_under_test",
+ "target_descriptions",
+ "perf",
+ "func",
+ "test_suites"
]
},
"minimum": 1
--
2.30.2
* [RFC PATCH v1 02/10] dts: hello world cli parameters and env vars
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 01/10] dts: hello world config options Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 03/10] dts: ssh connection additions for hello world Juraj Linkeš
` (8 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
Add command line arguments (and corresponding environment variables) that
specify the DPDK build and test execution workflow.
Also split the configuration into two parts: one that can be changed at
runtime and one that can't. We will need to change the git refspec to a
DPDK tarball path once support for building DPDK from the repo is added.
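For example, pointing DTS at a DPDK tarball can be done either way (the
entry point name in the comments is hypothetical):

    # equivalent invocations:
    #   ./dts --dpdk-ref /tmp/dpdk.tar.gz
    #   DTS_DPDK_REF=/tmp/dpdk.tar.gz ./dts
    from framework.settings import SETTINGS

    print(SETTINGS.dpdk_ref)  # '/tmp/dpdk.tar.gz' in both cases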
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/logger.py | 9 ++--
dts/framework/settings.py | 96 +++++++++++++++++++++++++++++++++++++--
2 files changed, 96 insertions(+), 9 deletions(-)
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index 920ce0fb15..15cae3e4f9 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -8,6 +8,8 @@
import os.path
from typing import TypedDict
+from .settings import SETTINGS
+
"""
DTS logger module with several log levels. DTS framework and TestSuite
logs will be saved into different log files.
@@ -66,10 +68,9 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
self.logger.addHandler(sh)
self.sh = sh
- if not os.path.exists("output"):
- os.mkdir("output")
+ logging_file_prefix = os.path.join(SETTINGS.output_dir, node)
- fh = logging.FileHandler(f"output/{node}.log")
+ fh = logging.FileHandler(f"{logging_file_prefix}.log")
fh.setFormatter(
logging.Formatter(
fmt="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
@@ -83,7 +84,7 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
# This outputs EVERYTHING, intended for post-mortem debugging
# Also optimized for processing via AWK (awk -F '|' ...)
- verbose_handler = logging.FileHandler(f"output/{node}.verbose.log")
+ verbose_handler = logging.FileHandler(f"{logging_file_prefix}.verbose.log")
verbose_handler.setFormatter(
logging.Formatter(
fmt="%(asctime)s|%(name)s|%(levelname)s|%(pathname)s|%(lineno)d|%(funcName)s|"
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index c9621d4e3d..1ff3af4438 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -7,6 +7,7 @@
import argparse
import os
from dataclasses import dataclass
+from enum import Enum, unique
from typing import Any
@@ -38,10 +39,40 @@ def wrapper(**kwargs) -> _EnvironmentArgument:
@dataclass(slots=True, frozen=True)
-class _Settings:
+class _EnvSettings:
config_file_path: str
+ compile_timeout: int
timeout: float
verbose: bool
+ output_dir: str
+ skip_setup: bool
+ test_cases: list
+ re_run: int
+ remote_dpdk_dir: str
+
+
+@dataclass(slots=True)
+class _RuntimeSettings:
+ dpdk_ref: str
+
+
+class _Settings(_EnvSettings, _RuntimeSettings):
+ pass
+
+
+@unique
+class DTSRuntimeErrors(Enum):
+ NO_ERR = 0
+ GENERIC_ERR = 1
+    DPDK_BUILD_ERR = 2
+    SUT_SETUP_ERR = 3
+    TG_SETUP_ERR = 4
+    SUITE_SETUP_ERR = 5
+ SUITE_EXECUTE_ERR = 6
+
+
+# TODO singleton
+DTSRuntimeError: DTSRuntimeErrors = DTSRuntimeErrors.NO_ERR
def _get_parser() -> argparse.ArgumentParser:
@@ -63,6 +94,14 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
)
+ parser.add_argument(
+ "--compile-timeout",
+ action=_env_arg("DTS_COMPILE_TIMEOUT"),
+ default=1200,
+ required=False,
+ help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
+ )
+
parser.add_argument(
"-v",
"--verbose",
@@ -72,15 +111,62 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages to the console.",
)
+ parser.add_argument(
+ "--dpdk-ref",
+ "--git",
+ "--snapshot",
+ action=_env_arg("DTS_DPDK_REF"),
+ default="dep/dpdk.tar.gz",
+ help="[DTS_DPDK_REF] Reference to DPDK source code, "
+ "can be either a path to a tarball or a git refspec. "
+ "In case of a tarball, it will be extracted in the same directory.",
+ )
+
+ parser.add_argument(
+ "--output-dir",
+ "--output",
+ action=_env_arg("DTS_OUTPUT_DIR"),
+ default="output",
+ help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
+ )
+
+ parser.add_argument(
+ "-s",
+ "--skip-setup",
+ action=_env_arg("DTS_SKIP_SETUP"),
+ help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+ )
+
+ parser.add_argument(
+ "--test-cases",
+ action=_env_arg("DTS_TESTCASES"),
+ help="[DTS_TESTCASES] Comma-separated list of testcases to execute",
+ )
+
+ parser.add_argument(
+ "--re-run",
+ "--re_run",
+ action=_env_arg("DTS_RERUN"),
+ default=0,
+ help="[DTS_RERUN] Re-run tests the specified amount of times if a test failure occurs",
+ )
+
return parser
def _get_settings() -> _Settings:
- args = _get_parser().parse_args()
+ parsed_args = _get_parser().parse_args()
return _Settings(
- config_file_path=args.config_file,
- timeout=float(args.timeout),
- verbose=(args.verbose == "Y"),
+ config_file_path=parsed_args.config_file,
+ compile_timeout=parsed_args.compile_timeout,
+ timeout=parsed_args.timeout,
+ verbose=(parsed_args.verbose == "Y"),
+ output_dir=parsed_args.output_dir,
+ skip_setup=(parsed_args.skip_setup == "Y"),
+        test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []),
+ re_run=int(parsed_args.re_run),
+ remote_dpdk_dir="/tmp/",
+ dpdk_ref=parsed_args.dpdk_ref,
)
--
2.30.2
* [RFC PATCH v1 03/10] dts: ssh connection additions for hello world
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 01/10] dts: hello world config options Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 02/10] dts: hello world cli parameters and env vars Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 04/10] dts: add basic node management methods Juraj Linkeš
` (7 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
SCP is needed to transfer DPDK tarballs between nodes.
Also add a keepalive method used by test cases.
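A rough usage sketch, assuming an established SSHConnection named conn:

    # the keepalive check echoes a magic string and tries to recover
    # the session with ^C once before giving up
    if not conn.check_available():
        raise RuntimeError("SSH session is dead")
    # SCP a local DPDK tarball to the remote node
    conn.copy_file_to("dep/dpdk.tar.gz", "/tmp/")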
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/ssh_connection.py | 19 ++++++++++
dts/framework/ssh_pexpect.py | 61 +++++++++++++++++++++++++++++++--
2 files changed, 78 insertions(+), 2 deletions(-)
diff --git a/dts/framework/ssh_connection.py b/dts/framework/ssh_connection.py
index bbf7c8ef01..ec7333e565 100644
--- a/dts/framework/ssh_connection.py
+++ b/dts/framework/ssh_connection.py
@@ -8,6 +8,7 @@
from typing import Any, Optional
from .logger import DTSLOG
+from .settings import SETTINGS
from .ssh_pexpect import SSHPexpect
@@ -68,3 +69,21 @@ def close(self, force: bool = False) -> None:
self.logger.logger_exit()
self.session.close(force)
+
+ def check_available(self) -> bool:
+ MAGIC_STR = "DTS_CHECK_SESSION"
+ out = self.session.send_command("echo %s" % MAGIC_STR, timeout=0.1)
+ # if not available, try to send ^C and check again
+ if MAGIC_STR not in out:
+ self.logger.info("Try to recover session...")
+ self.session.send_command("^C", timeout=SETTINGS.timeout)
+ out = self.session.send_command("echo %s" % MAGIC_STR, timeout=0.1)
+ if MAGIC_STR not in out:
+ return False
+
+ return True
+
+ def copy_file_to(
+ self, src: str, dst: str = "~/", password: str = "", node_session: Any = None
+ ) -> None:
+ self.session.copy_file_to(src, dst, password, node_session)
diff --git a/dts/framework/ssh_pexpect.py b/dts/framework/ssh_pexpect.py
index e8f64515c0..b8eb10025e 100644
--- a/dts/framework/ssh_pexpect.py
+++ b/dts/framework/ssh_pexpect.py
@@ -5,11 +5,16 @@
#
import time
-from typing import Optional
+from typing import Any, Optional
+import pexpect
from pexpect import pxssh
-from .exception import SSHConnectionException, SSHSessionDeadException, TimeoutException
+from .exception import (
+ SSHConnectionException,
+ SSHSessionDeadException,
+ TimeoutException,
+)
from .logger import DTSLOG
from .utils import GREEN, RED
@@ -203,3 +208,55 @@ def close(self, force: bool = False) -> None:
def isalive(self) -> bool:
return self.session.isalive()
+
+ def copy_file_to(
+ self, src: str, dst: str = "~/", password: str = "", node_session: Any = None
+ ) -> None:
+ """
+ Sends a local file to a remote place.
+ """
+ command: str
+ if ":" in self.node:
+ command = "scp -v -P {0} -o NoHostAuthenticationForLocalhost=yes {1} {2}@{3}:{4}".format(
+ str(self.port), src, self.username, self.ip, dst
+ )
+ else:
+ command = "scp -v {0} {1}@{2}:{3}".format(
+ src, self.username, self.node, dst
+ )
+ if password == "":
+ self._spawn_scp(command, self.password, node_session)
+ else:
+ self._spawn_scp(command, password, node_session)
+
+ def _spawn_scp(self, scp_cmd: str, password: str, node_session: Any) -> None:
+ """
+ Transfer a file with SCP
+ """
+ self.logger.info(scp_cmd)
+ # if node_session is not None, copy file from/to node env
+ # if node_session is None, copy file from/to current dts env
+ p: pexpect.spawn
+ if node_session is not None:
+ node_session.session.clean_session()
+ node_session.session.__sendline(scp_cmd)
+ p = node_session.session.session
+ else:
+ p = pexpect.spawn(scp_cmd)
+ time.sleep(0.5)
+ ssh_newkey: str = "Are you sure you want to continue connecting"
+ i: int = p.expect(
+ [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+ )
+ if i == 0: # add once in trust list
+ p.sendline("yes")
+ i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+ if i == 1:
+ time.sleep(0.5)
+ p.sendline(password)
+ p.expect("Exit status 0", 60)
+ if i == 4:
+ self.logger.error("SCP TIMEOUT error %d" % i)
+ if node_session is None:
+ p.close()
--
2.30.2
* [RFC PATCH v1 04/10] dts: add basic node management methods
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (2 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 03/10] dts: ssh connection additions for hello world Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 05/10] dts: add system under test node Juraj Linkeš
` (6 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
The nodes DTS is working with are either a system under test node (where
DPDK runs) or a traffic generator node.
The added methods are common to both system under test nodes and traffic
generator nodes.
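For instance, the core config strings accepted by get_core_list look like
this (illustrative calls on a Node subclass instance):

    node.init_core_list()  # scan cores via lscpu and cache them
    node.get_core_list("all")  # all threads on the node
    node.get_core_list("1S/2C/1T")  # 1 socket, 2 cores, 1 thread per core
    node.get_core_list("1S/4C/2T", socket=1)  # constrain to physical socket 1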
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/node.py | 395 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 387 insertions(+), 8 deletions(-)
diff --git a/dts/framework/node.py b/dts/framework/node.py
index e5c5454ebe..c08c79cca3 100644
--- a/dts/framework/node.py
+++ b/dts/framework/node.py
@@ -4,9 +4,13 @@
# Copyright(c) 2022 University of New Hampshire
#
+import dataclasses
+import re
+from abc import ABC
from typing import Optional
-from .config import NodeConfiguration
+from framework.config import OS, NodeConfiguration
+
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .ssh_connection import SSHConnection
@@ -16,22 +20,41 @@
"""
-class Node(object):
+@dataclasses.dataclass(slots=True, frozen=True)
+class CPUCore:
+ thread: str
+ socket: str
+ core: int
+
+
+class Node(ABC):
"""
Basic module for node management. This module implements methods that
manage a node, such as information gathering (of CPU/PCI/NIC) and
environment setup.
"""
- _config: NodeConfiguration
+ name: str
+ skip_setup: bool
+ sessions: list[SSHConnection]
+ default_hugepages_cleared: bool
+ prefix_list: list[str]
+ cores: list[CPUCore]
+ number_of_cores: int
logger: DTSLOG
main_session: SSHConnection
- name: str
+ _config: NodeConfiguration
_other_sessions: list[SSHConnection]
def __init__(self, node_config: NodeConfiguration):
self._config = node_config
self.name = node_config.name
+ self.skip_setup = SETTINGS.skip_setup
+ self.default_hugepages_cleared = False
+ self.prefix_list = []
+ self.cores = []
+ self.number_of_cores = 0
+ self._dpdk_dir = None
self.logger = getLogger(self.name)
self.logger.info(f"Created node: {self.name}")
@@ -42,22 +65,23 @@ def __init__(self, node_config: NodeConfiguration):
self.get_username(),
self.get_password(),
)
+ self._other_sessions = []
def get_ip_address(self) -> str:
"""
- Get SUT's ip address.
+ Get Node's ip address.
"""
return self._config.hostname
def get_password(self) -> Optional[str]:
"""
- Get SUT's login password.
+ Get Node's login password.
"""
return self._config.password
def get_username(self) -> str:
"""
- Get SUT's login username.
+ Get Node's login username.
"""
return self._config.user
@@ -66,6 +90,7 @@ def send_expect(
command: str,
expected: str,
timeout: float = SETTINGS.timeout,
+ alt_session: bool = False,
verify: bool = False,
trim_whitespace: bool = True,
) -> str | int:
@@ -81,19 +106,373 @@ def send_expect(
if trim_whitespace:
expected = expected.strip()
+ if alt_session and len(self._other_sessions):
+ return self._other_sessions[0].send_expect(
+ command, expected, timeout, verify
+ )
+
return self.main_session.send_expect(command, expected, timeout, verify)
- def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str:
+ def send_command(
+ self, cmds: str, timeout: float = SETTINGS.timeout, alt_session: bool = False
+ ) -> str:
"""
Send commands to node and return string before timeout.
"""
+ if alt_session and len(self._other_sessions):
+ return self._other_sessions[0].send_command(cmds, timeout)
+
return self.main_session.send_command(cmds, timeout)
+ def get_session_output(self, timeout: float = SETTINGS.timeout):
+ """
+ Get session output message before timeout
+ """
+ return self.main_session.get_session_before(timeout)
+
+ def get_total_huge_pages(self):
+ """
+ Get the huge page number of Node.
+ """
+ huge_pages = self.send_expect(
+ "awk '/HugePages_Total/ { print $2 }' /proc/meminfo", "# ", alt_session=True
+ )
+ if huge_pages != "":
+ return int(huge_pages.split()[0])
+ return 0
+
+ def mount_huge_pages(self):
+ """
+ Mount hugepage file system on Node.
+ """
+ self.send_expect("umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`", "# ")
+ out = self.send_expect("awk '/hugetlbfs/ { print $2 }' /proc/mounts", "# ")
+ # only mount hugepage when no hugetlbfs mounted
+ if not len(out):
+ self.send_expect("mkdir -p /mnt/huge", "# ")
+ self.send_expect("mount -t hugetlbfs nodev /mnt/huge", "# ")
+
+ def strip_hugepage_path(self):
+ mounts = self.send_expect("cat /proc/mounts |grep hugetlbfs", "# ")
+ infos = mounts.split()
+ if len(infos) >= 2:
+ return infos[1]
+ else:
+ return ""
+
+ def set_huge_pages(self, huge_pages, numa=""):
+ """
+ Set numbers of huge pages
+ """
+ page_size = self.send_expect(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo", "# "
+ )
+
+ if not numa:
+ self.send_expect(
+ "echo %d > /sys/kernel/mm/hugepages/hugepages-%skB/nr_hugepages"
+ % (huge_pages, page_size),
+ "# ",
+ 5,
+ )
+ else:
+ # sometimes we set hugepage on kernel cmdline, so we clear it
+ if not self.default_hugepages_cleared:
+ self.send_expect(
+ "echo 0 > /sys/kernel/mm/hugepages/hugepages-%skB/nr_hugepages"
+ % (page_size),
+ "# ",
+ 5,
+ )
+ self.default_hugepages_cleared = True
+
+            # some platforms don't support NUMA, e.g. a VM SUT
+ try:
+ self.send_expect(
+ "echo %d > /sys/devices/system/node/%s/hugepages/hugepages-%skB/nr_hugepages"
+ % (huge_pages, numa, page_size),
+ "# ",
+ 5,
+ )
+            except Exception:
+                self.logger.warning(
+                    "failed to set %d hugepages on %s" % (huge_pages, numa)
+                )
+                self.send_expect(
+                    "echo %d > /sys/kernel/mm/hugepages/hugepages-%skB/nr_hugepages"
+                    % (huge_pages, page_size),
+ "# ",
+ 5,
+ )
+
+ def get_dpdk_pids(self, prefix_list, alt_session):
+ """
+        Find and kill all DPDK application processes on the Node.
+ """
+ file_directories = [
+ "/var/run/dpdk/%s/config" % file_prefix for file_prefix in prefix_list
+ ]
+ pids = []
+ pid_reg = r"p(\d+)"
+ for config_file in file_directories:
+            # Covers the case where the process is run as an unprivileged user and does not generate the file
+ isfile = self.send_expect(
+ "ls -l {}".format(config_file), "# ", 20, alt_session
+ )
+ if isfile:
+ cmd = "lsof -Fp %s" % config_file
+ out = self.send_expect(cmd, "# ", 20, alt_session)
+ if len(out):
+ lines = out.split("\r\n")
+ for line in lines:
+ m = re.match(pid_reg, line)
+ if m:
+ pids.append(m.group(1))
+ for pid in pids:
+ self.send_expect("kill -9 %s" % pid, "# ", 20, alt_session)
+ self.get_session_output(timeout=2)
+
+ hugepage_info = [
+ "/var/run/dpdk/%s/hugepage_info" % file_prefix
+ for file_prefix in prefix_list
+ ]
+ for hugepage in hugepage_info:
+            # Covers the case where the process is run as an unprivileged user and does not generate the file
+ isfile = self.send_expect(
+ "ls -l {}".format(hugepage), "# ", 20, alt_session
+ )
+ if isfile:
+ cmd = "lsof -Fp %s" % hugepage
+ out = self.send_expect(cmd, "# ", 20, alt_session)
+ if len(out) and "No such file or directory" not in out:
+                    self.logger.warning("Some DPDK processes did not free their hugepages")
+ self.logger.warning("**************************************")
+ self.logger.warning(out)
+ self.logger.warning("**************************************")
+
+ # remove directory
+        directories = ["/var/run/dpdk/%s" % file_prefix for file_prefix in prefix_list]
+        for directory in directories:
+ cmd = "rm -rf %s" % directory
+ self.send_expect(cmd, "# ", 20, alt_session)
+
+ # delete hugepage on mnt path
+ if getattr(self, "hugepage_path", None):
+ for file_prefix in prefix_list:
+ cmd = "rm -rf %s/%s*" % (self.hugepage_path, file_prefix)
+ self.send_expect(cmd, "# ", 20, alt_session)
+
+ def kill_all(self, alt_session=True):
+ """
+ Kill all dpdk applications on Node.
+ """
+        if "Traffic" in str(self):
+            self.logger.info("kill_all: called by tg")
+ else:
+ if self.prefix_list:
+ self.logger.info("kill_all: called by SUT and prefix list has value.")
+ self.get_dpdk_pids(self.prefix_list, alt_session)
+ # init prefix_list
+ self.prefix_list = []
+ else:
+ self.logger.info("kill_all: called by SUT and has no prefix list.")
+ out = self.send_command(
+ "ls -l /var/run/dpdk |awk '/^d/ {print $NF}'",
+ timeout=0.5,
+ alt_session=True,
+ )
+            # the last entry is the expect string, e.g.: [PEXPECT]#
+ if out != "":
+ dir_list = out.split("\r\n")
+ self.get_dpdk_pids(dir_list[:-1], alt_session)
+
+ def get_os(self) -> OS:
+ return self._config.os
+
+ def init_core_list(self):
+ """
+ Load or create core information of Node.
+ """
+ if not self.cores or not self.number_of_cores:
+ self.init_core_list_uncached()
+
+ def init_core_list_uncached(self):
+ """
+ Scan cores on Node and create core information list.
+ """
+ init_core_list_uncached = getattr(
+ self, "init_core_list_uncached_%s" % self.get_os()
+ )
+ init_core_list_uncached()
+
+ def init_core_list_uncached_linux(self):
+ """
+ Scan cores in linux and create core information list.
+ """
+ self.cores = []
+
+ cpuinfo = self.send_expect(
+ "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \#", "#", alt_session=True
+ )
+
+ cpuinfo = [i for i in cpuinfo.split() if re.match("^\d.+", i)]
+        # core IDs are not correct for Haswell CPUs on Cottonwood;
+        # an additional coremap is needed for those
+ core_id = 0
+ coremap = {}
+ for line in cpuinfo:
+ (thread, core, socket, node) = line.split(",")[0:4]
+
+ if core not in list(coremap.keys()):
+ coremap[core] = core_id
+ core_id += 1
+
+ if self._config.bypass_core0 and core == "0" and socket == "0":
+ self.logger.info("Core0 bypassed")
+ continue
+            if self._config.arch in ("arm64", "ppc64le"):
+ self.cores.append(
+ CPUCore(thread=thread, socket=node, core=coremap[core])
+ )
+ else:
+ self.cores.append(
+ CPUCore(thread=thread, socket=socket, core=coremap[core])
+ )
+
+ self.number_of_cores = len(self.cores)
+
+ def get_core_list(self, config, socket=-1, from_last=False):
+ """
+ Get lcore array according to the core config like "all", "1S/1C/1T".
+ We can specify the physical CPU socket by the "socket" parameter.
+ """
+ if config == "all":
+ cores = []
+ if socket != -1:
+ for core in self.cores:
+ if int(core.socket) == socket:
+ cores.append(core.thread)
+ else:
+ cores = [core.thread for core in self.cores]
+ return cores
+
+ m = re.match("([1234])S/([0-9]+)C/([12])T", config)
+
+ if m:
+ nr_sockets = int(m.group(1))
+ nr_cores = int(m.group(2))
+ nr_threads = int(m.group(3))
+
+ partial_cores = self.cores
+
+            # If no socket is specified, sockList will be [0, 1] on a NUMA system;
+            # otherwise, just the specified socket is used
+ if socket < 0:
+ sockList = set([int(core.socket) for core in partial_cores])
+ else:
+ for n in partial_cores:
+ if int(n.socket) == socket:
+ sockList = [int(n.socket)]
+
+ if from_last:
+ sockList = list(sockList)[-nr_sockets:]
+ else:
+ sockList = list(sockList)[:nr_sockets]
+ partial_cores = [n for n in partial_cores if int(n.socket) in sockList]
+ thread_list = set([int(n.thread) for n in partial_cores])
+ thread_list = list(thread_list)
+
+ # filter usable core to core_list
+ temp = []
+ for sock in sockList:
+ core_list = set(
+ [int(n.core) for n in partial_cores if int(n.socket) == sock]
+ )
+ if from_last:
+ core_list = list(core_list)[-nr_cores:]
+ else:
+ core_list = list(core_list)[:nr_cores]
+ temp.extend(core_list)
+
+ core_list = temp
+
+            # if the system has fewer cores than requested, just use all cores in the socket
+ if len(core_list) < (nr_cores * nr_sockets):
+ partial_cores = self.cores
+ sockList = set([int(n.socket) for n in partial_cores])
+
+ if from_last:
+ sockList = list(sockList)[-nr_sockets:]
+ else:
+ sockList = list(sockList)[:nr_sockets]
+ partial_cores = [n for n in partial_cores if int(n.socket) in sockList]
+
+ temp = []
+ for sock in sockList:
+ core_list = list(
+ [int(n.thread) for n in partial_cores if int(n.socket) == sock]
+ )
+ if from_last:
+ core_list = core_list[-nr_cores:]
+ else:
+ core_list = core_list[:nr_cores]
+ temp.extend(core_list)
+
+ core_list = temp
+
+ partial_cores = [n for n in partial_cores if int(n.core) in core_list]
+ temp = []
+ if len(core_list) < nr_cores:
+ raise ValueError(
+                    "Cannot get requested core configuration: "
+                    "requested {}, have {}".format(config, self.cores)
+                )
+            if len(sockList) < nr_sockets:
+                raise ValueError(
+                    "Cannot get requested core configuration: "
+                    "requested {}, have {}".format(config, self.cores)
+ )
+ # recheck the core_list and create the thread_list
+ i = 0
+ for sock in sockList:
+ coreList_aux = [
+ int(core_list[n])
+ for n in range((nr_cores * i), (nr_cores * i + nr_cores))
+ ]
+ for core in coreList_aux:
+ thread_list = list(
+ [
+ int(n.thread)
+ for n in partial_cores
+ if ((int(n.core) == core) and (int(n.socket) == sock))
+ ]
+ )
+ if from_last:
+ thread_list = thread_list[-nr_threads:]
+ else:
+ thread_list = thread_list[:nr_threads]
+ temp.extend(thread_list)
+ thread_list = temp
+ i += 1
+ return list(map(str, thread_list))
+
+ def create_session(self, name: str) -> SSHConnection:
+ connection = SSHConnection(
+ self.get_ip_address(),
+ name,
+ getLogger(name, node=self.name),
+ self.get_username(),
+ self.get_password(),
+ )
+ self._other_sessions.append(connection)
+ return connection
+
def node_exit(self) -> None:
"""
Recover all resource before node exit
"""
if self.main_session:
self.main_session.close()
+ for session in self._other_sessions:
+ session.close()
self.logger.logger_exit()
--
2.30.2
* [RFC PATCH v1 05/10] dts: add system under test node
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (3 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 04/10] dts: add basic node management methods Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 06/10] dts: add traffic generator node Juraj Linkeš
` (5 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
The SUT node contains methods to configure the node and build and
configure DPDK.
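As an example, the EAL parameter generation added here produces strings
such as (illustrative output; the file-prefix suffix is pid + timestamp):

    eal_str = sut_node.create_eal_parameters(cores=[0, 1, 2, 3], no_pci=True)
    # e.g. '-l 0-3 -n 4 --file-prefix=dpdk_1234_20220824162400 --no-pci '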
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/sut_node.py | 603 ++++++++++++++++++++++++++++++++++++++
1 file changed, 603 insertions(+)
create mode 100644 dts/framework/sut_node.py
diff --git a/dts/framework/sut_node.py b/dts/framework/sut_node.py
new file mode 100644
index 0000000000..c9f5e69d73
--- /dev/null
+++ b/dts/framework/sut_node.py
@@ -0,0 +1,603 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+#
+
+import os
+import re
+import tarfile
+import time
+from typing import List, Optional, Union
+
+from framework.config import NodeConfiguration
+
+from .exception import ParameterInvalidException
+from .node import Node
+from .settings import SETTINGS
+
+
+class SutNode(Node):
+ """
+ A class for managing connections to the System under test, providing
+ methods that retrieve the necessary information about the node (such as
+ cpu, memory and NIC details) and configuration capabilities.
+ """
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(SutNode, self).__init__(node_config)
+ self.tg_node = None
+ self.architecture = node_config.arch
+ self.prefix_subfix = (
+ str(os.getpid()) + "_" + time.strftime("%Y%m%d%H%M%S", time.localtime())
+ )
+ self.hugepage_path = None
+ self.dpdk_version = ""
+ self.testpmd = None
+
+ def prerequisites(self):
+ """
+ Copy DPDK package to SUT and apply patch files.
+ """
+ self.prepare_package()
+ self.sut_prerequisites()
+
+ def prepare_package(self):
+ if not self.skip_setup:
+            assert os.path.isfile(SETTINGS.dpdk_ref), "Invalid package"
+
+ out = self.send_expect(
+ "ls -d %s" % SETTINGS.remote_dpdk_dir, "# ", verify=True
+ )
+ if out == 2:
+ self.send_expect("mkdir -p %s" % SETTINGS.remote_dpdk_dir, "# ")
+
+ out = self.send_expect(
+ "ls %s && cd %s" % (SETTINGS.remote_dpdk_dir, SETTINGS.remote_dpdk_dir),
+ "#",
+ verify=True,
+ )
+ if out == -1:
+ raise IOError(
+ f"A failure occurred when creating {SETTINGS.remote_dpdk_dir} on "
+ f"{self}."
+ )
+ self.main_session.copy_file_to(SETTINGS.dpdk_ref, SETTINGS.remote_dpdk_dir)
+ self.kill_all()
+
+ # enable core dump
+ self.send_expect("ulimit -c unlimited", "#")
+
+ with tarfile.open(SETTINGS.dpdk_ref) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+
+ remote_dpdk_top_dir = os.path.join(SETTINGS.remote_dpdk_dir, dpdk_top_dir)
+
+ # unpack the code and change to the working folder
+ self.send_expect("rm -rf %s" % remote_dpdk_top_dir, "#")
+
+ remote_dpdk_path = os.path.join(
+ SETTINGS.remote_dpdk_dir, os.path.basename(SETTINGS.dpdk_ref)
+ )
+
+ # unpack dpdk
+ out = self.send_expect(
+ f"tar xfm {remote_dpdk_path} -C {SETTINGS.remote_dpdk_dir}",
+ "# ",
+ 60,
+ verify=True,
+ )
+ if out == -1:
+ raise IOError(
+ f"Extracting remote DPDK package {remote_dpdk_path} to "
+ f"{SETTINGS.remote_dpdk_dir} failed."
+ )
+
+        # check that the dpdk dir name is as expected
+ out = self.send_expect("ls %s" % remote_dpdk_top_dir, "# ", 20, verify=True)
+ if out == -1:
+ raise FileNotFoundError(
+ f"Remote DPDK dir {remote_dpdk_top_dir} not found."
+ )
+
+ def set_target(self, target):
+ """
+        Set the build target and environment variables, build and install
+        DPDK (unless setup is skipped) and set up hugepages on the SUT.
+        The env variables have to be set every time, since some tests
+        compile example apps by themselves and would fail otherwise.
+ """
+ self.target = target
+
+ self.set_toolchain(target)
+
+ # set env variable
+ self.set_env_variable()
+
+ if not self.skip_setup:
+ self.build_install_dpdk(target)
+
+ self.setup_memory()
+
+ def set_env_variable(self):
+ # These have to be setup all the time. Some tests need to compile
+ # example apps by themselves and will fail otherwise.
+ self.send_expect("export RTE_TARGET=" + self.target, "#")
+ self.send_expect("export RTE_SDK=`pwd`", "#")
+
+ def build_install_dpdk(self, target):
+ """
+ Build DPDK source code with specified target.
+ """
+ if self.get_os() == "linux":
+ self.build_install_dpdk_linux_meson(target)
+
+ def build_install_dpdk_linux_meson(self, target):
+ """
+        Build DPDK source code on Linux using Meson.
+ """
+ build_time = 1800
+ target_info = target.split("-")
+ arch = target_info[0]
+ toolchain = target_info[3]
+
+ default_library = "static"
+ if arch == "i686":
+            # find the pkg-config path and set the PKG_CONFIG_LIBDIR environment variable to point to it
+ out = self.send_expect("find /usr -type d -name pkgconfig", "# ")
+ pkg_path = ""
+ res_path = out.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert (
+ pkg_path != ""
+            ), "please make sure your environment has the i386 pkg-config path"
+
+ self.send_expect("export CFLAGS=-m32", "# ")
+ self.send_expect("export PKG_CONFIG_LIBDIR=%s" % pkg_path, "# ")
+
+ self.send_expect("rm -rf " + target, "#")
+ out = self.send_expect(
+ "CC=%s meson -Denable_kmods=True -Dlibdir=lib --default-library=%s %s"
+ % (toolchain, default_library, target),
+ "[~|~\]]# ",
+ build_time,
+ )
+ assert "FAILED" not in out, "meson setup failed ..."
+
+ out = self.send_expect("ninja -C %s" % target, "[~|~\]]# ", build_time)
+        assert "FAILED" not in out, "ninja compile failed ..."
+
+ # copy kmod file to the folder same as make
+ out = self.send_expect(
+ "find ./%s/kernel/ -name *.ko" % target, "# ", verify=True
+ )
+ self.send_expect("mkdir -p %s/kmod" % target, "# ")
+ if not isinstance(out, int) and len(out) > 0:
+ kmod = out.split("\r\n")
+ for mod in kmod:
+ self.send_expect("cp %s %s/kmod/" % (mod, target), "# ")
+
+ def build_dpdk_apps(self, folder):
+ """
+        Build DPDK sample applications.
+ """
+ if self.get_os() == "linux":
+ return self.build_dpdk_apps_linux_meson(folder)
+
+ def build_dpdk_apps_linux_meson(self, folder):
+ """
+        Build DPDK sample applications on Linux using Meson.
+ """
+        # icc compilation needs more time
+ if "icc" in self.target:
+ timeout = 300
+ else:
+ timeout = 90
+
+ target_info = self.target.split("-")
+ arch = target_info[0]
+ if arch == "i686":
+            # find the pkg-config path and set the PKG_CONFIG_LIBDIR environment variable to point to it
+ out = self.send_expect("find /usr -type d -name pkgconfig", "# ")
+ pkg_path = ""
+ res_path = out.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert (
+ pkg_path != ""
+            ), "please make sure your environment has the i386 pkg-config path"
+
+ self.send_expect("export CFLAGS=-m32", "# ", alt_session=True)
+ self.send_expect(
+ "export PKG_CONFIG_LIBDIR=%s" % pkg_path, "# ", alt_session=True
+ )
+
+ folder_info = folder.split("/")
+ name = folder_info[-1]
+
+ if name == "examples":
+ example = "all"
+ else:
+ example = "/".join(folder_info[folder_info.index("examples") + 1 :])
+ out = self.send_expect(
+ "meson configure -Dexamples=%s %s" % (example, self.target), "# "
+ )
+ assert "FAILED" not in out, "Compilation error..."
+ out = self.send_expect("ninja -C %s" % self.target, "[~|~\]]# ", timeout)
+ assert "FAILED" not in out, "Compilation error..."
+
+ # verify the app build in the config path
+ if example != "all":
+ out = self.send_expect("ls %s" % self.apps_name[name], "# ", verify=True)
+ assert isinstance(out, str), (
+ "please confirm %s app path and name in app_name.cfg" % name
+ )
+
+ return out
+
+ def filter_cores_from_node_cfg(self) -> None:
+ # get core list from conf.yaml
+ core_list = []
+ all_core_list = [str(core.core) for core in self.cores]
+ core_list_str = self._config.cores
+        # an empty cores config means use all cores
+        if core_list_str == "":
+            core_list = all_core_list
+        else:
+            split_by_comma = core_list_str.split(",")
+            range_cores = []
+            for item in split_by_comma:
+                if "-" in item:
+                    tmp = item.split("-")
+                    range_cores.extend(
+                        [str(i) for i in range(int(tmp[0]), int(tmp[1]) + 1)]
+                    )
+                else:
+                    core_list.append(item)
+            core_list.extend(range_cores)
+
+ abnormal_core_list = []
+ for core in core_list:
+ if core not in all_core_list:
+ abnormal_core_list.append(core)
+
+ if abnormal_core_list:
+            self.logger.info(
+                "cores %s are outside the system's range; system cores: %s"
+                % (abnormal_core_list, all_core_list)
+            )
+            raise Exception("configured cores are outside the system's range")
+
+ core_list = [core for core in self.cores if str(core.core) in core_list]
+ self.cores = core_list
+ self.number_of_cores = len(self.cores)
+
+ def create_eal_parameters(
+ self,
+ fixed_prefix: bool = False,
+ socket: Optional[int] = None,
+ cores: Union[str, List[int], List[str]] = "default",
+ prefix: str = "",
+ no_pci: bool = False,
+ vdevs: List[str] = None,
+ other_eal_param: str = "",
+ ) -> str:
+ """
+        Generate an EAL parameters string.
+        :param fixed_prefix: whether to use a fixed file-prefix; when true,
+            a timestamp will not be appended to the file-prefix
+        :param socket: the physical CPU socket index; -1 means any socket;
+ :param cores: set the core info, eg:
+ cores=[0,1,2,3],
+ cores=['0', '1', '2', '3'],
+ cores='default',
+ cores='1S/4C/1T',
+ cores='all';
+ :param prefix: set file prefix string, eg:
+ prefix='vf';
+        :param no_pci: switch to disable the PCI bus, eg:
+ no_pci=True;
+ :param vdevs: virtual device list, eg:
+ vdevs=['net_ring0', 'net_ring1'];
+ :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments';
+ :return: eal param string, eg:
+ '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+ if DPDK version < 20.11-rc4, eal_str eg:
+ '-c 0xf -w 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+ """
+ if vdevs is None:
+ vdevs = []
+
+ if socket is None:
+ socket = -1
+
+ config = {
+ "cores": cores,
+ "prefix": prefix,
+ "no_pci": no_pci,
+ "vdevs": vdevs,
+ "other_eal_param": other_eal_param,
+ }
+
+ eal_parameter_creator = _EalParameter(
+ sut_node=self, fixed_prefix=fixed_prefix, socket=socket, **config
+ )
+ eal_str = eal_parameter_creator.make_eal_param()
+
+ return eal_str
+
+ def set_toolchain(self, target):
+ """
+ This looks at the current target and instantiates an attribute to
+ be either a NodeLinuxApp or NodeBareMetal object. These latter two
+ classes are private and should not be used directly by client code.
+ """
+ self.kill_all()
+ self.target = target
+ [arch, _, _, toolchain] = target.split("-")
+
+ if toolchain == "icc":
+ icc_vars = os.getenv("ICC_VARS", "/opt/intel/composer_xe_2013/bin/")
+ icc_vars += "compilervars.sh"
+
+ if arch == "x86_64":
+ icc_arch = "intel64"
+ elif arch == "i686":
+ icc_arch = "ia32"
+ self.send_expect("source " + icc_vars + " " + icc_arch, "# ")
+
+ self.architecture = arch
+
+ def sut_prerequisites(self):
+ """
+        Prerequisite function to be called before executing any test case.
+        Scans all lcore information on the SUT, records the DPDK version
+        and sets up the SUT environment for validation.
+ """
+ out = self.send_expect(f"cd {SETTINGS.remote_dpdk_dir}", "# ")
+ assert "No such file or directory" not in out, "Can't switch to dpdk folder!!!"
+ out = self.send_expect("cat VERSION", "# ")
+ if "No such file or directory" in out:
+            self.logger.error("Can't get DPDK version: the VERSION file doesn't exist!")
+ else:
+ self.dpdk_version = out
+ self.send_expect("alias ls='ls --color=none'", "#")
+
+ self.init_core_list()
+ self.filter_cores_from_node_cfg()
+
+ def setup_memory(self, hugepages=-1):
+ """
+ Setup hugepage on SUT.
+ """
+ function_name = "setup_memory_%s" % self.get_os()
+ try:
+ setup_memory = getattr(self, function_name)
+ setup_memory(hugepages)
+ except AttributeError:
+ self.logger.error("%s is not implemented" % function_name)
+
+ def setup_memory_linux(self, hugepages=-1):
+ """
+ Setup Linux hugepages.
+ """
+ hugepages_size = self.send_expect(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo", "# "
+ )
+ total_huge_pages = self.get_total_huge_pages()
+ numa_nodes = self.send_expect("ls /sys/devices/system/node | grep node*", "# ")
+ if not numa_nodes:
+ total_numa_nodes = -1
+ else:
+ numa_nodes = numa_nodes.splitlines()
+ total_numa_nodes = len(numa_nodes)
+ self.logger.info(numa_nodes)
+
+ force_socket = False
+
+ if int(hugepages_size) < (1024 * 1024):
+ if hugepages <= 0:
+ if self.architecture == "x86_64":
+ arch_huge_pages = 4096
+ elif self.architecture == "i686":
+ arch_huge_pages = 512
+ force_socket = True
+ # set huge pagesize for x86_x32 abi target
+ elif self.architecture == "x86_x32":
+ arch_huge_pages = 256
+ force_socket = True
+ elif self.architecture == "ppc_64":
+ arch_huge_pages = 512
+ elif self.architecture == "arm64":
+ if int(hugepages_size) >= (512 * 1024):
+ arch_huge_pages = 8
+ else:
+ arch_huge_pages = 2048
+ else:
+ arch_huge_pages = 256
+ else:
+ arch_huge_pages = hugepages
+
+ if total_huge_pages != arch_huge_pages:
+ if total_numa_nodes == -1:
+ self.set_huge_pages(arch_huge_pages)
+ else:
+                # previously hugepages were distributed evenly among all
+                # sockets, but creating an mbuf pool on socket 0 sometimes
+                # failed when setting up testpmd, so set all hugepages on
+                # the first socket instead
+ if force_socket:
+ self.set_huge_pages(arch_huge_pages, numa_nodes[0])
+ self.logger.info("force_socket on %s" % numa_nodes[0])
+ else:
+ # set huge pages to all numa_nodes
+ for numa_node in numa_nodes:
+ self.set_huge_pages(arch_huge_pages, numa_node)
+
+ self.mount_huge_pages()
+ self.hugepage_path = self.strip_hugepage_path()
+
+ def get_memory_channels(self):
+ n = self._config.memory_channels
+ if n is not None and n > 0:
+ return n
+ else:
+ return 1
+
+
+class _EalParameter(object):
+ def __init__(
+ self,
+ sut_node: SutNode,
+ fixed_prefix: bool,
+ socket: int,
+ cores: Union[str, List[int], List[str]],
+ prefix: str,
+ no_pci: bool,
+ vdevs: List[str],
+ other_eal_param: str,
+ ):
+ """
+        Generate an EAL parameters string.
+        :param sut_node: SUT Node;
+        :param fixed_prefix: whether to use a fixed file-prefix; when true,
+            a timestamp will not be appended to the file-prefix
+        :param socket: the physical CPU socket index; -1 means any socket;
+ :param cores: set the core info, eg:
+ cores=[0,1,2,3],
+ cores=['0','1','2','3'],
+ cores='default',
+ cores='1S/4C/1T',
+ cores='all';
+        :param prefix: set file prefix string, eg:
+            prefix='vf';
+        :param no_pci: switch to disable the PCI bus, eg:
+            no_pci=True;
+        :param vdevs: virtual device list, eg:
+            vdevs=['net_ring0', 'net_ring1'];
+        :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments';
+ """
+ self.os = sut_node.get_os()
+ self.fixed_prefix = fixed_prefix
+ self.socket = socket
+ self.sut_node = sut_node
+ self.cores = self._validate_cores(cores)
+ self.prefix = prefix
+ self.no_pci = no_pci
+ self.vdevs = vdevs
+ self.other_eal_param = other_eal_param
+
+ _param_validate_exception_info_template = (
+        "Invalid parameter %s with value %s. Please consult the API doc."
+ )
+
+ @staticmethod
+ def _validate_cores(cores: Union[str, List[int], List[str]]):
+ core_string_match = r"default|all|\d+S/\d+C/\d+T|$"
+ if isinstance(cores, list) and (
+ all(map(lambda _core: type(_core) == int, cores))
+ or all(map(lambda _core: type(_core) == str, cores))
+ ):
+ return cores
+ elif type(cores) == str and re.match(core_string_match, cores, re.I):
+ return cores
+ else:
+ raise ParameterInvalidException("cores", cores)
+
+ def _make_cores_param(self) -> str:
+ is_use_default_cores = (
+ self.cores == ""
+ or isinstance(self.cores, str)
+ and self.cores.lower() == "default"
+ )
+ if is_use_default_cores:
+ default_cores = "1S/2C/1T"
+ core_list = self.sut_node.get_core_list(default_cores)
+ else:
+ core_list = self._get_cores()
+
+        def _get_consecutive_cores_range(_cores: List[int]):
+            # group the sorted core IDs into consecutive ranges,
+            # e.g. [0, 1, 2, 4] -> ['0-2', '4']
+            _formatted_core_list = []
+            _tmp_cores_list = list(sorted(map(int, _cores)))
+            _segment = _tmp_cores_list[:1]
+            for _core_num in _tmp_cores_list[1:]:
+                if _core_num - _segment[-1] == 1:
+                    _segment.append(_core_num)
+                else:
+                    _formatted_core_list.append(
+                        f"{_segment[0]}-{_segment[-1]}"
+                        if len(_segment) > 1
+                        else f"{_segment[0]}"
+                    )
+                    _index = _tmp_cores_list.index(_core_num)
+                    _formatted_core_list.extend(
+                        _get_consecutive_cores_range(_tmp_cores_list[_index:])
+                    )
+                    _segment.clear()
+                    break
+            if len(_segment) > 0:
+                _formatted_core_list.append(
+                    f"{_segment[0]}-{_segment[-1]}"
+                    if len(_segment) > 1
+                    else f"{_segment[0]}"
+                )
+            return _formatted_core_list
+
+ return f'-l {",".join(_get_consecutive_cores_range(core_list))}'
+
+ def _make_memory_channels(self) -> str:
+ param_template = "-n {}"
+ return param_template.format(self.sut_node.get_memory_channels())
+
+ def _make_no_pci_param(self) -> str:
+ if self.no_pci is True:
+ return "--no-pci"
+ else:
+ return ""
+
+ def _make_prefix_param(self) -> str:
+ if self.prefix == "":
+ fixed_file_prefix = "dpdk" + "_" + self.sut_node.prefix_subfix
+ else:
+ fixed_file_prefix = self.prefix
+ if not self.fixed_prefix:
+ fixed_file_prefix = (
+ fixed_file_prefix + "_" + self.sut_node.prefix_subfix
+ )
+ fixed_file_prefix = self._do_os_handle_with_prefix_param(fixed_file_prefix)
+ return fixed_file_prefix
+
+ def _make_vdevs_param(self) -> str:
+ if len(self.vdevs) == 0:
+ return ""
+ else:
+ _vdevs = ["--vdev " + vdev for vdev in self.vdevs]
+ return " ".join(_vdevs)
+
+ def _get_cores(self) -> List[int]:
+ if type(self.cores) == list:
+ return self.cores
+ elif isinstance(self.cores, str):
+ return self.sut_node.get_core_list(self.cores, socket=self.socket)
+
+ def _do_os_handle_with_prefix_param(self, file_prefix: str) -> str:
+ self.sut_node.prefix_list.append(file_prefix)
+ return "--file-prefix=" + file_prefix
+
+ def make_eal_param(self) -> str:
+ _eal_str = " ".join(
+ [
+ self._make_cores_param(),
+ self._make_memory_channels(),
+ self._make_prefix_param(),
+ self._make_no_pci_param(),
+ self._make_vdevs_param(),
+ # append user defined eal parameters
+ self.other_eal_param,
+ ]
+ )
+ return _eal_str
--
2.30.2
* [RFC PATCH v1 06/10] dts: add traffic generator node
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (4 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 05/10] dts: add system under test node Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 07/10] dts: add testcase and basic test results Juraj Linkeš
` (4 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
The Traffic Generator node is responsible for configuring and running
traffic generators. For HelloWorld, we don't need any traffic, so this
is just a barebones implementation demonstrating the two nodes in use
in DTS.
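A rough usage sketch (tg_config being the TG's NodeConfiguration):

    tg_node = TrafficGeneratorNode(tg_config)
    tg_node.prerequisites()  # hugepages, modprobe uio, disable LLDP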
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/tg_node.py | 78 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 78 insertions(+)
create mode 100644 dts/framework/tg_node.py
diff --git a/dts/framework/tg_node.py b/dts/framework/tg_node.py
new file mode 100644
index 0000000000..109019e740
--- /dev/null
+++ b/dts/framework/tg_node.py
@@ -0,0 +1,78 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+#
+
+"""
+Interface for bulk traffic generators.
+"""
+
+from framework.config import NodeConfiguration
+
+from .node import Node
+
+
+class TrafficGeneratorNode(Node):
+ """
+ A class for managing connections to the node running the Traffic generator,
+ providing methods that retrieve the necessary information about the node
+ (such as cpu, memory and NIC details), configure it and configure and
+ manage the Traffic generator.
+ """
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(TrafficGeneratorNode, self).__init__(node_config)
+ self.sut_nodes = []
+ self.re_run_time = 0
+
+ def prerequisites(self):
+ """
+ Setup hugepages and kernel modules on TG.
+ """
+ self.kill_all()
+
+ if not self.skip_setup:
+ total_huge_pages = self.get_total_huge_pages()
+ hugepages_size = self.send_expect(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo", "# "
+ )
+ if total_huge_pages == 0:
+ self.mount_huge_pages()
+ if hugepages_size == "524288":
+ self.set_huge_pages(8)
+ else:
+ self.set_huge_pages(1024)
+
+ self.send_expect("modprobe uio", "# ")
+
+ self.tg_prerequisites()
+
+ def tg_prerequisites(self):
+ """
+ Prerequisite function, called before executing any test case.
+ Scans all lcore information on the TG and disables LLDP on its ports.
+ """
+
+ self.init_core_list()
+
+ self.disable_lldp()
+
+ def set_re_run(self, re_run_time):
+ """
+ Set the number of times failed cases are re-run.
+ """
+ self.re_run_time = int(re_run_time)
+
+ def disable_lldp(self):
+ """
+ Disable TG ports LLDP.
+ """
+ result = self.send_expect("lldpad -d", "# ")
+ if result:
+ self.logger.error(result.strip())
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v1 07/10] dts: add testcase and basic test results
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (5 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 06/10] dts: add traffic generator node Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 08/10] dts: add test runner and statistics collector Juraj Linkeš
` (3 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
TestCase implements methods for setting up and tearing down testcases
and basic workflow methods.
Result stores information about the testbed and the results of testcases
that ran on the testbed.
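To illustrate the intended workflow, a minimal suite built on the TestCase
class added below might look like this (the command and its expected output
are hypothetical; sut_node and send_expect come from the framework classes):

    # Minimal sketch of a suite; the runner discovers TestCase subclasses and
    # drives set_up_all/set_up/test_*/tear_down/tear_down_all around each case.
    from framework.test_case import TestCase

    class TestEcho(TestCase):
        def test_echo(self):
            out = self.sut_node.send_expect("echo ping", "# ")
            # verify() raises VerifyFailure, recorded by the runner as FAILED
            self.verify("ping" in out, "echo did not produce the expected output")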
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 15 ++
dts/framework/test_case.py | 274 +++++++++++++++++++++++++++++++++++
dts/framework/test_result.py | 218 ++++++++++++++++++++++++++++
dts/framework/utils.py | 14 ++
4 files changed, 521 insertions(+)
create mode 100644 dts/framework/test_case.py
create mode 100644 dts/framework/test_result.py
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8466990aa5..6a0d133c65 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -28,6 +28,21 @@ def get_output(self) -> str:
return self.output
+class VerifyFailure(Exception):
+ """
+ Raised within test cases when a verification of command output
+ or another condition fails.
+ """
+
+ value: str
+
+ def __init__(self, value: str):
+ self.value = value
+
+ def __str__(self):
+ return repr(self.value)
+
+
class SSHConnectionException(Exception):
"""
SSH connection error.
diff --git a/dts/framework/test_case.py b/dts/framework/test_case.py
new file mode 100644
index 0000000000..301711f656
--- /dev/null
+++ b/dts/framework/test_case.py
@@ -0,0 +1,274 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+#
+
+"""
+A base class for creating DTS test cases.
+"""
+
+import re
+import time
+import traceback
+
+from .exception import TimeoutException, VerifyFailure
+from .logger import getLogger
+from .test_result import Result
+
+
+class TestCase(object):
+ def __init__(self, sut_nodes, tg_node, suitename, target, func):
+ self.sut_node = sut_nodes[0]
+ self.sut_nodes = sut_nodes
+ self.tg_node = tg_node
+ self.suite_name = suitename
+ self.target = target
+
+ # local variable
+ self._requested_tests = None
+ self._subtitle = None
+
+ # check session and reconnect if possible
+ for sut_node in self.sut_nodes:
+ self._check_and_reconnect(node=sut_node)
+ self._check_and_reconnect(node=self.tg_node)
+
+ # result object for save suite result
+ self._suite_result = Result()
+ self._suite_result.sut = self.sut_node.node["IP"]
+ self._suite_result.target = target
+ self._suite_result.test_suite = self.suite_name
+ if self._suite_result is None:
+ raise ValueError("Result object should not None")
+
+ self._enable_func = func
+
+ # command history
+ self.setup_history = list()
+ self.test_history = list()
+
+ def init_log(self):
+ # get log handler
+ class_name = self.__class__.__name__
+ self.logger = getLogger(class_name)
+
+ def _check_and_reconnect(self, node=None):
+ try:
+ result = node.session.check_available()
+ except Exception:
+ result = False
+
+ if result is False:
+ node.reconnect_session()
+ if "sut" in str(type(node)):
+ node.send_expect("cd %s" % node.base_dir, "#")
+ node.set_env_variable()
+
+ try:
+ result = node.alt_session.check_available()
+ except Exception:
+ result = False
+
+ if result is False:
+ node.reconnect_session(alt_session=True)
+
+ def set_up_all(self):
+ pass
+
+ def set_up(self):
+ pass
+
+ def tear_down(self):
+ pass
+
+ def tear_down_all(self):
+ pass
+
+ def verify(self, passed, description):
+ if not passed:
+ raise VerifyFailure(description)
+
+ def _get_functional_cases(self):
+ """
+ Get all functional test cases.
+ """
+ return self._get_test_cases(r"test_(?!perf_)")
+
+ def _has_it_been_requested(self, test_case, test_name_regex):
+ """
+ Check whether test case has been requested for validation.
+ """
+ name_matches = re.match(test_name_regex, test_case.__name__)
+
+ if self._requested_tests is not None:
+ return name_matches and test_case.__name__ in self._requested_tests
+
+ return name_matches
+
+ def set_requested_cases(self, case_list):
+ """
+ Pass down the requested case list to check against.
+ """
+ if self._requested_tests is None:
+ self._requested_tests = case_list
+ elif case_list is not None:
+ self._requested_tests += case_list
+
+ def _get_test_cases(self, test_name_regex):
+ """
+ Yield the test case methods whose names match the regex.
+ """
+ for test_case_name in dir(self):
+ test_case = getattr(self, test_case_name)
+ if callable(test_case) and self._has_it_been_requested(
+ test_case, test_name_regex
+ ):
+ yield test_case
+
+ def execute_setup_all(self):
+ """
+ Execute suite setup_all function before cases.
+ """
+ # clear all previous output
+ for sut_node in self.sut_nodes:
+ sut_node.get_session_output(timeout=0.1)
+ self.tg_node.get_session_output(timeout=0.1)
+
+ # save into setup history list
+ self.enable_history(self.setup_history)
+
+ try:
+ self.set_up_all()
+ return True
+ except Exception as v:
+ self.logger.error("set_up_all failed:\n" + traceback.format_exc())
+ # record all cases blocked
+ if self._enable_func:
+ for case_obj in self._get_functional_cases():
+ self._suite_result.test_case = case_obj.__name__
+ self._suite_result.test_case_blocked(
+ "set_up_all failed: {}".format(str(v))
+ )
+ return False
+
+ def _execute_test_case(self, case_obj):
+ """
+ Execute specified test case in specified suite. If any exception occurred in
+ validation process, save the result and tear down this case.
+ """
+ case_name = case_obj.__name__
+ self._suite_result.test_case = case_obj.__name__
+
+ # save into test command history
+ self.test_history = list()
+ self.enable_history(self.test_history)
+
+ case_result = True
+ try:
+ self.logger.info("Test Case %s Begin" % case_name)
+
+ self.running_case = case_name
+ # clean session
+ for sut_node in self.sut_nodes:
+ sut_node.get_session_output(timeout=0.1)
+ self.tg_node.get_session_output(timeout=0.1)
+ # run set_up function for each case
+ self.set_up()
+ # run test case
+ case_obj()
+
+ self._suite_result.test_case_passed()
+
+ self.logger.info("Test Case %s Result PASSED:" % case_name)
+
+ except VerifyFailure as v:
+ case_result = False
+ self._suite_result.test_case_failed(str(v))
+ self.logger.error("Test Case %s Result FAILED: " % (case_name) + str(v))
+ except KeyboardInterrupt:
+ self._suite_result.test_case_blocked("Skipped")
+ self.logger.error("Test Case %s SKIPPED: " % (case_name))
+ self.tear_down()
+ raise KeyboardInterrupt("Stop DTS")
+ except TimeoutException as e:
+ case_result = False
+ self._suite_result.test_case_failed(str(e))
+ self.logger.error("Test Case %s Result FAILED: " % (case_name) + str(e))
+ self.logger.error("%s" % (e.get_output()))
+ except Exception:
+ case_result = False
+ trace = traceback.format_exc()
+ self._suite_result.test_case_failed(trace)
+ self.logger.error("Test Case %s Result ERROR: " % (case_name) + trace)
+ finally:
+ self.execute_tear_down()
+ return case_result
+
+ def execute_test_cases(self):
+ """
+ Execute all test cases in one suite.
+ """
+ # run functional cases, re-running failed ones if requested
+ if self._enable_func:
+ for case_obj in self._get_functional_cases():
+ for i in range(self.tg_node.re_run_time + 1):
+ ret = self.execute_test_case(case_obj)
+
+ if ret is False and self.tg_node.re_run_time:
+ for sut_node in self.sut_nodes:
+ sut_node.get_session_output(timeout=0.5 * (i + 1))
+ self.tg_node.get_session_output(timeout=0.5 * (i + 1))
+ time.sleep(i + 1)
+ self.logger.info(
+ " Test case %s failed and re-run %d time"
+ % (case_obj.__name__, i + 1)
+ )
+ else:
+ break
+
+ def execute_test_case(self, case_obj):
+ """
+ Execute test case or enter into debug mode.
+ """
+ return self._execute_test_case(case_obj)
+
+ def get_result(self):
+ """
+ Return suite test result
+ """
+ return self._suite_result
+
+ def execute_tear_downall(self):
+ """
+ execute suite tear_down_all function
+ """
+ try:
+ self.tear_down_all()
+ except Exception:
+ self.logger.error("tear_down_all failed:\n" + traceback.format_exc())
+
+ for sut_node in self.sut_nodes:
+ sut_node.kill_all()
+ self.tg_node.kill_all()
+
+ def execute_tear_down(self):
+ """
+ execute suite tear_down function
+ """
+ try:
+ self.tear_down()
+ except Exception:
+ self.logger.error("tear_down failed:\n" + traceback.format_exc())
+ self.logger.warning(
+ "tear down %s failed, might iterfere next case's result!"
+ % self.running_case
+ )
+
+ def enable_history(self, history):
+ """
+ Enable history for all Node's default session
+ """
+ for sut_node in self.sut_nodes:
+ sut_node.session.set_history(history)
+
+ self.tg_node.session.set_history(history)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
new file mode 100644
index 0000000000..7be79df7f2
--- /dev/null
+++ b/dts/framework/test_result.py
@@ -0,0 +1,218 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+#
+
+"""
+Generic result container and reporters
+"""
+
+
+class Result(object):
+ """
+ Generic result container. Useful to store/retrieve results during
+ a DTS execution.
+
+ It manages and hides a complex internal structure like the one shown below.
+ This is presented to the user through a property-based interface.
+
+ internals = [
+ 'sut1', [
+ 'kdriver',
+ 'firmware',
+ 'pkg',
+ 'driver',
+ 'dpdk_version',
+ 'target1', 'nic1', [
+ 'suite1', [
+ 'case1', ['PASSED', ''],
+ 'case2', ['PASSED', ''],
+ ],
+ ],
+ 'target2', 'nic1', [
+ 'suite2', [
+ 'case3', ['PASSED', ''],
+ 'case4', ['FAILED', 'message'],
+ ],
+ 'suite3', [
+ 'case5', ['BLOCKED', 'message'],
+ ],
+ ]
+ ]
+ ]
+
+ """
+
+ def __init__(self):
+ self.__sut = 0
+ self.__target = 0
+ self.__test_suite = 0
+ self.__test_case = 0
+ self.__test_result = None
+ self.__message = None
+ self.__internals = []
+ self.__failed_suts = {}
+ self.__failed_targets = {}
+
+ def __set_sut(self, sut):
+ if sut not in self.__internals:
+ self.__internals.append(sut)
+ self.__internals.append([])
+ self.__sut = self.__internals.index(sut)
+
+ def __get_sut(self):
+ return self.__internals[self.__sut]
+
+ def current_dpdk_version(self, sut):
+ """
+ Returns the dpdk version for a given SUT
+ """
+ try:
+ sut_idx = self.__internals.index(sut)
+ return self.__internals[sut_idx + 1][4]
+ except Exception:
+ return ""
+
+ def __set_dpdk_version(self, dpdk_version):
+ if dpdk_version not in self.internals[self.__sut + 1]:
+ dpdk_current = self.__get_dpdk_version()
+ if dpdk_current:
+ if dpdk_version not in dpdk_current:
+ self.internals[self.__sut + 1][4] = (
+ dpdk_current + "/" + dpdk_version
+ )
+ else:
+ self.internals[self.__sut + 1].append(dpdk_version)
+
+ def __get_dpdk_version(self):
+ try:
+ return self.internals[self.__sut + 1][4]
+ except Exception:
+ return ""
+
+ def __current_targets(self):
+ return self.internals[self.__sut + 1]
+
+ def __set_target(self, target):
+ targets = self.__current_targets()
+ if target not in targets:
+ targets.append(target)
+ targets.append("_nic_")
+ targets.append([])
+ self.__target = targets.index(target)
+
+ def __get_target(self):
+ return self.__current_targets()[self.__target]
+
+ def __current_suites(self):
+ return self.__current_targets()[self.__target + 2]
+
+ def __set_test_suite(self, test_suite):
+ suites = self.__current_suites()
+ if test_suite not in suites:
+ suites.append(test_suite)
+ suites.append([])
+ self.__test_suite = suites.index(test_suite)
+
+ def __get_test_suite(self):
+ return self.__current_suites()[self.__test_suite]
+
+ def __current_cases(self):
+ return self.__current_suites()[self.__test_suite + 1]
+
+ def __set_test_case(self, test_case):
+ cases = self.__current_cases()
+ cases.append(test_case)
+ cases.append([])
+ self.__test_case = cases.index(test_case)
+
+ def __get_test_case(self):
+ return self.__current_cases()[self.__test_case]
+
+ def __get_internals(self):
+ return self.__internals
+
+ def __current_result(self):
+ return self.__current_cases()[self.__test_case + 1]
+
+ def __set_test_case_result(self, result, message):
+ test_case = self.__current_result()
+ test_case.append(result)
+ test_case.append(message)
+ self.__test_result = result
+ self.__message = message
+
+ def copy_suite(self, suite_result):
+ self.__current_suites()[self.__test_suite + 1] = suite_result.__current_cases()
+
+ def test_case_passed(self):
+ """
+ Set last test case added as PASSED
+ """
+ self.__set_test_case_result(result="PASSED", message="")
+
+ def test_case_failed(self, message):
+ """
+ Set last test case added as FAILED
+ """
+ self.__set_test_case_result(result="FAILED", message=message)
+
+ def test_case_blocked(self, message):
+ """
+ Set last test case added as BLOCKED
+ """
+ self.__set_test_case_result(result="BLOCKED", message=message)
+
+ def all_suts(self):
+ """
+ Returns all the SUTs it's aware of.
+ """
+ return self.__internals[::2]
+
+ def all_targets(self, sut):
+ """
+ Returns the targets for a given SUT
+ """
+ try:
+ sut_idx = self.__internals.index(sut)
+ except Exception:
+ return None
+ return self.__internals[sut_idx + 1][5::3]
+
+ def add_failed_sut(self, sut, msg):
+ """
+ Sets the given SUT as failing due to msg
+ """
+ self.__failed_suts[sut] = msg
+
+ def remove_failed_sut(self, sut):
+ """
+ Remove the given SUT from failed SUTs collection
+ """
+ if sut in self.__failed_suts:
+ self.__failed_suts.pop(sut)
+
+ def add_failed_target(self, sut, target, msg):
+ """
+ Sets the given SUT, target as failing due to msg
+ """
+ self.__failed_targets[sut + target] = msg
+
+ def remove_failed_target(self, sut, target):
+ """
+ Remove the given SUT, target from failed targets collection
+ """
+ key_word = sut + target
+ if key_word in self.__failed_targets:
+ self.__failed_targets.pop(key_word)
+
+ """
+ Attributes defined as properties to hide the implementation from the
+ presented interface.
+ """
+ sut = property(__get_sut, __set_sut)
+ dpdk_version = property(__get_dpdk_version, __set_dpdk_version)
+ target = property(__get_target, __set_target)
+ test_suite = property(__get_test_suite, __set_test_suite)
+ test_case = property(__get_test_case, __set_test_case)
+ internals = property(__get_internals)
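A short sketch of recording a single case's outcome with the container above
(the host and target names are hypothetical):

    from framework.test_result import Result

    result = Result()
    result.sut = "sut1.example.com"
    result.target = "x86_64-native-linuxapp-gcc"
    result.test_suite = "hello_world"
    result.test_case = "test_hello_world_single_core"
    result.test_case_passed()
    # result.internals now holds the nested sut/target/suite/case structure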
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 2a174831d0..aac4d3505b 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -4,6 +4,7 @@
# Copyright(c) 2022 University of New Hampshire
#
+import inspect
import sys
@@ -15,6 +16,19 @@ def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
+def get_subclasses(module, clazz):
+ """
+ Yield the (name, class) pairs of clazz's direct subclasses found in module.
+ """
+ for subclazz_name, subclazz in inspect.getmembers(module):
+ if (
+ hasattr(subclazz, "__bases__")
+ and subclazz.__bases__
+ and clazz in subclazz.__bases__
+ ):
+ yield (subclazz_name, subclazz)
+
+
def check_dts_python_version() -> None:
if sys.version_info.major < 3 or (
sys.version_info.major == 3 and sys.version_info.minor < 10
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v1 08/10] dts: add test runner and statistics collector
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (6 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 07/10] dts: add testcase and basic test results Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 09/10] dts: add hello world testplan Juraj Linkeš
` (2 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
Add functions responsible for initializing testbed setup, testcase
discovery and execution. The stats collector gathers the pass/fail
results and provides a short report.
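One convention worth noting: the runner below accepts per-suite case filters in
the form "suite:case_a\case_b". A sketch of the same parsing with a
hypothetical input:

    suite_spec = "hello_world:test_single_core\\test_all_cores"

    requested_cases = None
    if ":" in suite_spec:
        suite_name, _, case_spec = suite_spec.partition(":")
        requested_cases = case_spec.split("\\")
    else:
        suite_name = suite_spec

    # suite_name      -> 'hello_world'
    # requested_cases -> ['test_single_core', 'test_all_cores']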
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 174 +++++++++++++++++++++++++++++---
dts/framework/stats_reporter.py | 70 +++++++++++++
2 files changed, 232 insertions(+), 12 deletions(-)
create mode 100644 dts/framework/stats_reporter.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 1938ea6af8..39e07d9eec 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -4,38 +4,179 @@
# Copyright(c) 2022 University of New Hampshire
#
+import os
import sys
+import traceback
from typing import Iterable, Optional
import framework.logger as logger
from .config import CONFIGURATION
+from .exception import VerifyFailure
from .logger import getLogger
from .node import Node
-from .settings import SETTINGS
-from .utils import check_dts_python_version
+from .settings import SETTINGS, DTSRuntimeError, DTSRuntimeErrors
+from .stats_reporter import StatsReporter
+from .sut_node import SutNode
+from .test_case import TestCase
+from .test_result import Result
+from .tg_node import TrafficGeneratorNode
+from .utils import check_dts_python_version, get_subclasses
+requested_tests: Optional[list[str]] = None
+result: Optional[Result] = None
+stats_report: Optional[StatsReporter] = None
log_handler: Optional[logger.DTSLOG] = None
+def dts_nodes_init():
+ """
+ Create the DTS SUT/TG node instances and initialize them.
+ """
+ sut_nodes = []
+ tg_node = None
+ for node_config in CONFIGURATION.nodes:
+ if hasattr(node_config, 'memory_channels'):
+ sut_nodes.append(SutNode(node_config))
+ else:
+ tg_node = TrafficGeneratorNode(node_config)
+ tg_node.set_re_run(SETTINGS.re_run if SETTINGS.re_run > 0 else 0)
+
+ return sut_nodes, tg_node
+
+
+def dts_run_prerequisites(nodes):
+ """
+ Run the prerequisites function on each node.
+ """
+ global DTSRuntimeError
+ # TODO nodes config contains both sut and tg nodes
+ try:
+ for node in nodes:
+ node.prerequisites()
+ except Exception as ex:
+ log_handler.error("NODE PREREQ EXCEPTION " + traceback.format_exc())
+ result.add_failed_sut(node, str(ex))
+ if isinstance(node, TrafficGeneratorNode):
+ DTSRuntimeError = DTSRuntimeErrors.TG_SETUP_ERR
+ else:
+ DTSRuntimeError = DTSRuntimeErrors.SUT_SETUP_ERR
+ return False
+ return True
+
+
+def dts_run_target(sut_nodes, tg_node, targets, test_suites):
+ """
+ Run each target in execution targets.
+ """
+ for target in targets:
+ target = str(target)
+ log_handler.info("\nTARGET " + target)
+ result.target = target
+
+ try:
+ for sut_node in sut_nodes:
+ sut_node.set_target(target)
+ except AssertionError as ex:
+ DTSRuntimeError = DTSRuntimeErrors.DPDK_BUILD_ERR
+ log_handler.error(" TARGET ERROR: " + str(ex))
+ result.add_failed_target(result.sut, target, str(ex))
+ continue
+ except Exception as ex:
+ DTSRuntimeError = DTSRuntimeErrors.GENERIC_ERR
+ log_handler.error(" !!! DEBUG IT: " + traceback.format_exc())
+ result.add_failed_target(result.sut, target, str(ex))
+ continue
+
+ dts_run_suite(sut_nodes, tg_node, test_suites, target)
+
+
+def dts_run_suite(sut_nodes, tg_node, test_suites, target):
+ """
+ Run each suite in test suite list.
+ """
+ for suite_name in test_suites:
+ try:
+ # check whether specific test cases were requested for this suite
+ append_requested_case_list = None
+ if ":" in suite_name:
+ case_list = suite_name[suite_name.find(":") + 1 :]
+ append_requested_case_list = case_list.split("\\")
+ suite_name = suite_name[: suite_name.find(":")]
+ result.test_suite = suite_name
+ _suite_full_name = "TestSuite_" + suite_name
+ suite_module = __import__(
+ "tests." + _suite_full_name, fromlist=[_suite_full_name]
+ )
+ for test_classname, test_class in get_subclasses(suite_module, TestCase):
+
+ suite_obj = test_class(sut_nodes, tg_node, suite_name, target, True)
+ suite_obj.init_log()
+ suite_obj.set_requested_cases(requested_tests)
+ suite_obj.set_requested_cases(append_requested_case_list)
+
+ log_handler.info("\nTEST SUITE : " + test_classname)
+
+ if suite_obj.execute_setup_all():
+ suite_obj.execute_test_cases()
+
+ # save suite cases result
+ result.copy_suite(suite_obj.get_result())
+
+ log_handler.info("\nTEST SUITE ENDED: " + test_classname)
+ except VerifyFailure:
+ DTSRuntimeError = DTSRuntimeErrors.SUITE_EXECUTE_ERR
+ log_handler.error(" !!! DEBUG IT: " + traceback.format_exc())
+ except KeyboardInterrupt:
+ # stop/save result/skip execution
+ log_handler.error(" !!! STOPPING DTS")
+ break
+ except Exception as e:
+ DTSRuntimeError = DTSRuntimeErrors.GENERIC_ERR
+ log_handler.error(str(e))
+ finally:
+ try:
+ suite_obj.execute_tear_downall()
+ except Exception as e:
+ DTSRuntimeError = DTSRuntimeErrors.GENERIC_ERR
+ log_handler.error(str(e))
+ try:
+ stats_report.save(result)
+ except Exception as e:
+ DTSRuntimeError = DTSRuntimeErrors.GENERIC_ERR
+ log_handler.error(str(e))
+
+
def run_all() -> None:
"""
Main process of DTS, it will run all test suites in the config file.
"""
global log_handler
+ global result
+ global stats_report
+ global requested_tests
# check the python version of the server that run dts
check_dts_python_version()
+ # prepare the output folder
+ if not os.path.exists(SETTINGS.output_dir):
+ os.mkdir(SETTINGS.output_dir)
+
# init log_handler handler
if SETTINGS.verbose is True:
logger.set_verbose()
log_handler = getLogger("dts")
- nodes = {}
- # This try/finally block means "Run the try block, if there is an exception,
+ # run designated test cases
+ requested_tests = SETTINGS.test_cases
+
+ # report objects
+ stats_report = StatsReporter(SETTINGS.output_dir + "/statistics.txt")
+ result = Result()
+
+ # This try/finally block means "Run the try block and if there is an exception,
# run the finally block before passing it upward. If there is not an exception,
# run the finally block after the try block is finished." This helps avoid the
# problem of python's interpreter exit context, which essentially prevents you
@@ -45,26 +186,35 @@ def run_all() -> None:
# An except block SHOULD NOT be added to this. A failure at this level should
# deliver a full stack trace for debugging, since the only place that exceptions
# should be caught and handled is in the testing code.
+ nodes = []
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_config = execution.system_under_test
- if sut_config.name not in nodes:
- nodes[sut_config.name] = Node(sut_config)
+ sut_nodes, tg_node = dts_nodes_init()
+ nodes.extend(sut_nodes)
+ nodes.append(tg_node)
+
+ # Run SUT prerequisites
+ if dts_run_prerequisites(nodes) is False:
+ continue
+ result.dpdk_version = sut_nodes[0].dpdk_version
+ dts_run_target(
+ sut_nodes, tg_node, execution.target_descriptions, execution.test_suites
+ )
finally:
- quit_execution(nodes.values())
+ quit_execution(nodes)
-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def quit_execution(nodes: Iterable[Node]) -> None:
"""
Close session to SUT and TG before quit.
Return exit status when failure occurred.
"""
- for sut_node in sut_nodes:
+ for node in nodes:
# close all session
- sut_node.node_exit()
+ node.node_exit()
if log_handler is not None:
log_handler.info("DTS ended")
- sys.exit(0)
+ sys.exit(DTSRuntimeError)
diff --git a/dts/framework/stats_reporter.py b/dts/framework/stats_reporter.py
new file mode 100644
index 0000000000..a8d589bc7b
--- /dev/null
+++ b/dts/framework/stats_reporter.py
@@ -0,0 +1,70 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+#
+
+"""
+Simple text file statistics generator
+"""
+
+
+class StatsReporter(object):
+ """
+ Generates a small statistics file containing the number of passing,
+ failing and blocked tests. It makes use of a Result instance as input.
+ """
+
+ def __init__(self, filename):
+ self.filename = filename
+
+ def __add_stat(self, test_result):
+ if test_result is not None:
+ if test_result[0] == "PASSED":
+ self.passed += 1
+ if test_result[0] == "FAILED":
+ self.failed += 1
+ if test_result[0] == "BLOCKED":
+ self.blocked += 1
+ self.total += 1
+
+ def __count_stats(self):
+ for sut in self.result.all_suts():
+ for target in self.result.all_targets(sut):
+ for suite in self.result.all_test_suites(sut, target):
+ for case in self.result.all_test_cases(sut, target, suite):
+ test_result = self.result.result_for(sut, target, suite, case)
+ if len(test_result):
+ self.__add_stat(test_result)
+
+ def __write_stats(self):
+ sut_nodes = self.result.all_suts()
+ if len(sut_nodes) == 1:
+ self.stats_file.write(
+ "dpdk_version = {}\n".format(
+ self.result.current_dpdk_version(sut_nodes[0])
+ )
+ )
+ else:
+ for sut in sut_nodes:
+ dpdk_version = self.result.current_dpdk_version(sut)
+ self.stats_file.write(
+ "{}.dpdk_version = {}\n".format(sut, dpdk_version)
+ )
+ self.__count_stats()
+ self.stats_file.write("Passed = %d\n" % self.passed)
+ self.stats_file.write("Failed = %d\n" % self.failed)
+ self.stats_file.write("Blocked = %d\n" % self.blocked)
+ rate = 0
+ if self.total > 0:
+ rate = self.passed * 100.0 / self.total
+ self.stats_file.write("Pass rate = %.1f\n" % rate)
+
+ def save(self, result):
+ self.passed = 0
+ self.failed = 0
+ self.blocked = 0
+ self.total = 0
+ self.stats_file = open(self.filename, "w+")
+ self.result = result
+ self.__write_stats()
+ self.stats_file.close()
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v1 09/10] dts: add hello world testplan
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (7 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 08/10] dts: add test runner and statistics collector Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 10/10] dts: add hello world testsuite Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
The testplan describes the capabilities of the tested application along
with the description of testcases to test it.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/test_plans/hello_world_test_plan.rst | 68 ++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 dts/test_plans/hello_world_test_plan.rst
diff --git a/dts/test_plans/hello_world_test_plan.rst b/dts/test_plans/hello_world_test_plan.rst
new file mode 100644
index 0000000000..566a9bb10c
--- /dev/null
+++ b/dts/test_plans/hello_world_test_plan.rst
@@ -0,0 +1,68 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+=============================================
+Sample Application Tests: Hello World Example
+=============================================
+
+This example is one of the simplest RTE applications that can be
+written. The program just prints a "helloworld" message on every
+enabled lcore.
+
+Command Usage::
+
+ ./dpdk-helloworld -c COREMASK [-m MB] [-r NUM] [-n NUM]
+
+ EAL option list:
+ -c COREMASK: hexadecimal bitmask of cores we are running on
+ -m MB : memory to allocate (default = size of hugemem)
+ -n NUM : force number of memory channels (don't detect)
+ -r NUM : force number of memory ranks (don't detect)
+ --huge-file: base filename for hugetlbfs entries
+ debug options:
+ --no-huge : use malloc instead of hugetlbfs
+ --no-pci : disable pci
+ --no-hpet : disable hpet
+ --no-shconf: no shared config (mmap'd files)
+
+
+Prerequisites
+=============
+
+The igb_uio and vfio drivers are supported. If vfio is used, the kernel needs
+to be 3.6+ and VT-d must be enabled in the BIOS. When using vfio, load the
+driver with "modprobe vfio" and "modprobe vfio-pci", then bind the device under
+test with "./tools/dpdk_nic_bind.py --bind=vfio-pci device_bus_id".
+
+To find out the mapping of lcores (processor) to core id and socket (physical
+id), the command below can be used::
+
+ $ grep "processor\|physical id\|core id\|^$" /proc/cpuinfo
+
+The total logical core number will be used as ``helloworld`` input parameters.
+
+
+Test Case: run hello world on single lcores
+===========================================
+
+To run the example on a single lcore::
+
+ $ ./dpdk-helloworld -c 1
+ hello from core 0
+
+Check that the output comes from lcore 0 only.
+
+
+Test Case: run hello world on every lcores
+==========================================
+
+To run the example on all the enabled lcores::
+
+ $ ./dpdk-helloworld -cffffff
+ hello from core 1
+ hello from core 2
+ hello from core 3
+ ...
+ ...
+ hello from core 0
+
+Verify that a hello message is output for every core set in the core mask.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v1 10/10] dts: add hello world testsuite
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (8 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 09/10] dts: add hello world testplan Juraj Linkeš
@ 2022-08-24 16:24 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-08-24 16:24 UTC (permalink / raw)
To: thomas, david.marchand, ronan.randles, Honnappa.Nagarahalli,
ohilyard, lijuan.tu
Cc: dev, Juraj Linkeš
The testsuite implements the testcases defined in the corresponding test
plan.
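The heart of the all-cores case boils down to a loop like this (a sketch with
hypothetical output; the real suite builds the command via
create_eal_parameters and send_expect):

    cores = [0, 1, 2, 3]   # hypothetical lcore list from get_core_list("all")
    out = "hello from core 1\nhello from core 3\nhello from core 2\nhello from core 0"
    for core in cores:
        assert f"hello from core {core}" in out, f"EAL not started on core{core}"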
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/tests/TestSuite_hello_world.py | 80 ++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
create mode 100644 dts/tests/TestSuite_hello_world.py
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..8be33330aa
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,80 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Test HelloWorld example.
+"""
+
+import os.path
+from framework.test_case import TestCase
+
+
+class TestHelloWorld(TestCase):
+ def set_up_all(self):
+ """
+ Run once at the start of the test suite.
+ hello_world prerequisite:
+ the helloworld example builds successfully
+ """
+ out = self.sut_node.build_dpdk_apps("examples/helloworld")
+ self.app_helloworld_path = os.path.join(self.target, "examples", "dpdk-helloworld")
+
+ self.verify("Error" not in out, "compilation error 1")
+ self.verify("No such file" not in out, "compilation error 2")
+
+ def set_up(self):
+ """
+ Run before each test case.
+ Nothing to do.
+ """
+ pass
+
+ def test_hello_world_single_core(self):
+ """
+ Run hello world on a single lcore.
+ Expect the hello message from core 0 only.
+ """
+
+ # get the mask for the first core
+ cores = self.sut_node.get_core_list("1S/1C/1T")
+ eal_para = self.sut_node.create_eal_parameters(cores="1S/1C/1T")
+ cmdline = "./%s %s" % (self.app_helloworld_path, eal_para)
+ out = self.sut_node.send_expect(cmdline, "# ", 30)
+ self.verify(
+ "hello from core %s" % cores[0] in out,
+ "EAL not started on core%s" % cores[0],
+ )
+
+ def test_hello_world_all_cores(self):
+ """
+ Run hello world on all lcores.
+ Expect a hello message from every lcore.
+ """
+
+ # get the maximum logical core number
+ cores = self.sut_node.get_core_list("all")
+ eal_para = self.sut_node.create_eal_parameters(cores=cores)
+
+ cmdline = "./%s %s " % (self.app_helloworld_path, eal_para)
+ out = self.sut_node.send_expect(cmdline, "# ", 50)
+ for core in cores:
+ self.verify(
+ "hello from core %s" % core in out,
+ "EAL not started on core%s" % core,
+ )
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ Nothing to do.
+ """
+ pass
+
+ def tear_down_all(self):
+ """
+ Run once after the whole test suite.
+ Nothing to do.
+ """
+ pass
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 00/10] dts: add hello world testcase
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
` (9 preceding siblings ...)
2022-08-24 16:24 ` [RFC PATCH v1 10/10] dts: add hello world testsuite Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 01/10] dts: add node and os abstractions Juraj Linkeš
` (10 more replies)
10 siblings, 11 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Add code needed to run the HelloWorld testcase which just runs the hello
world dpdk application.
The patchset first outlines the basic class architecture, covering Nodes
(hosts that DTS connects to), RemoteSession, OS-specific code, test
runner and Exceptions.
The patchset currently heavily refactors this original DTS code needed
to run the testcase:
* DPDK build on the System under Test
* DPDK eal args construction, app running and shutting down
* SUT hugepage memory configuration
* Test runner
The code that still needs to be refactored:
* Test results
* TestCase/TestSuite class
* Test runner parts interfacing with TestCase
* The HelloWorld testsuite itself
The code is divided into sub-packages, some of which are divided
further. It's possible that some sub-directories should be flattened to
simplify imports. I've also had to make some concessions to code
structure to avoid circular imports. I'll continue thinking about this
going forward and v3 may have a different dir/import structure.
The code has been ported from DTS and we may want/need to change some
designs, such as what configuration to do and when - for example, we may
not want DTS to configure hugepages (as that may be too much of a system
modification or it just simply makes sense to configure once outside of
DTS and not touch it in every single run).
Juraj Linkeš (10):
dts: add node and os abstractions
dts: add ssh command verification
dts: add dpdk build on sut
dts: add dpdk execution handling
dts: add node memory setup
dts: add test results module
dts: add simple stats report
dts: add testsuite class
dts: add hello world testplan
dts: add hello world testsuite
dts/conf.yaml | 16 +-
dts/framework/config/__init__.py | 186 +++++++++-
dts/framework/config/conf_yaml_schema.json | 137 +++++++-
dts/framework/dts.py | 153 ++++++--
dts/framework/exception.py | 190 +++++++++-
dts/framework/remote_session/__init__.py | 23 +-
dts/framework/remote_session/arch/__init__.py | 20 ++
dts/framework/remote_session/arch/arch.py | 57 +++
dts/framework/remote_session/factory.py | 14 +
dts/framework/remote_session/os/__init__.py | 17 +
.../remote_session/os/linux_session.py | 111 ++++++
dts/framework/remote_session/os/os_session.py | 170 +++++++++
.../remote_session/os/posix_session.py | 220 ++++++++++++
.../remote_session/remote_session.py | 94 ++++-
dts/framework/remote_session/ssh_session.py | 69 +++-
dts/framework/settings.py | 65 +++-
dts/framework/stats_reporter.py | 65 ++++
dts/framework/test_case.py | 246 +++++++++++++
dts/framework/test_result.py | 217 ++++++++++++
dts/framework/testbed_model/__init__.py | 7 +-
dts/framework/testbed_model/hw/__init__.py | 17 +
dts/framework/testbed_model/hw/cpu.py | 164 +++++++++
dts/framework/testbed_model/node.py | 62 ----
dts/framework/testbed_model/node/__init__.py | 7 +
dts/framework/testbed_model/node/node.py | 169 +++++++++
dts/framework/testbed_model/node/sut_node.py | 331 ++++++++++++++++++
dts/framework/utils.py | 35 ++
dts/test_plans/hello_world_test_plan.rst | 68 ++++
dts/tests/TestSuite_hello_world.py | 53 +++
29 files changed, 2854 insertions(+), 129 deletions(-)
create mode 100644 dts/framework/remote_session/arch/__init__.py
create mode 100644 dts/framework/remote_session/arch/arch.py
create mode 100644 dts/framework/remote_session/factory.py
create mode 100644 dts/framework/remote_session/os/__init__.py
create mode 100644 dts/framework/remote_session/os/linux_session.py
create mode 100644 dts/framework/remote_session/os/os_session.py
create mode 100644 dts/framework/remote_session/os/posix_session.py
create mode 100644 dts/framework/stats_reporter.py
create mode 100644 dts/framework/test_case.py
create mode 100644 dts/framework/test_result.py
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
delete mode 100644 dts/framework/testbed_model/node.py
create mode 100644 dts/framework/testbed_model/node/__init__.py
create mode 100644 dts/framework/testbed_model/node/node.py
create mode 100644 dts/framework/testbed_model/node/sut_node.py
create mode 100644 dts/test_plans/hello_world_test_plan.rst
create mode 100644 dts/tests/TestSuite_hello_world.py
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 01/10] dts: add node and os abstractions
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 02/10] dts: add ssh command verification Juraj Linkeš
` (9 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The abstraction model in DTS is as follows:
Node, defining and implementing methods common to, and serving as the base
of, the SUT (system under test) Node and the TG (traffic generator) Node.
Remote Session, defining and implementing methods common to any remote
session implementation, such as SSH Session.
OSSession, defining and implementing methods common to any operating
system/distribution, such as Linux.
OSSession uses a derived Remote Session and Node in turn uses a derived
OSSession. This split delegates OS-specific and connection-specific code
to specialized classes designed to handle the differences.
The base classes implement the methods or parts of methods that are
common to all implementations and define abstract methods that must be
implemented by derived classes.
Part of the abstractions is the DTS test execution skeleton: node init,
execution setup, build setup and then test execution.
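In rough terms, the composition described above looks like this (a sketch, not
the exact code from the diffs below):

    # Node owns an OSSession; the OSSession owns a RemoteSession (e.g. SSH).
    class RemoteSession:          # transport-specific; SSHSession derives from this
        def send_command(self, command: str) -> str: ...

    class OSSession:              # OS-specific; LinuxSession derives from this
        def __init__(self, remote_session: RemoteSession) -> None:
            self.remote_session = remote_session

    class Node:                   # common base; SutNode and the TG node derive from this
        def __init__(self, main_session: OSSession) -> None:
            self.main_session = main_session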
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 8 +-
dts/framework/config/__init__.py | 70 +++++++++-
dts/framework/config/conf_yaml_schema.json | 66 +++++++++-
dts/framework/dts.py | 116 +++++++++++++----
dts/framework/exception.py | 87 ++++++++++++-
dts/framework/remote_session/__init__.py | 18 +--
dts/framework/remote_session/factory.py | 14 ++
dts/framework/remote_session/os/__init__.py | 17 +++
.../remote_session/os/linux_session.py | 11 ++
dts/framework/remote_session/os/os_session.py | 46 +++++++
.../remote_session/os/posix_session.py | 12 ++
.../remote_session/remote_session.py | 23 +++-
dts/framework/remote_session/ssh_session.py | 2 +-
dts/framework/testbed_model/__init__.py | 6 +-
dts/framework/testbed_model/node.py | 62 ---------
dts/framework/testbed_model/node/__init__.py | 7 +
dts/framework/testbed_model/node/node.py | 120 ++++++++++++++++++
dts/framework/testbed_model/node/sut_node.py | 13 ++
18 files changed, 591 insertions(+), 107 deletions(-)
create mode 100644 dts/framework/remote_session/factory.py
create mode 100644 dts/framework/remote_session/os/__init__.py
create mode 100644 dts/framework/remote_session/os/linux_session.py
create mode 100644 dts/framework/remote_session/os/os_session.py
create mode 100644 dts/framework/remote_session/os/posix_session.py
delete mode 100644 dts/framework/testbed_model/node.py
create mode 100644 dts/framework/testbed_model/node/__init__.py
create mode 100644 dts/framework/testbed_model/node/node.py
create mode 100644 dts/framework/testbed_model/node/sut_node.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1aaa593612..6b0bc5c2bf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -2,8 +2,14 @@
# Copyright 2022 The DPDK contributors
executions:
- - system_under_test: "SUT 1"
+ - build_targets:
+ - arch: x86_64
+ os: linux
+ cpu: native
+ compiler: gcc
+ system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ os: linux
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 214be8e7f4..1b97dc3ab9 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,13 +3,14 @@
# Copyright(c) 2022 University of New Hampshire
"""
-Generic port and topology nodes configuration file load function
+Yaml config parsing methods
"""
import json
import os.path
import pathlib
from dataclasses import dataclass
+from enum import Enum, auto, unique
from typing import Any
import warlock # type: ignore
@@ -18,6 +19,47 @@
from framework.settings import SETTINGS
+class StrEnum(Enum):
+ @staticmethod
+ def _generate_next_value_(
+ name: str, start: int, count: int, last_values: object
+ ) -> str:
+ return name
+
+
+@unique
+class Architecture(StrEnum):
+ i686 = auto()
+ x86_64 = auto()
+ x86_32 = auto()
+ arm64 = auto()
+ ppc64le = auto()
+
+
+@unique
+class OS(StrEnum):
+ linux = auto()
+ freebsd = auto()
+ windows = auto()
+
+
+@unique
+class CPUType(StrEnum):
+ native = auto()
+ armv8a = auto()
+ dpaa2 = auto()
+ thunderx = auto()
+ xgene1 = auto()
+
+
+@unique
+class Compiler(StrEnum):
+ gcc = auto()
+ clang = auto()
+ icc = auto()
+ msvc = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -29,6 +71,7 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ os: OS
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -37,19 +80,44 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ os=OS(d["os"]),
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class BuildTargetConfiguration:
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+ name: str
+
+ @staticmethod
+ def from_dict(d: dict) -> "BuildTargetConfiguration":
+ return BuildTargetConfiguration(
+ arch=Architecture(d["arch"]),
+ os=OS(d["os"]),
+ cpu=CPUType(d["cpu"]),
+ compiler=Compiler(d["compiler"]),
+ name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
+ build_targets: list[BuildTargetConfiguration]
system_under_test: NodeConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ build_targets: list[BuildTargetConfiguration] = list(
+ map(BuildTargetConfiguration.from_dict, d["build_targets"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
+ build_targets=build_targets,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 6b8d6ccd05..409ce7ac74 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -5,6 +5,58 @@
"node_name": {
"type": "string",
"description": "A unique identifier for a node"
+ },
+ "OS": {
+ "type": "string",
+ "enum": [
+ "linux"
+ ]
+ },
+ "cpu": {
+ "type": "string",
+ "description": "Native should be the default on x86",
+ "enum": [
+ "native",
+ "armv8a",
+ "dpaa2",
+ "thunderx",
+ "xgene1"
+ ]
+ },
+ "compiler": {
+ "type": "string",
+ "enum": [
+ "gcc",
+ "clang",
+ "icc",
+ "mscv"
+ ]
+ },
+ "build_target": {
+ "type": "object",
+ "description": "Targets supported by DTS",
+ "properties": {
+ "arch": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "x86_64",
+ "arm64",
+ "ppc64le",
+ "other"
+ ]
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "cpu": {
+ "$ref": "#/definitions/cpu"
+ },
+ "compiler": {
+ "$ref": "#/definitions/compiler"
+ }
+ },
+ "additionalProperties": false
}
},
"type": "object",
@@ -29,13 +81,17 @@
"password": {
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
}
},
"additionalProperties": false,
"required": [
"name",
"hostname",
- "user"
+ "user",
+ "os"
]
},
"minimum": 1
@@ -45,12 +101,20 @@
"items": {
"type": "object",
"properties": {
+ "build_targets": {
+ "type": "array",
+ "items": {
+ "$ref": "#/definitions/build_target"
+ },
+ "minimum": 1
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
"required": [
+ "build_targets",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d23cfc4526..262c392d8e 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,32 +3,38 @@
# Copyright(c) 2022 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
+import os
import sys
import traceback
from collections.abc import Iterable
-from framework.testbed_model.node import Node
+from framework.testbed_model import Node, SutNode
-from .config import CONFIGURATION
+from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
+from .exception import DTSError, ReturnCode
from .logger import DTSLOG, getLogger
+from .settings import SETTINGS
from .utils import check_dts_python_version
-dts_logger: DTSLOG | None = None
+dts_logger: DTSLOG = getLogger("dts")
def run_all() -> None:
"""
- Main process of DTS, it will run all test suites in the config file.
+ The main process of DTS. Runs all build targets in all executions from the main
+ config file.
"""
-
+ return_code = ReturnCode.NO_ERR
global dts_logger
# check the python version of the server that run dts
check_dts_python_version()
- dts_logger = getLogger("dts")
+ # prepare the output folder
+ if not os.path.exists(SETTINGS.output_dir):
+ os.mkdir(SETTINGS.output_dir)
- nodes = {}
+ nodes: dict[str, Node] = {}
# This try/finally block means "Run the try block, if there is an exception,
# run the finally block before passing it upward. If there is not an exception,
# run the finally block after the try block is finished." This helps avoid the
@@ -38,30 +44,92 @@ def run_all() -> None:
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_config = execution.system_under_test
- if sut_config.name not in nodes:
- node = Node(sut_config)
- nodes[sut_config.name] = node
- node.send_command("echo Hello World")
-
- except Exception as e:
- # sys.exit() doesn't produce a stack trace, need to print it explicitly
- traceback.print_exc()
+ sut_node = init_nodes(execution, nodes)
+ run_execution(sut_node, execution)
+
+ except DTSError as e:
+ dts_logger.error(traceback.format_exc())
+ return_code = e.return_code
raise e
+ except Exception:
+ # sys.exit() doesn't produce a stack trace, need to produce it explicitly
+ dts_logger.error(traceback.format_exc())
+ return_code = ReturnCode.GENERIC_ERR
+ raise
+
+ finally:
+ quit_execution(nodes.values(), return_code)
+
+
+def init_nodes(
+ execution: ExecutionConfiguration, existing_nodes: dict[str, Node]
+) -> SutNode:
+ """
+ Create DTS SUT instance used in the given execution and initialize it. If already
+ initialized (in a previous execution), return the existing SUT.
+ """
+ if execution.system_under_test.name in existing_nodes:
+ # a Node with the same name already exists
+ sut_node = existing_nodes[execution.system_under_test.name]
+ else:
+ # the SUT has not been initialized yet
+ sut_node = SutNode(execution.system_under_test)
+ existing_nodes[sut_node.name] = sut_node
+
+ return sut_node
+
+
+def run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+ """
+ Run the given execution. This involves running the execution setup as well as
+ running all build targets in the given execution.
+ """
+ dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ try:
+ sut_node.setup_execution(execution)
+ for build_target in execution.build_targets:
+ run_build_target(sut_node, build_target, execution)
+
finally:
- quit_execution(nodes.values())
+ sut_node.cleanup_execution()
+
+
+def run_build_target(
+ sut_node: SutNode,
+ build_target: BuildTargetConfiguration,
+ execution: ExecutionConfiguration,
+) -> None:
+ """
+ Run the given build target.
+ """
+ dts_logger.info(f"Running target '{build_target.name}'.")
+ try:
+ sut_node.setup_build_target(build_target)
+ run_suite(sut_node, build_target, execution)
+
+ finally:
+ sut_node.teardown_build_target()
+
+
+def run_suite(
+ sut_node: SutNode,
+ build_target: BuildTargetConfiguration,
+ execution: ExecutionConfiguration,
+) -> None:
+ """
+ Use the given build_target to run the test suite with possibly only a subset
+ of tests. If no subset is specified, run all tests.
+ """
-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def quit_execution(nodes: Iterable[Node], return_code: ReturnCode) -> None:
"""
- Close session to SUT and TG before quit.
- Return exit status when failure occurred.
+ Close all node resources before quitting.
"""
- for sut_node in sut_nodes:
- # close all session
- sut_node.node_exit()
+ for node in nodes:
+ node.close()
if dts_logger is not None:
dts_logger.info("DTS execution has ended.")
- sys.exit(0)
+ sys.exit(return_code)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8b2f08a8f0..cac8d84416 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -7,14 +7,45 @@
User-defined exceptions used across the framework.
"""
+from enum import IntEnum, unique
+from typing import Callable, ClassVar
-class SSHTimeoutError(Exception):
+
+@unique
+class ReturnCode(IntEnum):
+ """
+ The various return codes that DTS exits with.
+ There are four categories of return codes:
+ 0-9 DTS Framework errors
+ 10-19 DPDK/Traffic Generator errors
+ 20-29 Node errors
+ 30-39 Test errors
+ """
+
+ NO_ERR = 0
+ GENERIC_ERR = 1
+ SSH_ERR = 2
+ NODE_SETUP_ERR = 20
+ NODE_CLEANUP_ERR = 21
+
+
+class DTSError(Exception):
+ """
+ The base exception from which all DTS exceptions are derived. Serves to hold
+ the return code with which DTS should exit.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.GENERIC_ERR
+
+
+class SSHTimeoutError(DTSError):
"""
Command execution timeout.
"""
command: str
output: str
+ return_code: ClassVar[ReturnCode] = ReturnCode.SSH_ERR
def __init__(self, command: str, output: str):
self.command = command
@@ -27,12 +58,13 @@ def get_output(self) -> str:
return self.output
-class SSHConnectionError(Exception):
+class SSHConnectionError(DTSError):
"""
SSH connection error.
"""
host: str
+ return_code: ClassVar[ReturnCode] = ReturnCode.SSH_ERR
def __init__(self, host: str):
self.host = host
@@ -41,16 +73,65 @@ def __str__(self) -> str:
return f"Error trying to connect with {self.host}"
-class SSHSessionDeadError(Exception):
+class SSHSessionDeadError(DTSError):
"""
SSH session is not alive.
It can no longer be used.
"""
host: str
+ return_code: ClassVar[ReturnCode] = ReturnCode.SSH_ERR
def __init__(self, host: str):
self.host = host
def __str__(self) -> str:
return f"SSH session with {self.host} has died"
+
+
+class NodeSetupError(DTSError):
+ """
+ Raised when setting up a node.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.NODE_SETUP_ERR
+
+ def __init__(self):
+ super(NodeSetupError, self).__init__(
+ "An error occurred during node execution setup."
+ )
+
+
+class NodeCleanupError(DTSError):
+ """
+ Raised when cleaning up node.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.NODE_CLEANUP_ERR
+
+ def __init__(self):
+ super(NodeCleanupError, self).__init__(
+ "An error occurred during node execution cleanup."
+ )
+
+
+def convert_exception(exception: type[DTSError]) -> Callable[..., Callable[..., None]]:
+ """
+ When a non-DTS exception is raised while executing the decorated function,
+ convert it to the supplied exception.
+ """
+
+ def convert_exception_wrapper(func) -> Callable[..., None]:
+ def convert(*args, **kwargs) -> None:
+ try:
+ func(*args, **kwargs)
+
+ except DTSError:
+ raise
+
+ except Exception as e:
+ raise exception() from e
+
+ return convert
+
+ return convert_exception_wrapper
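A usage sketch of the decorator above (the class, method and helper are
hypothetical):

    from framework.exception import NodeSetupError, convert_exception

    class NodeSketch:
        @convert_exception(NodeSetupError)
        def setup(self) -> None:
            # _configure_hugepages is a hypothetical helper; any exception it
            # raises is re-raised as NodeSetupError, so DTS exits with
            # ReturnCode.NODE_SETUP_ERR.
            self._configure_hugepages()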
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index a227d8db22..f2339b20bd 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -1,14 +1,14 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2022 PANTHEON.tech s.r.o.
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
+"""
+The package provides modules for managing remote connections to a remote host (node),
+differentiated by OS.
+The package provides a factory function, create_session, that returns the appropriate
+remote connection based on the passed configuration. The differences are in the
+underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""
-from .remote_session import RemoteSession
-from .ssh_session import SSHSession
+# pylama:ignore=W0611
-
-def create_remote_session(
- node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
- return SSHSession(node_config, name, logger)
+from .os import OSSession, create_session
diff --git a/dts/framework/remote_session/factory.py b/dts/framework/remote_session/factory.py
new file mode 100644
index 0000000000..a227d8db22
--- /dev/null
+++ b/dts/framework/remote_session/factory.py
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote_session import RemoteSession
+from .ssh_session import SSHSession
+
+
+def create_remote_session(
+ node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+ return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/os/__init__.py b/dts/framework/remote_session/os/__init__.py
new file mode 100644
index 0000000000..9d2ec7fca2
--- /dev/null
+++ b/dts/framework/remote_session/os/__init__.py
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022 University of New Hampshire
+
+from framework.config import OS, NodeConfiguration
+from framework.logger import DTSLOG
+
+from .linux_session import LinuxSession
+from .os_session import OSSession
+
+
+def create_session(
+ node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> OSSession:
+ match node_config.os:
+ case OS.linux:
+ return LinuxSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/os/linux_session.py b/dts/framework/remote_session/os/linux_session.py
new file mode 100644
index 0000000000..39e80631dd
--- /dev/null
+++ b/dts/framework/remote_session/os/linux_session.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022 University of New Hampshire
+
+from .posix_session import PosixSession
+
+
+class LinuxSession(PosixSession):
+ """
+ The implementation of non-Posix compliant parts of Linux remote sessions.
+ """
diff --git a/dts/framework/remote_session/os/os_session.py b/dts/framework/remote_session/os/os_session.py
new file mode 100644
index 0000000000..2a72082628
--- /dev/null
+++ b/dts/framework/remote_session/os/os_session.py
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022 University of New Hampshire
+
+from abc import ABC
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+from framework.remote_session.factory import create_remote_session
+from framework.remote_session.remote_session import RemoteSession
+
+
+class OSSession(ABC):
+ """
+ The OS classes create a DTS node remote session and implement OS specific
+ behavior. There are a few control methods implemented by the base class; the rest need
+ to be implemented by derived classes.
+ """
+
+ _config: NodeConfiguration
+ name: str
+ logger: DTSLOG
+ remote_session: RemoteSession
+
+ def __init__(
+ self,
+ node_config: NodeConfiguration,
+ name: str,
+ logger: DTSLOG,
+ ) -> None:
+ self._config = node_config
+ self.name = name
+ self.logger = logger
+ self.remote_session = create_remote_session(node_config, name, logger)
+
+ def close(self, force: bool = False) -> None:
+ """
+ Close the remote session.
+ """
+ self.remote_session.close(force)
+
+ def is_alive(self) -> bool:
+ """
+ Check whether the remote session is still responding.
+ """
+ return self.remote_session.is_alive()
diff --git a/dts/framework/remote_session/os/posix_session.py b/dts/framework/remote_session/os/posix_session.py
new file mode 100644
index 0000000000..9622a4ea30
--- /dev/null
+++ b/dts/framework/remote_session/os/posix_session.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022 University of New Hampshire
+
+from .os_session import OSSession
+
+
+class PosixSession(OSSession):
+ """
+ An intermediary class implementing the Posix compliant parts of
+ Linux and other OS remote sessions.
+ """
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 33047d9d0a..4095e02c1b 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -19,6 +19,15 @@ class HistoryRecord:
class RemoteSession(ABC):
+ """
+ The base class for defining which methods must be implemented in order to connect
+ to a remote host (node) and maintain a remote session. The derived classes are
+ supposed to implement/use some underlying transport protocol (e.g. SSH) to
+ implement the methods. On top of that, it provides some basic services common to
+ all derived classes, such as keeping history and logging what's being executed
+ on the remote node.
+ """
+
name: str
hostname: str
ip: str
@@ -58,9 +67,11 @@ def _connect(self) -> None:
"""
Create connection to assigned node.
"""
- pass
def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
+ """
+ Send a command and return the output.
+ """
self.logger.info(f"Sending: {command}")
out = self._send_command(command, timeout)
self.logger.debug(f"Received from {command}: {out}")
@@ -70,7 +81,8 @@ def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
@abstractmethod
def _send_command(self, command: str, timeout: float) -> str:
"""
- Send a command and return the output.
+ Use the underlying protocol to execute the command and return the output
+ of the command.
"""
def _history_add(self, command: str, output: str) -> None:
@@ -79,17 +91,20 @@ def _history_add(self, command: str, output: str) -> None:
)
def close(self, force: bool = False) -> None:
+ """
+ Close the remote session and free all used resources.
+ """
self.logger.logger_exit()
self._close(force)
@abstractmethod
def _close(self, force: bool = False) -> None:
"""
- Close the remote session, freeing all used resources.
+ Execute protocol specific steps needed to close the session properly.
"""
@abstractmethod
def is_alive(self) -> bool:
"""
- Check whether the session is still responding.
+ Check whether the remote session is still responding.
"""
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index 7ec327054d..5816b1ce6b 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -17,7 +17,7 @@
class SSHSession(RemoteSession):
"""
- Module for creating Pexpect SSH sessions to a node.
+    A class for creating Pexpect SSH remote sessions.
"""
session: pxssh.pxssh
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index c5512e5812..13c29c59c8 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,6 +2,10 @@
# Copyright(c) 2022 University of New Hampshire
"""
-This module contains the classes used to model the physical traffic generator,
+This package contains the classes used to model the physical traffic generator,
system under test and any other components that need to be interacted with.
"""
+
+# pylama:ignore=W0611
+
+from .node import Node, SutNode
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
deleted file mode 100644
index 8437975416..0000000000
--- a/dts/framework/testbed_model/node.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
-
-"""
-A node is a generic host that DTS connects to and manages.
-"""
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG, getLogger
-from framework.remote_session import RemoteSession, create_remote_session
-from framework.settings import SETTINGS
-
-
-class Node(object):
- """
- Basic module for node management. This module implements methods that
- manage a node, such as information gathering (of CPU/PCI/NIC) and
- environment setup.
- """
-
- name: str
- main_session: RemoteSession
- logger: DTSLOG
- _config: NodeConfiguration
- _other_sessions: list[RemoteSession]
-
- def __init__(self, node_config: NodeConfiguration):
- self._config = node_config
- self._other_sessions = []
-
- self.name = node_config.name
- self.logger = getLogger(self.name)
- self.logger.info(f"Created node: {self.name}")
- self.main_session = create_remote_session(self._config, self.name, self.logger)
-
- def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str:
- """
- Send commands to node and return string before timeout.
- """
-
- return self.main_session.send_command(cmds, timeout)
-
- def create_session(self, name: str) -> RemoteSession:
- connection = create_remote_session(
- self._config,
- name,
- getLogger(name, node=self.name),
- )
- self._other_sessions.append(connection)
- return connection
-
- def node_exit(self) -> None:
- """
- Recover all resource before node exit
- """
- if self.main_session:
- self.main_session.close()
- for session in self._other_sessions:
- session.close()
- self.logger.logger_exit()
diff --git a/dts/framework/testbed_model/node/__init__.py b/dts/framework/testbed_model/node/__init__.py
new file mode 100644
index 0000000000..a179056f1f
--- /dev/null
+++ b/dts/framework/testbed_model/node/__init__.py
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from .node import Node
+from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/node/node.py b/dts/framework/testbed_model/node/node.py
new file mode 100644
index 0000000000..86654e55ae
--- /dev/null
+++ b/dts/framework/testbed_model/node/node.py
@@ -0,0 +1,120 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022 University of New Hampshire
+
+"""
+A node is a generic host that DTS connects to and manages.
+"""
+
+from framework.config import (
+ BuildTargetConfiguration,
+ ExecutionConfiguration,
+ NodeConfiguration,
+)
+from framework.exception import NodeCleanupError, NodeSetupError, convert_exception
+from framework.logger import DTSLOG, getLogger
+from framework.remote_session import OSSession, create_session
+
+
+class Node(object):
+ """
+ Basic class for node management. This class implements methods that
+ manage a node, such as information gathering (of CPU/PCI/NIC) and
+ environment setup.
+ """
+
+ name: str
+ main_session: OSSession
+ logger: DTSLOG
+ config: NodeConfiguration
+ _other_sessions: list[OSSession]
+
+ def __init__(self, node_config: NodeConfiguration):
+ self.config = node_config
+ self._other_sessions = []
+
+ self.name = node_config.name
+ self.logger = getLogger(self.name)
+ self.logger.info(f"Created node: {self.name}")
+ self.main_session = create_session(self.config, self.name, self.logger)
+
+ @convert_exception(NodeSetupError)
+ def setup_execution(self, execution_config: ExecutionConfiguration) -> None:
+ """
+ Perform the execution setup that will be done for each execution
+ this node is part of.
+ """
+ self._setup_execution(execution_config)
+
+ def _setup_execution(self, execution_config: ExecutionConfiguration) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+        is not decorated so that derived classes don't have to use the decorator.
+ """
+
+ @convert_exception(NodeSetupError)
+ def setup_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
+ """
+ Perform the build target setup that will be done for each build target
+ tested on this node.
+ """
+ self._setup_build_target(build_target_config)
+
+ def _setup_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+        is not decorated so that derived classes don't have to use the decorator.
+ """
+
+ @convert_exception(NodeCleanupError)
+ def teardown_build_target(self) -> None:
+ """
+ Perform the build target cleanup that will be done after each build target
+ tested on this node.
+ """
+ self._cleanup_build_target()
+
+ def _cleanup_build_target(self) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+        is not decorated so that derived classes don't have to use the decorator.
+ """
+
+ @convert_exception(NodeCleanupError)
+ def cleanup_execution(self) -> None:
+ """
+        Perform the execution cleanup that will be done after each execution
+        this node is part of has concluded.
+ """
+ self._cleanup_execution()
+
+ def _cleanup_execution(self) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+        is not decorated so that derived classes don't have to use the decorator.
+ """
+
+ def create_session(self, name: str) -> OSSession:
+ """
+ Create and return a new OSSession tailored to the remote OS.
+ """
+ connection = create_session(
+ self.config,
+ name,
+ getLogger(name, node=self.name),
+ )
+ self._other_sessions.append(connection)
+ return connection
+
+ def close(self) -> None:
+ """
+ Close all connections and free other resources.
+ """
+ if self.main_session:
+ self.main_session.close()
+ for session in self._other_sessions:
+ session.close()
+ self.logger.logger_exit()
diff --git a/dts/framework/testbed_model/node/sut_node.py b/dts/framework/testbed_model/node/sut_node.py
new file mode 100644
index 0000000000..79d54585c9
--- /dev/null
+++ b/dts/framework/testbed_model/node/sut_node.py
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+from .node import Node
+
+
+class SutNode(Node):
+ """
+ A class for managing connections to the System under Test, providing
+ methods that retrieve the necessary information about the node (such as
+ cpu, memory and NIC details) and configuration capabilities.
+ """
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 02/10] dts: add ssh command verification
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 01/10] dts: add node and os abstractions Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 03/10] dts: add dpdk build on sut Juraj Linkeš
` (8 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
This is a basic capability needed to check whether the command execution
was successful or not. If not, raise a RemoteCommandExecutionError. When
a failure is expected, the caller is supposed to catch the exception.
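To illustrate the intended usage (a minimal sketch with hypothetical commands):

    session.send_command("ls /tmp", verify=True)  # raises on non-zero exit
    try:
        session.send_command("false", verify=True)
    except RemoteCommandExecutionError:
        pass  # this failure was expected, carry on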
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 21 +++++++
.../remote_session/remote_session.py | 55 +++++++++++++------
dts/framework/remote_session/ssh_session.py | 11 +++-
3 files changed, 67 insertions(+), 20 deletions(-)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index cac8d84416..b282e48198 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -25,6 +25,7 @@ class ReturnCode(IntEnum):
NO_ERR = 0
GENERIC_ERR = 1
SSH_ERR = 2
+ REMOTE_CMD_EXEC_ERR = 3
NODE_SETUP_ERR = 20
NODE_CLEANUP_ERR = 21
@@ -89,6 +90,26 @@ def __str__(self) -> str:
return f"SSH session with {self.host} has died"
+class RemoteCommandExecutionError(DTSError):
+ """
+ Raised when a command executed on a Node returns a non-zero exit status.
+ """
+
+ command: str
+ command_return_code: int
+ return_code: ClassVar[ReturnCode] = ReturnCode.REMOTE_CMD_EXEC_ERR
+
+ def __init__(self, command: str, command_return_code: int) -> None:
+ self.command = command
+ self.command_return_code = command_return_code
+
+ def __str__(self) -> str:
+ return (
+ f"Command {self.command} returned a non-zero exit code: "
+ f"{self.command_return_code}"
+ )
+
+
class NodeSetupError(DTSError):
"""
Raised when setting up a node.
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 4095e02c1b..fccd80a529 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -7,15 +7,29 @@
from abc import ABC, abstractmethod
from framework.config import NodeConfiguration
+from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
@dataclasses.dataclass(slots=True, frozen=True)
-class HistoryRecord:
+class CommandResult:
+ """
+ The result of remote execution of a command.
+ """
+
name: str
command: str
- output: str | int
+ stdout: str
+ stderr: str
+ return_code: int
+
+ def __str__(self) -> str:
+ return (
+ f"stdout: '{self.stdout}'\n"
+ f"stderr: '{self.stderr}'\n"
+ f"return_code: '{self.return_code}'"
+ )
class RemoteSession(ABC):
@@ -35,7 +49,7 @@ class RemoteSession(ABC):
username: str
password: str
logger: DTSLOG
- history: list[HistoryRecord]
+ history: list[CommandResult]
_node_config: NodeConfiguration
def __init__(
@@ -68,28 +82,33 @@ def _connect(self) -> None:
Create connection to assigned node.
"""
- def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
+ def send_command(
+ self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ ) -> CommandResult:
"""
- Send a command and return the output.
+ Send a command to the connected node and return CommandResult.
+ If verify is True, check the return code of the executed command
+ and raise a RemoteCommandExecutionError if the command failed.
"""
- self.logger.info(f"Sending: {command}")
- out = self._send_command(command, timeout)
- self.logger.debug(f"Received from {command}: {out}")
- self._history_add(command=command, output=out)
- return out
+ self.logger.info(f"Sending: '{command}'")
+ result = self._send_command(command, timeout)
+ if verify and result.return_code:
+ self.logger.debug(
+ f"Command '{command}' failed with return code '{result.return_code}'"
+ )
+ self.logger.debug(f"stdout: '{result.stdout}'")
+ self.logger.debug(f"stderr: '{result.stderr}'")
+ raise RemoteCommandExecutionError(command, result.return_code)
+ self.logger.debug(f"Received from '{command}':\n{result}")
+ self.history.append(result)
+ return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return the output
- of the command.
+ Use the underlying protocol to execute the command and return CommandResult.
"""
- def _history_add(self, command: str, output: str) -> None:
- self.history.append(
- HistoryRecord(name=self.name, command=command, output=output)
- )
-
def close(self, force: bool = False) -> None:
"""
Close the remote session and free all used resources.
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index 5816b1ce6b..fb2f01dbc1 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -12,7 +12,7 @@
from framework.logger import DTSLOG
from framework.utils import GREEN, RED
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
class SSHSession(RemoteSession):
@@ -163,7 +163,14 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
+ output = self._send_command_get_output(command, timeout)
+ return_code = int(self._send_command_get_output("echo $?", timeout))
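+        # in the same shell session, $? expands to the exit status of the previous command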
+
+ # we're capturing only stdout
+ return CommandResult(self.name, command, output, "", return_code)
+
+ def _send_command_get_output(self, command: str, timeout: float) -> str:
try:
self._clean_session()
self._send_line(command)
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 03/10] dts: add dpdk build on sut
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 01/10] dts: add node and os abstractions Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 02/10] dts: add ssh command verification Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-16 13:15 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 04/10] dts: add dpdk execution handling Juraj Linkeš
` (7 subsequent siblings)
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Add the ability to build DPDK and apps, using a configured target.
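On the SUT, the setup boils down to roughly the following command sequence
(illustrative; the meson arguments and directories come from the build target
configuration):

    meson -Denable_kmods=True -Dlibdir=lib --default-library=static <dpdk_dir> <build_dir>
    ninja -C <build_dir>
    cat <build_dir>/VERSION

with CC (and, for i686 builds, CFLAGS and PKG_CONFIG_LIBDIR) passed as
environment variables.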
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 17 +++
dts/framework/remote_session/os/os_session.py | 90 +++++++++++-
.../remote_session/os/posix_session.py | 128 +++++++++++++++++
.../remote_session/remote_session.py | 34 ++++-
dts/framework/remote_session/ssh_session.py | 64 ++++++++-
dts/framework/settings.py | 40 +++++-
dts/framework/testbed_model/node/sut_node.py | 131 ++++++++++++++++++
dts/framework/utils.py | 15 ++
8 files changed, 505 insertions(+), 14 deletions(-)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b282e48198..93d99432ae 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -26,6 +26,7 @@ class ReturnCode(IntEnum):
GENERIC_ERR = 1
SSH_ERR = 2
REMOTE_CMD_EXEC_ERR = 3
+ DPDK_BUILD_ERR = 10
NODE_SETUP_ERR = 20
NODE_CLEANUP_ERR = 21
@@ -110,6 +111,22 @@ def __str__(self) -> str:
)
+class RemoteDirectoryExistsError(DTSError):
+ """
+ Raised when a remote directory to be created already exists.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.REMOTE_CMD_EXEC_ERR
+
+
+class DPDKBuildError(DTSError):
+ """
+ Raised when DPDK build fails for any reason.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.DPDK_BUILD_ERR
+
+
class NodeSetupError(DTSError):
"""
Raised when setting up a node.
diff --git a/dts/framework/remote_session/os/os_session.py b/dts/framework/remote_session/os/os_session.py
index 2a72082628..57e2865282 100644
--- a/dts/framework/remote_session/os/os_session.py
+++ b/dts/framework/remote_session/os/os_session.py
@@ -2,12 +2,15 @@
# Copyright(c) 2022 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
-from abc import ABC
+from abc import ABC, abstractmethod
+from pathlib import PurePath
-from framework.config import NodeConfiguration
+from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.remote_session.factory import create_remote_session
from framework.remote_session.remote_session import RemoteSession
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
class OSSession(ABC):
@@ -44,3 +47,86 @@ def is_alive(self) -> bool:
Check whether the remote session is still responding.
"""
return self.remote_session.is_alive()
+
+ @abstractmethod
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
+ """
+ Try to find DPDK remote dir in remote_dir.
+ """
+
+ @abstractmethod
+ def get_remote_tmp_dir(self) -> PurePath:
+ """
+ Get the path of the temporary directory of the remote OS.
+ """
+
+ @abstractmethod
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for the target architecture. Get
+ information from the node if needed.
+ """
+
+ @abstractmethod
+ def join_remote_path(self, *args: str | PurePath) -> PurePath:
+ """
+ Join path parts using the path separator that fits the remote OS.
+ """
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local storage to destination_file on the remote Node
+ associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated remote Node to destination_file on local storage.
+ """
+
+ @abstractmethod
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ """
+ Remove remote directory, by default remove recursively and forcefully.
+ """
+
+ @abstractmethod
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ """
+        Extract remote tarball in place. If expected_dir is provided, check
+        whether it exists after extracting the archive.
+ """
+
+ @abstractmethod
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: str,
+ remote_dpdk_dir: str | PurePath,
+ target_name: str,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> PurePath:
+ """
+ Build DPDK in the input dir with specified environment variables and meson
+ arguments.
+ Return the directory path where DPDK was built.
+ """
+
+ @abstractmethod
+ def get_dpdk_version(self, version_path: str | PurePath) -> str:
+ """
+ Inspect DPDK version on the remote node from version_path.
+ """
diff --git a/dts/framework/remote_session/os/posix_session.py b/dts/framework/remote_session/os/posix_session.py
index 9622a4ea30..a36b8e8c1a 100644
--- a/dts/framework/remote_session/os/posix_session.py
+++ b/dts/framework/remote_session/os/posix_session.py
@@ -2,6 +2,13 @@
# Copyright(c) 2022 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
+from pathlib import PurePath, PurePosixPath
+
+from framework.config import Architecture
+from framework.exception import DPDKBuildError, RemoteCommandExecutionError
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
+
from .os_session import OSSession
@@ -10,3 +17,124 @@ class PosixSession(OSSession):
An intermediary class implementing the Posix compliant parts of
Linux and other OS remote sessions.
"""
+
+ @staticmethod
+    def combine_short_options(**opts: bool) -> str:
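+        # e.g. combine_short_options(r=True, f=True) -> " -rf"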
+ ret_opts = ""
+ for opt, include in opts.items():
+ if include:
+ ret_opts = f"{ret_opts}{opt}"
+
+ if ret_opts:
+ ret_opts = f" -{ret_opts}"
+
+ return ret_opts
+
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+ remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
+ result = self.remote_session.send_command(f"ls -d {remote_guess} | tail -1")
+ return PurePosixPath(result.stdout)
+
+ def get_remote_tmp_dir(self) -> PurePosixPath:
+ return PurePosixPath("/tmp")
+
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for i686 arch build. Get information
+ from the node if needed.
+ """
+ env_vars = {}
+ if arch == Architecture.i686:
+ # find the pkg-config path and store it in PKG_CONFIG_LIBDIR
+ out = self.remote_session.send_command("find /usr -type d -name pkgconfig")
+ pkg_path = ""
+ res_path = out.stdout.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert pkg_path != "", "i386 pkg-config path not found"
+
+ env_vars["CFLAGS"] = "-m32"
+ env_vars["PKG_CONFIG_LIBDIR"] = pkg_path
+
+ return env_vars
+
+ def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+ return PurePosixPath(*args)
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ self.remote_session.copy_file(source_file, destination_file, source_remote)
+
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ opts = PosixSession.combine_short_options(r=recursive, f=force)
+ self.remote_session.send_command(f"rm{opts} {remote_dir_path}")
+
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ self.remote_session.send_command(
+ f"tar xfm {remote_tarball_path} "
+ f"-C {PurePosixPath(remote_tarball_path).parent}",
+ 60,
+ )
+ if expected_dir:
+ self.remote_session.send_command(f"ls {expected_dir}", verify=True)
+
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: str,
+ remote_dpdk_dir: str | PurePath,
+ target_name: str,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> PurePosixPath:
+ build_dir = self.join_remote_path(remote_dpdk_dir, target_name)
+ try:
+ if rebuild:
+ # reconfigure, then build
+ self.logger.info("Reconfiguring DPDK build.")
+ self.remote_session.send_command(
+ f"meson configure {meson_args} {build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+ else:
+ # fresh build - remove target dir first, then build from scratch
+ self.logger.info("Configuring DPDK build from scratch.")
+ self.remove_remote_dir(build_dir)
+ self.remote_session.send_command(
+ f"meson {meson_args} {remote_dpdk_dir} {build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+
+ self.logger.info("Building DPDK.")
+ self.remote_session.send_command(
+ f"ninja -C {build_dir}", timeout, verify=True, env=env_vars
+ )
+ except RemoteCommandExecutionError as e:
+ raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
+
+ return build_dir
+
+ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+ out = self.remote_session.send_command(
+ f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+ )
+ return out.stdout
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index fccd80a529..f10b1023f8 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -10,6 +10,7 @@
from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
@dataclasses.dataclass(slots=True, frozen=True)
@@ -83,15 +84,22 @@ def _connect(self) -> None:
"""
def send_command(
- self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ self,
+ command: str,
+ timeout: float = SETTINGS.timeout,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
) -> CommandResult:
"""
- Send a command to the connected node and return CommandResult.
+ Send a command to the connected node using optional env vars
+ and return CommandResult.
If verify is True, check the return code of the executed command
and raise a RemoteCommandExecutionError if the command failed.
"""
- self.logger.info(f"Sending: '{command}'")
- result = self._send_command(command, timeout)
+ self.logger.info(
+ f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
+ )
+ result = self._send_command(command, timeout, env)
if verify and result.return_code:
self.logger.debug(
f"Command '{command}' failed with return code '{result.return_code}'"
@@ -104,9 +112,12 @@ def send_command(
return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> CommandResult:
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return CommandResult.
+ Use the underlying protocol to execute the command using optional env vars
+ and return CommandResult.
"""
def close(self, force: bool = False) -> None:
@@ -127,3 +138,14 @@ def is_alive(self) -> bool:
"""
Check whether the remote session is still responding.
"""
+
+ @abstractmethod
+ def copy_file(
+ self, source_file: str, destination_file: str, source_remote: bool = False
+ ) -> None:
+ """
+ Copy source_file from local storage to destination_file on the remote Node
+ associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated Node to destination_file on local storage.
+ """
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index fb2f01dbc1..d4a6714e6b 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -5,12 +5,13 @@
import time
+import pexpect # type: ignore
from pexpect import pxssh # type: ignore
from framework.config import NodeConfiguration
from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
from framework.logger import DTSLOG
-from framework.utils import GREEN, RED
+from framework.utils import GREEN, RED, EnvVarsDict
from .remote_session import CommandResult, RemoteSession
@@ -163,16 +164,22 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> CommandResult:
- output = self._send_command_get_output(command, timeout)
- return_code = int(self._send_command_get_output("echo $?", timeout))
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
+ output = self._send_command_get_output(command, timeout, env)
+ return_code = int(self._send_command_get_output("echo $?", timeout, None))
# we're capturing only stdout
return CommandResult(self.name, command, output, "", return_code)
- def _send_command_get_output(self, command: str, timeout: float) -> str:
+ def _send_command_get_output(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> str:
try:
self._clean_session()
+ if env:
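+                # an EnvVarsDict renders as "KEY=value ..." assignments,
+                # which prefix the command in the spawned shell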
+ command = f"{env} {command}"
self._send_line(command)
except Exception as e:
raise e
@@ -189,3 +196,50 @@ def _close(self, force: bool = False) -> None:
else:
if self.is_alive():
self.session.logout()
+
+ def copy_file(
+ self, source_file: str, destination_file: str, source_remote: bool = False
+ ) -> None:
+ """
+ Send a local file to a remote host.
+ """
+ if source_remote:
+ source_file = f"{self.username}@{self.ip}:{source_file}"
+ else:
+ destination_file = f"{self.username}@{self.ip}:{destination_file}"
+
+ port = ""
+ if self.port:
+ port = f" -P {self.port}"
+
+ # this is not OS agnostic, find a Pythonic (and thus OS agnostic) way
+ # TODO Fabric should handle this
+ command = (
+ f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes"
+ f" {source_file} {destination_file}"
+ )
+
+ self._spawn_scp(command)
+
+ def _spawn_scp(self, scp_cmd: str) -> None:
+ """
+ Transfer a file with SCP
+ """
+ self.logger.info(scp_cmd)
+ p: pexpect.spawn = pexpect.spawn(scp_cmd)
+ time.sleep(0.5)
+ ssh_newkey: str = "Are you sure you want to continue connecting"
+ i: int = p.expect(
+ [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+ )
+ if i == 0: # add once in trust list
+ p.sendline("yes")
+ i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+ if i == 1:
+ time.sleep(0.5)
+ p.sendline(self.password)
+ p.expect("Exit status 0", 60)
+ if i == 4:
+ self.logger.error("SCP TIMEOUT error %d" % i)
+ p.close()
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 800f2c7b7f..e2bf3d2ce4 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -7,6 +7,7 @@
import os
from collections.abc import Callable, Iterable, Sequence
from dataclasses import dataclass
+from pathlib import Path
from typing import Any, TypeVar
_T = TypeVar("_T")
@@ -60,6 +61,9 @@ class _Settings:
output_dir: str
timeout: float
verbose: bool
+ skip_setup: bool
+ dpdk_ref: Path
+ compile_timeout: float
def _get_parser() -> argparse.ArgumentParser:
@@ -88,6 +92,7 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
+ type=float,
required=False,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
@@ -103,6 +108,36 @@ def _get_parser() -> argparse.ArgumentParser:
"to the console.",
)
+ parser.add_argument(
+ "-s",
+ "--skip-setup",
+ action=_env_arg("DTS_SKIP_SETUP"),
+ required=False,
+ help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+ )
+
+ parser.add_argument(
+ "--dpdk-ref",
+ "--git",
+ "--snapshot",
+ action=_env_arg("DTS_DPDK_REF"),
+ default="dpdk.tar.xz",
+ type=Path,
+ required=False,
+ help="[DTS_DPDK_REF] Reference to DPDK source code, "
+ "can be either a path to a tarball or a git refspec. "
+ "In case of a tarball, it will be extracted in the same directory.",
+ )
+
+ parser.add_argument(
+ "--compile-timeout",
+ action=_env_arg("DTS_COMPILE_TIMEOUT"),
+ default=1200,
+ type=float,
+ required=False,
+ help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
+ )
+
return parser
@@ -111,8 +146,11 @@ def _get_settings() -> _Settings:
return _Settings(
config_file_path=parsed_args.config_file,
output_dir=parsed_args.output_dir,
- timeout=float(parsed_args.timeout),
+ timeout=parsed_args.timeout,
verbose=(parsed_args.verbose == "Y"),
+ skip_setup=(parsed_args.skip_setup == "Y"),
+ dpdk_ref=parsed_args.dpdk_ref,
+ compile_timeout=parsed_args.compile_timeout,
)
diff --git a/dts/framework/testbed_model/node/sut_node.py b/dts/framework/testbed_model/node/sut_node.py
index 79d54585c9..53268a7565 100644
--- a/dts/framework/testbed_model/node/sut_node.py
+++ b/dts/framework/testbed_model/node/sut_node.py
@@ -2,6 +2,14 @@
# Copyright(c) 2010-2014 Intel Corporation
# Copyright(c) 2022 PANTHEON.tech s.r.o.
+import os
+import tarfile
+from pathlib import PurePath
+
+from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, skip_setup
+
from .node import Node
@@ -10,4 +18,127 @@ class SutNode(Node):
A class for managing connections to the System under Test, providing
methods that retrieve the necessary information about the node (such as
cpu, memory and NIC details) and configuration capabilities.
+ Another key capability is building DPDK according to given build target.
"""
+
+ _build_target_config: BuildTargetConfiguration | None
+ _env_vars: EnvVarsDict
+ _remote_tmp_dir: PurePath
+ __remote_dpdk_dir: PurePath | None
+ _app_compile_timeout: float
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(SutNode, self).__init__(node_config)
+ self._build_target_config = None
+ self._env_vars = EnvVarsDict()
+ self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
+ self.__remote_dpdk_dir = None
+ self._app_compile_timeout = 90
+
+ @property
+ def _remote_dpdk_dir(self) -> PurePath:
+ if self.__remote_dpdk_dir is None:
+ self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
+ return self.__remote_dpdk_dir
+
+ @_remote_dpdk_dir.setter
+ def _remote_dpdk_dir(self, value: PurePath) -> None:
+ self.__remote_dpdk_dir = value
+
+ def _guess_dpdk_remote_dir(self) -> PurePath:
+ return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
+
+ def _setup_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Setup DPDK on the SUT node.
+ """
+ self._configure_build_target(build_target_config)
+ self._copy_dpdk_tarball()
+ self._build_dpdk()
+
+ def _configure_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Populate common environment variables and set build target config.
+ """
+ self._build_target_config = build_target_config
+ self._env_vars.update(
+ self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
+ )
+ self._env_vars["CC"] = build_target_config.compiler.name
+
+ @skip_setup
+ def _copy_dpdk_tarball(self) -> None:
+ """
+ Copy to and extract DPDK tarball on the SUT node.
+ """
+ # check local path
+ assert SETTINGS.dpdk_ref.exists(), f"Package {SETTINGS.dpdk_ref} doesn't exist."
+
+ self.logger.info("Copying DPDK tarball to SUT.")
+ self.main_session.copy_file(SETTINGS.dpdk_ref, self._remote_tmp_dir)
+
+ # construct remote tarball path
+ # the basename is the same on local host and on remote Node
+ remote_tarball_path = self.main_session.join_remote_path(
+ self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_ref)
+ )
+
+ # construct remote path after extracting
+ with tarfile.open(SETTINGS.dpdk_ref) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+ self._remote_dpdk_dir = self.main_session.join_remote_path(
+ self._remote_tmp_dir, dpdk_top_dir
+ )
+
+ self.logger.info("Extracting DPDK tarball on SUT.")
+ # clean remote path where we're extracting
+ self.main_session.remove_remote_dir(self._remote_dpdk_dir)
+
+ # then extract to remote path
+ self.main_session.extract_remote_tarball(
+ remote_tarball_path, self._remote_dpdk_dir
+ )
+
+ @skip_setup
+ def _build_dpdk(self) -> None:
+ """
+ Build DPDK. Uses the already configured target. Assumes that the tarball has
+ already been copied to and extracted on the SUT node.
+ """
+ meson_args = "-Denable_kmods=True -Dlibdir=lib --default-library=static"
+ self.main_session.build_dpdk(
+ self._env_vars,
+ meson_args,
+ self._remote_dpdk_dir,
+ self._build_target_config.name if self._build_target_config else "build",
+ )
+ self.logger.info(
+ f"DPDK version: {self.main_session.get_dpdk_version(self._remote_dpdk_dir)}"
+ )
+
+ def build_dpdk_app(self, app_name: str) -> PurePath:
+ """
+ Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
+ When app_name is 'all', build all example apps.
+ When app_name is any other string, tries to build that example app.
+ Return the directory path of the built app. If building all apps, return
+ the path to the examples directory (where all apps reside).
+ """
+ meson_args = f"-Dexamples={app_name}"
+ build_dir = self.main_session.build_dpdk(
+ self._env_vars,
+ meson_args,
+ self._remote_dpdk_dir,
+ self._build_target_config.name if self._build_target_config else "build",
+ rebuild=True,
+ timeout=self._app_compile_timeout,
+ )
+ if app_name == "all":
+ return self.main_session.join_remote_path(build_dir, "examples")
+ return self.main_session.join_remote_path(
+ build_dir, "examples", f"dpdk-{app_name}"
+ )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index c28c8f1082..91e58f3218 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -4,6 +4,9 @@
# Copyright(c) 2022 University of New Hampshire
import sys
+from typing import Callable
+
+from framework.settings import SETTINGS
def check_dts_python_version() -> None:
@@ -22,9 +25,21 @@ def check_dts_python_version() -> None:
print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+def skip_setup(func) -> Callable[..., None]:
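+    # a decorator replacing the wrapped function with a no-op
+    # when the --skip-setup (DTS_SKIP_SETUP) option is set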
+ if SETTINGS.skip_setup:
+ return lambda *args: None
+ else:
+ return func
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+
+class EnvVarsDict(dict):
+ def __str__(self) -> str:
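+        # render shell-style assignments, e.g. {"CC": "gcc"} -> "CC=gcc",
+        # so the dict can be prepended to a remote command line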
+ return " ".join(["=".join(item) for item in self.items()])
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 04/10] dts: add dpdk execution handling
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (2 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-16 13:28 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 05/10] dts: add node memory setup Juraj Linkeš
` (6 subsequent siblings)
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Add methods for setting up and shutting down DPDK apps and for
constructing EAL parameters.
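For illustration, the constructed EAL parameters typically combine the core
list, memory channels and a file prefix, so a resulting app invocation might
look like (hypothetical values):

    ./dpdk-helloworld -l 1-2 -n 4 --file-prefix=dpdk_hello

where -l comes from the filtered cpu list, -n from memory_channels and the
file prefix isolates concurrent DPDK runs from each other.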
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 4 +
dts/framework/config/__init__.py | 85 ++++++++-
dts/framework/config/conf_yaml_schema.json | 22 +++
.../remote_session/os/linux_session.py | 15 ++
dts/framework/remote_session/os/os_session.py | 16 +-
.../remote_session/os/posix_session.py | 80 ++++++++
dts/framework/testbed_model/hw/__init__.py | 17 ++
dts/framework/testbed_model/hw/cpu.py | 164 ++++++++++++++++
dts/framework/testbed_model/node/node.py | 36 ++++
dts/framework/testbed_model/node/sut_node.py | 178 +++++++++++++++++-
dts/framework/utils.py | 20 ++
11 files changed, 634 insertions(+), 3 deletions(-)
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 6b0bc5c2bf..976888a88e 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -12,4 +12,8 @@ nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ arch: x86_64
os: linux
+ bypass_core0: true
+ cpus: ""
+ memory_channels: 4
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 1b97dc3ab9..344d697a69 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -11,12 +11,13 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any
+from typing import Any, Iterable
import warlock # type: ignore
import yaml
from framework.settings import SETTINGS
+from framework.utils import expand_range
class StrEnum(Enum):
@@ -60,6 +61,80 @@ class Compiler(StrEnum):
msvc = auto()
+@dataclass(slots=True, frozen=True)
+class CPU:
+ cpu: int
+ core: int
+ socket: int
+ node: int
+
+ def __str__(self) -> str:
+ return str(self.cpu)
+
+
+class CPUList(object):
+ """
+ Convert these options into a list of int cpus
+ cpu_list=[CPU1, CPU2] - a list of CPUs
+ cpu_list=[0,1,2,3] - a list of int indices
+ cpu_list=['0','1','2-3'] - a list of str indices; ranges are supported
+ cpu_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
+
+ The class creates a unified format used across the framework and allows
+ the user to use either a str representation (using str(instance) or directly
+ in f-strings) or a list representation (by accessing instance.cpu_list).
+ Empty cpu_list is allowed.
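+
+    Examples (illustrative):
+        CPUList("0,1,2-3").cpu_list == [0, 1, 2, 3]
+        str(CPUList([0, 2, 3, 4])) == "0,2-4"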
+ """
+
+ _cpu_list: list[int]
+
+ def __init__(self, cpu_list: list[int | str | CPU] | str):
+ self._cpu_list = []
+ if isinstance(cpu_list, str):
+ self._from_str(cpu_list.split(","))
+ else:
+ self._from_str((str(cpu) for cpu in cpu_list))
+
+ # the input cpus may not be sorted
+ self._cpu_list.sort()
+
+ @property
+ def cpu_list(self) -> list[int]:
+ return self._cpu_list
+
+ def _from_str(self, cpu_list: Iterable[str]) -> None:
+ for cpu in cpu_list:
+ self._cpu_list.extend(expand_range(cpu))
+
+ def _get_consecutive_cpus_range(self, cpu_list: list[int]) -> list[str]:
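+        # collapse sorted ids into range strings, e.g. [0, 2, 3, 4] -> ["0", "2-4"]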
+ formatted_core_list = []
+ tmp_cpus_list = list(sorted(cpu_list))
+ segment = tmp_cpus_list[:1]
+ for core_id in tmp_cpus_list[1:]:
+ if core_id - segment[-1] == 1:
+ segment.append(core_id)
+ else:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}"
+ if len(segment) > 1
+ else f"{segment[0]}"
+ )
+ current_core_index = tmp_cpus_list.index(core_id)
+ formatted_core_list.extend(
+ self._get_consecutive_cpus_range(tmp_cpus_list[current_core_index:])
+ )
+ segment.clear()
+ break
+ if len(segment) > 0:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+ )
+ return formatted_core_list
+
+ def __str__(self) -> str:
+ return f'{",".join(self._get_consecutive_cpus_range(self._cpu_list))}'
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -71,7 +146,11 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ arch: Architecture
os: OS
+ bypass_core0: bool
+ cpus: CPUList
+ memory_channels: int
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -80,7 +159,11 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ arch=Architecture(d["arch"]),
os=OS(d["os"]),
+ bypass_core0=d.get("bypass_core0", False),
+ cpus=CPUList(d.get("cpus", "1")),
+ memory_channels=d.get("memory_channels", 1),
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 409ce7ac74..c59d3e30e6 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,12 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64"
+ ]
+ },
"OS": {
"type": "string",
"enum": [
@@ -82,8 +88,23 @@
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
},
+ "arch": {
+ "$ref": "#/definitions/ARCH"
+ },
"os": {
"$ref": "#/definitions/OS"
+ },
+ "bypass_core0": {
+ "type": "boolean",
+ "description": "Indicate that DPDK should omit using the first core."
+ },
+ "cpus": {
+ "type": "string",
+ "description": "Optional comma-separated list of cpus to use, e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all cpus."
+ },
+ "memory_channels": {
+ "type": "integer",
+ "description": "How many memory channels to use. Optional, defaults to 1."
}
},
"additionalProperties": false,
@@ -91,6 +112,7 @@
"name",
"hostname",
"user",
+ "arch",
"os"
]
},
diff --git a/dts/framework/remote_session/os/linux_session.py b/dts/framework/remote_session/os/linux_session.py
index 39e80631dd..21f117b714 100644
--- a/dts/framework/remote_session/os/linux_session.py
+++ b/dts/framework/remote_session/os/linux_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2022 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
+from framework.config import CPU
+
from .posix_session import PosixSession
@@ -9,3 +11,16 @@ class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
"""
+
+ def get_remote_cpus(self, bypass_core0: bool) -> list[CPU]:
+ cpu_info = self.remote_session.send_command(
+ "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#"
+ ).stdout
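+        # each line of lscpu -p output is "cpu,core,socket,node"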
+ cpus = []
+ for cpu_line in cpu_info.splitlines():
+            cpu, core, socket, node = cpu_line.split(",")
+            # the split yields strings, so compare core/socket against "0"
+            if bypass_core0 and core == "0" and socket == "0":
+ self.logger.info("Core0 bypassed.")
+ continue
+ cpus.append(CPU(int(cpu), int(core), int(socket), int(node)))
+ return cpus
diff --git a/dts/framework/remote_session/os/os_session.py b/dts/framework/remote_session/os/os_session.py
index 57e2865282..6f6b6a979e 100644
--- a/dts/framework/remote_session/os/os_session.py
+++ b/dts/framework/remote_session/os/os_session.py
@@ -3,9 +3,10 @@
# Copyright(c) 2022 University of New Hampshire
from abc import ABC, abstractmethod
+from collections.abc import Iterable
from pathlib import PurePath
-from framework.config import Architecture, NodeConfiguration
+from framework.config import CPU, Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.remote_session.factory import create_remote_session
from framework.remote_session.remote_session import RemoteSession
@@ -130,3 +131,16 @@ def get_dpdk_version(self, version_path: str | PurePath) -> str:
"""
Inspect DPDK version on the remote node from version_path.
"""
+
+ @abstractmethod
+ def get_remote_cpus(self, bypass_core0: bool) -> list[CPU]:
+ """
+ Compose a list of CPUs present on the remote node.
+ """
+
+ @abstractmethod
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ """
+ Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
+ dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+ """
diff --git a/dts/framework/remote_session/os/posix_session.py b/dts/framework/remote_session/os/posix_session.py
index a36b8e8c1a..7151263c7a 100644
--- a/dts/framework/remote_session/os/posix_session.py
+++ b/dts/framework/remote_session/os/posix_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2022 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
+import re
+from collections.abc import Iterable
from pathlib import PurePath, PurePosixPath
from framework.config import Architecture
@@ -138,3 +140,81 @@ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
)
return out.stdout
+
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ self.logger.info("Cleaning up DPDK apps.")
+ dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
+ if dpdk_runtime_dirs:
+ # kill and cleanup only if DPDK is running
+ dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs)
+ for dpdk_pid in dpdk_pids:
+ self.remote_session.send_command(f"kill -9 {dpdk_pid}", 20)
+ self._check_dpdk_hugepages(dpdk_runtime_dirs)
+ self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
+
+ def _get_dpdk_runtime_dirs(
+ self, dpdk_prefix_list: Iterable[str]
+ ) -> list[PurePosixPath]:
+ prefix = PurePosixPath("/var", "run", "dpdk")
+ if not dpdk_prefix_list:
+ remote_prefixes = self._list_remote_dirs(prefix)
+ if not remote_prefixes:
+ dpdk_prefix_list = []
+ else:
+ dpdk_prefix_list = remote_prefixes
+
+ return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
+
+ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
+ """
+        Return a list of directories in remote_path.
+ If remote_path doesn't exist, return None.
+ """
+ out = self.remote_session.send_command(
+ f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+ ).stdout
+ if "No such file or directory" in out:
+ return None
+ else:
+ return out.splitlines()
+
+ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+ pids = []
+ pid_regex = r"p(\d+)"
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config")
+ if self._remote_files_exists(dpdk_config_file):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {dpdk_config_file}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ for out_line in out.splitlines():
+ match = re.match(pid_regex, out_line)
+ if match:
+ pids.append(int(match.group(1)))
+ return pids
+
+ def _remote_files_exists(self, remote_path: PurePath) -> bool:
+ result = self.remote_session.send_command(f"test -e {remote_path}")
+ return not result.return_code
+
+ def _check_dpdk_hugepages(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
+ if self._remote_files_exists(hugepage_info):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {hugepage_info}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ self.logger.warning("Some DPDK processes did not free hugepages.")
+ self.logger.warning("*******************************************")
+ self.logger.warning(out)
+ self.logger.warning("*******************************************")
+
+ def _remove_dpdk_runtime_dirs(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ self.remove_remote_dir(dpdk_runtime_dir)
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
new file mode 100644
index 0000000000..7d79a7efd0
--- /dev/null
+++ b/dts/framework/testbed_model/hw/__init__.py
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+from framework.config import CPU, CPUList
+
+from .cpu import CPUAmount, CPUAmountFilter, CPUFilter, CPUListFilter
+
+
+def cpu_filter(
+ core_list: list[CPU], filter_specifier: CPUAmount | CPUList, ascending: bool
+) -> CPUFilter:
+ if isinstance(filter_specifier, CPUList):
+ return CPUListFilter(core_list, filter_specifier, ascending)
+ elif isinstance(filter_specifier, CPUAmount):
+ return CPUAmountFilter(core_list, filter_specifier, ascending)
+ else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py
new file mode 100644
index 0000000000..87e87bcb4e
--- /dev/null
+++ b/dts/framework/testbed_model/hw/cpu.py
@@ -0,0 +1,164 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+import dataclasses
+from abc import ABC, abstractmethod
+from collections.abc import Iterable
+
+from framework.config import CPU, CPUList
+
+
+@dataclasses.dataclass(slots=True, frozen=True)
+class CPUAmount:
+ """
+ Define the amounts of cpus to use. If sockets is not None, socket_amount
+ is ignored.
+ """
+
+ cpus_per_core: int = 1
+ cores_per_socket: int = 2
+ socket_amount: int = 1
+ sockets: list[int] | None = None
+
+
+class CPUFilter(ABC):
+ """
+ Filter according to the input filter specifier. Each filter needs to be
+ implemented in a derived class.
+ This class only implements operations common to all filters, such as sorting
+ the list to be filtered beforehand.
+ """
+
+ _filter_specifier: CPUAmount | CPUList
+ _cpus_to_filter: list[CPU]
+
+ def __init__(
+ self,
+ core_list: list[CPU],
+ filter_specifier: CPUAmount | CPUList,
+ ascending: bool = True,
+ ) -> None:
+ self._filter_specifier = filter_specifier
+
+ # sorting by core is needed in case hyperthreading is enabled
+ self._cpus_to_filter = sorted(
+ core_list, key=lambda x: x.core, reverse=not ascending
+ )
+ self.filter()
+
+ @abstractmethod
+ def filter(self) -> list[CPU]:
+ """
+ Use the input self._filter_specifier to filter self._cpus_to_filter
+ and return the list of filtered CPUs. self._cpus_to_filter is a
+        sorted copy of the original list, so it may be modified.
+ """
+
+
+class CPUAmountFilter(CPUFilter):
+ """
+ Filter the input list of CPUs according to specified rules:
+ Use cores from the specified amount of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_amount.
+ From each of those sockets, use only cores_per_socket of cores.
+    And for each core, use cpus_per_core of cpus. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+
+ _filter_specifier: CPUAmount
+
+ def filter(self) -> list[CPU]:
+ return self._filter_cpus(self._filter_sockets(self._cpus_to_filter))
+
+ def _filter_sockets(self, cpus_to_filter: Iterable[CPU]) -> list[CPU]:
+ allowed_sockets: set[int] = set()
+ socket_amount = self._filter_specifier.socket_amount
+ if self._filter_specifier.sockets:
+ socket_amount = len(self._filter_specifier.sockets)
+ allowed_sockets = set(self._filter_specifier.sockets)
+
+ filtered_cpus = []
+ for cpu in cpus_to_filter:
+ if not self._filter_specifier.sockets:
+ if len(allowed_sockets) < socket_amount:
+ allowed_sockets.add(cpu.socket)
+ if cpu.socket in allowed_sockets:
+ filtered_cpus.append(cpu)
+
+ if len(allowed_sockets) < socket_amount:
+ raise ValueError(
+ f"The amount of sockets from which to use cores "
+ f"({socket_amount}) exceeds the actual amount present "
+ f"on the node ({len(allowed_sockets)})"
+ )
+
+ return filtered_cpus
+
+ def _filter_cpus(self, cpus_to_filter: Iterable[CPU]) -> list[CPU]:
+        # no need to use an ordered dict; since Python 3.7,
+        # dicts preserve insertion order
+ allowed_cpu_per_core_count_map: dict[int, int] = {}
+ filtered_cpus = []
+ for cpu in cpus_to_filter:
+ if cpu.core in allowed_cpu_per_core_count_map:
+ cpu_count = allowed_cpu_per_core_count_map[cpu.core]
+ if self._filter_specifier.cpus_per_core > cpu_count:
+ # only add cpus of the given core
+ allowed_cpu_per_core_count_map[cpu.core] += 1
+ filtered_cpus.append(cpu)
+ else:
+ # this core already provides the requested amount of cpus;
+ # skip its remaining siblings (present when hyperthreading
+ # is enabled) instead of erroring out on them
+ continue
+ elif self._filter_specifier.cores_per_socket > len(
+ allowed_cpu_per_core_count_map
+ ):
+ # only add cpus if we need more
+ allowed_cpu_per_core_count_map[cpu.core] = 1
+ filtered_cpus.append(cpu)
+ else:
+ # cpus are sorted by core, at this point we won't encounter new cores
+ break
+
+ cores_per_socket = len(allowed_cpu_per_core_count_map)
+ if cores_per_socket < self._filter_specifier.cores_per_socket:
+ raise ValueError(
+ f"The amount of cores per socket to use "
+ f"({self._filter_specifier.cores_per_socket}) "
+ f"exceeds the actual amount present ({cores_per_socket})"
+ )
+
+ return filtered_cpus
+
+
+class CPUListFilter(CPUFilter):
+ """
+ Filter the input list of CPUs according to the input list of
+ core indices.
+ An empty CPUList won't filter anything.
+ """
+
+ _filter_specifier: CPUList
+
+ def filter(self) -> list[CPU]:
+ if not len(self._filter_specifier.cpu_list):
+ return self._cpus_to_filter
+
+ filtered_cpus = []
+ for core in self._cpus_to_filter:
+ if core.cpu in self._filter_specifier.cpu_list:
+ filtered_cpus.append(core)
+
+ if len(filtered_cpus) != len(self._filter_specifier.cpu_list):
+ raise ValueError(
+ f"Not all cpus from {self._filter_specifier.cpu_list} were found"
+ f"among {self._cpus_to_filter}"
+ )
+
+ return filtered_cpus
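
For illustration, a minimal sketch of how these filters might be driven,
assuming a hypothetical one-socket node with two cores and hyperthreading
enabled; the CPU constructor arguments (cpu, core, socket, node) match the
order used in linux_session.py:

    from framework.config import CPU, CPUList
    from framework.testbed_model.hw import CPUAmount, CPUAmountFilter, CPUListFilter

    # hypothetical topology: socket 0, cores 0-1, two logical cpus per core
    cpus = [CPU(0, 0, 0, 0), CPU(1, 1, 0, 0), CPU(2, 0, 0, 0), CPU(3, 1, 0, 0)]

    # one cpu from each of two cores on one socket -> logical cpus 0 and 1
    by_amount = CPUAmountFilter(cpus, CPUAmount(1, 2, 1)).filter()

    # explicit logical cpu ids -> logical cpus 0 and 3
    # (assuming CPUList accepts a plain list of cpu ids)
    by_list = CPUListFilter(cpus, CPUList([0, 3])).filter()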
diff --git a/dts/framework/testbed_model/node/node.py b/dts/framework/testbed_model/node/node.py
index 86654e55ae..5ee7023335 100644
--- a/dts/framework/testbed_model/node/node.py
+++ b/dts/framework/testbed_model/node/node.py
@@ -8,13 +8,16 @@
"""
from framework.config import (
+ CPU,
BuildTargetConfiguration,
+ CPUList,
ExecutionConfiguration,
NodeConfiguration,
)
from framework.exception import NodeCleanupError, NodeSetupError, convert_exception
from framework.logger import DTSLOG, getLogger
from framework.remote_session import OSSession, create_session
+from framework.testbed_model.hw import CPUAmount, cpu_filter
class Node(object):
@@ -28,6 +31,7 @@ class Node(object):
main_session: OSSession
logger: DTSLOG
config: NodeConfiguration
+ cpus: list[CPU]
_other_sessions: list[OSSession]
def __init__(self, node_config: NodeConfiguration):
@@ -38,6 +42,7 @@ def __init__(self, node_config: NodeConfiguration):
self.logger = getLogger(self.name)
self.logger.info(f"Created node: {self.name}")
self.main_session = create_session(self.config, self.name, self.logger)
+ self._get_remote_cpus()
@convert_exception(NodeSetupError)
def setup_execution(self, execution_config: ExecutionConfiguration) -> None:
@@ -109,6 +114,37 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
+ def filter_cpus(
+ self,
+ filter_specifier: CPUAmount | CPUList,
+ ascending: bool = True,
+ ) -> list[CPU]:
+ """
+ Filter the logical cpus found on the Node according to specified rules:
+ Use cores from the specified amount of sockets or from the specified
+ socket ids. If sockets is specified, it takes precedence over socket_amount.
+ From each of those sockets, use only cores_per_socket of cores.
+ And for each core, use cpus_per_core of cpus. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+ self.logger.info("Filtering ")
+ return cpu_filter(
+ self.cpus,
+ filter_specifier,
+ ascending,
+ ).filter()
+
+ def _get_remote_cpus(self) -> None:
+ """
+ Scan cpus in the remote OS and store a list of CPUs.
+ """
+ self.logger.info("Getting CPU information.")
+ self.cpus = self.main_session.get_remote_cpus(self.config.bypass_core0)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/node/sut_node.py b/dts/framework/testbed_model/node/sut_node.py
index 53268a7565..ff3be845b4 100644
--- a/dts/framework/testbed_model/node/sut_node.py
+++ b/dts/framework/testbed_model/node/sut_node.py
@@ -4,10 +4,13 @@
import os
import tarfile
+import time
from pathlib import PurePath
-from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.config import CPU, BuildTargetConfiguration, CPUList, NodeConfiguration
+from framework.remote_session import OSSession
from framework.settings import SETTINGS
+from framework.testbed_model.hw import CPUAmount, CPUListFilter
from framework.utils import EnvVarsDict, skip_setup
from .node import Node
@@ -21,19 +24,31 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ cpus: list[CPU]
+ dpdk_prefix_list: list[str]
+ dpdk_prefix_subfix: str
_build_target_config: BuildTargetConfiguration | None
_env_vars: EnvVarsDict
_remote_tmp_dir: PurePath
__remote_dpdk_dir: PurePath | None
_app_compile_timeout: float
+ _dpdk_kill_session: OSSession | None
def __init__(self, node_config: NodeConfiguration):
super(SutNode, self).__init__(node_config)
+ self.dpdk_prefix_list = []
self._build_target_config = None
self._env_vars = EnvVarsDict()
self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
self.__remote_dpdk_dir = None
self._app_compile_timeout = 90
+ self._dpdk_kill_session = None
+
+ # filter the node cpus according to user config
+ self.cpus = CPUListFilter(self.cpus, self.config.cpus).filter()
+ self.dpdk_prefix_subfix = (
+ f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
+ )
@property
def _remote_dpdk_dir(self) -> PurePath:
@@ -142,3 +157,164 @@ def build_dpdk_app(self, app_name: str) -> PurePath:
return self.main_session.join_remote_path(
build_dir, "examples", f"dpdk-{app_name}"
)
+
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """
+ Kill all dpdk applications on the SUT. Cleanup hugepages.
+ """
+ if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._dpdk_kill_session.kill_cleanup_dpdk_apps(self.dpdk_prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._dpdk_kill_session = self.create_session("dpdk_kill")
+ self.dpdk_prefix_list = []
+
+ def create_eal_parameters(
+ self,
+ fixed_prefix: bool = False,
+ core_filter_specifier: CPUAmount | CPUList = CPUAmount(),
+ ascending_cores: bool = True,
+ prefix: str = "",
+ no_pci: bool = False,
+ vdevs: list[str] | None = None,
+ other_eal_param: str = "",
+ ) -> str:
+ """
+ Generate an EAL parameter string.
+ :param fixed_prefix: use a fixed file-prefix or not; when True,
+ a timestamp will not be appended to the file-prefix
+ :param core_filter_specifier: an amount of cpus/cores/sockets to use
+ or a list of cpu ids to use.
+ The default will select one cpu for each of two cores
+ on one socket, in ascending order of core ids.
+ :param ascending_cores: True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the
+ highest id and continue in descending order. This ordering
+ affects which sockets to consider first as well.
+ :param prefix: set file prefix string, eg:
+ prefix='vf';
+ :param no_pci: switch to disable the PCI bus, eg:
+ no_pci=True;
+ :param vdevs: virtual device list, eg:
+ vdevs=['net_ring0', 'net_ring1'];
+ :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments';
+ :return: eal param string, eg:
+ '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+ if DPDK version < 20.11-rc4, eal_str eg:
+ '-c 0xf -w 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+ """
+ if vdevs is None:
+ vdevs = []
+
+ config = {
+ "core_filter_specifier": core_filter_specifier,
+ "ascending_cores": ascending_cores,
+ "prefix": prefix,
+ "no_pci": no_pci,
+ "vdevs": vdevs,
+ "other_eal_param": other_eal_param,
+ }
+
+ eal_parameter_creator = _EalParameter(
+ sut_node=self, fixed_prefix=fixed_prefix, **config
+ )
+ eal_str = eal_parameter_creator.make_eal_param()
+
+ return eal_str
+
+
+class _EalParameter(object):
+ def __init__(
+ self,
+ sut_node: SutNode,
+ fixed_prefix: bool,
+ core_filter_specifier: CPUAmount | CPUList,
+ ascending_cores: bool,
+ prefix: str,
+ no_pci: bool,
+ vdevs: list[str],
+ other_eal_param: str,
+ ):
+ """
+ Generate an EAL parameter string.
+ :param sut_node: SUT Node;
+ :param fixed_prefix: use a fixed file-prefix or not; when True,
+ a timestamp will not be appended to the file-prefix
+ :param core_filter_specifier: an amount of cpus/cores/sockets to use
+ or a list of cpu ids to use.
+ :param ascending_cores: True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the
+ highest id and continue in descending order. This ordering
+ affects which sockets to consider first as well.
+ :param prefix: set file prefix string, eg:
+ prefix='vf';
+ :param no_pci: switch to disable the PCI bus, eg:
+ no_pci=True;
+ :param vdevs: virtual device list, eg:
+ vdevs=['net_ring0', 'net_ring1'];
+ :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments';
+ """
+ self.os = sut_node.config.os
+ self.fixed_prefix = fixed_prefix
+ self.sut_node = sut_node
+ self.core_filter_specifier = core_filter_specifier
+ self.ascending_cores = ascending_cores
+ self.prefix = prefix
+ self.no_pci = no_pci
+ self.vdevs = vdevs
+ self.other_eal_param = other_eal_param
+
+ def _make_lcores_param(self) -> str:
+ filtered_cpus = self.sut_node.filter_cpus(
+ self.core_filter_specifier, self.ascending_cores
+ )
+ return f"-l {CPUList(filtered_cpus)}"
+
+ def _make_memory_channels(self) -> str:
+ param_template = "-n {}"
+ return param_template.format(self.sut_node.config.memory_channels)
+
+ def _make_no_pci_param(self) -> str:
+ if self.no_pci is True:
+ return "--no-pci"
+ else:
+ return ""
+
+ def _make_prefix_param(self) -> str:
+ if self.prefix == "":
+ fixed_file_prefix = f"dpdk_{self.sut_node.dpdk_prefix_subfix}"
+ else:
+ fixed_file_prefix = self.prefix
+ if not self.fixed_prefix:
+ fixed_file_prefix = (
+ f"{fixed_file_prefix}_{self.sut_node.dpdk_prefix_subfix}"
+ )
+ fixed_file_prefix = self._do_os_handle_with_prefix_param(fixed_file_prefix)
+ return fixed_file_prefix
+
+ def _make_vdevs_param(self) -> str:
+ if len(self.vdevs) == 0:
+ return ""
+ else:
+ return " ".join(f"--vdev {vdev}" for vdev in self.vdevs)
+
+ def _do_os_handle_with_prefix_param(self, file_prefix: str) -> str:
+ self.sut_node.dpdk_prefix_list.append(file_prefix)
+ return f"--file-prefix={file_prefix}"
+
+ def make_eal_param(self) -> str:
+ _eal_str = " ".join(
+ [
+ self._make_lcores_param(),
+ self._make_memory_channels(),
+ self._make_prefix_param(),
+ self._make_no_pci_param(),
+ self._make_vdevs_param(),
+ # append user defined eal parameters
+ self.other_eal_param,
+ ]
+ )
+ return _eal_str
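
For illustration, a rough sketch of a call and the kind of string it yields,
assuming a node with logical cpus 0 and 1 selected and four memory channels
configured; the timestamp suffix below is hypothetical and CPUList is assumed
to render as a comma-separated list of cpu ids:

    eal = sut_node.create_eal_parameters(
        core_filter_specifier=CPUAmount(1, 2, 1),
        prefix="vf",
        vdevs=["net_ring0"],
    )
    # -> roughly "-l 0,1 -n 4 --file-prefix=vf_1234_20221114165400 --vdev net_ring0"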
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 91e58f3218..3c2f0adff9 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,6 +32,26 @@ def skip_setup(func) -> Callable[..., None]:
return func
+def expand_range(range_str: str) -> list[int]:
+ """
+ Process range string into a list of integers. There are two possible formats:
+ n - a single integer
+ n-m - a range of integers
+
+ The returned range includes both n and m. Empty string returns an empty list.
+ """
+ expanded_range: list[int] = []
+ if range_str:
+ range_boundaries = range_str.split("-")
+ # will throw an exception when items in range_boundaries can't be converted,
+ # serving as type check
+ expanded_range.extend(
+ range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+ )
+
+ return expanded_range
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 05/10] dts: add node memory setup
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (3 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 04/10] dts: add dpdk execution handling Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-16 13:47 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 06/10] dts: add test results module Juraj Linkeš
` (5 subsequent siblings)
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Setup hugepages on nodes. This is useful not only on SUT nodes, but
also on TG nodes which use TGs that utilize hugepages.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 1 +
dts/framework/remote_session/arch/__init__.py | 20 +++++
dts/framework/remote_session/arch/arch.py | 57 +++++++++++++
.../remote_session/os/linux_session.py | 85 +++++++++++++++++++
dts/framework/remote_session/os/os_session.py | 10 +++
dts/framework/testbed_model/node/node.py | 15 +++-
6 files changed, 187 insertions(+), 1 deletion(-)
create mode 100644 dts/framework/remote_session/arch/__init__.py
create mode 100644 dts/framework/remote_session/arch/arch.py
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index f2339b20bd..f0deeadac6 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -11,4 +11,5 @@
# pylama:ignore=W0611
+from .arch import Arch, create_arch
from .os import OSSession, create_session
diff --git a/dts/framework/remote_session/arch/__init__.py b/dts/framework/remote_session/arch/__init__.py
new file mode 100644
index 0000000000..d78ad42ac5
--- /dev/null
+++ b/dts/framework/remote_session/arch/__init__.py
@@ -0,0 +1,20 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+from framework.config import Architecture, NodeConfiguration
+
+from .arch import PPC64, Arch, Arm64, i686, x86_32, x86_64
+
+
+def create_arch(node_config: NodeConfiguration) -> Arch:
+ match node_config.arch:
+ case Architecture.x86_64:
+ return x86_64()
+ case Architecture.x86_32:
+ return x86_32()
+ case Architecture.i686:
+ return i686()
+ case Architecture.ppc64le:
+ return PPC64()
+ case Architecture.arm64:
+ return Arm64()
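
A short sketch of how the factory is meant to be consumed (node_config
comes from the parsed conf.yaml):

    arch = create_arch(node_config)
    # kilobytes of memory to reserve for hugepages, e.g. 8GB on x86_64
    hugepage_memory = arch.default_hugepage_memory
    # whether hugepages must be configured on the first NUMA node only
    force_first_numa = arch.hugepage_force_first_numa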
diff --git a/dts/framework/remote_session/arch/arch.py b/dts/framework/remote_session/arch/arch.py
new file mode 100644
index 0000000000..05c7602def
--- /dev/null
+++ b/dts/framework/remote_session/arch/arch.py
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+
+class Arch(object):
+ """
+ Stores architecture-specific information.
+ """
+
+ @property
+ def default_hugepage_memory(self) -> int:
+ """
+ Return the default amount of memory allocated for hugepages, in kilobytes,
+ that DPDK will use. The default is an amount equal to 256 2MB hugepages
+ (512MB of memory).
+ """
+ return 256 * 2048
+
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ """
+ An architecture may need to force configuration of hugepages to first socket.
+ """
+ return False
+
+
+class x86_64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 4096 * 2048
+
+
+class x86_32(Arch):
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ return True
+
+
+class i686(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 512 * 2048
+
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ return True
+
+
+class PPC64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 512 * 2048
+
+
+class Arm64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 2048 * 2048
diff --git a/dts/framework/remote_session/os/linux_session.py b/dts/framework/remote_session/os/linux_session.py
index 21f117b714..fad33d7613 100644
--- a/dts/framework/remote_session/os/linux_session.py
+++ b/dts/framework/remote_session/os/linux_session.py
@@ -3,6 +3,8 @@
# Copyright(c) 2022 University of New Hampshire
from framework.config import CPU
+from framework.exception import RemoteCommandExecutionError
+from framework.utils import expand_range
from .posix_session import PosixSession
@@ -24,3 +26,86 @@ def get_remote_cpus(self, bypass_core0: bool) -> list[CPU]:
continue
cpus.append(CPU(int(cpu), int(core), int(socket), int(node)))
return cpus
+
+ def setup_hugepages(
+ self, hugepage_amount: int = -1, force_first_numa: bool = False
+ ) -> None:
+ self.logger.info("Getting Hugepage information.")
+ hugepage_size = self._get_hugepage_size()
+ hugepages_total = self._get_hugepages_total()
+ self._numa_nodes = self._get_numa_nodes()
+
+ target_hugepages_total = int(hugepage_amount / hugepage_size)
+ if hugepage_amount % hugepage_size:
+ target_hugepages_total += 1
+ if force_first_numa or hugepages_total != target_hugepages_total:
+ # when forcing numa, we need to clear existing hugepages regardless
+ # of size, so they can be moved to the first numa node
+ self._configure_huge_pages(
+ target_hugepages_total, hugepage_size, force_first_numa
+ )
+ else:
+ self.logger.info("Hugepages already configured.")
+ self._mount_huge_pages()
+
+ def _get_hugepage_size(self) -> int:
+ hugepage_size = self.remote_session.send_command(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
+ ).stdout
+ return int(hugepage_size)
+
+ def _get_hugepages_total(self) -> int:
+ hugepages_total = self.remote_session.send_command(
+ "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ ).stdout
+ return int(hugepages_total)
+
+ def _get_numa_nodes(self) -> list[int]:
+ try:
+ numa_range = self.remote_session.send_command(
+ "cat /sys/devices/system/node/online", verify=True
+ ).stdout
+ numa_range = expand_range(numa_range)
+ except RemoteCommandExecutionError:
+ # the file doesn't exist, meaning the node doesn't support numa
+ numa_range = []
+ return numa_range
+
+ def _mount_huge_pages(self) -> None:
+ self.logger.info("Re-mounting Hugepages.")
+ hugepage_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts"
+ self.remote_session.send_command(f"umount $({hugepage_fs_cmd})")
+ result = self.remote_session.send_command(hugepage_fs_cmd)
+ if result.stdout == "":
+ remote_mount_path = "/mnt/huge"
+ self.remote_session.send_command(f"mkdir -p {remote_mount_path}")
+ self.remote_session.send_command(
+ f"mount -t hugetlbfs nodev {remote_mount_path}"
+ )
+
+ def _supports_numa(self) -> bool:
+ # treat the system as numa-capable only if there is more than one numa
+ # node; a single-node system may technically support numa, but there's
+ # no reason to do any numa-specific configuration for it
+ return len(self._numa_nodes) > 1
+
+ def _configure_huge_pages(
+ self, amount: int, size: int, force_first_numa: bool
+ ) -> None:
+ self.logger.info("Configuring Hugepages.")
+ hugepage_config_path = (
+ f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+ )
+ if force_first_numa and self._supports_numa():
+ # clear non-numa hugepages
+ self.remote_session.send_command(
+ f"echo 0 | sudo tee {hugepage_config_path}"
+ )
+ hugepage_config_path = (
+ f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
+ f"/hugepages-{size}kB/nr_hugepages"
+ )
+
+ self.remote_session.send_command(
+ f"echo {amount} | sudo tee {hugepage_config_path}"
+ )
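
To make the unit handling concrete: hugepage_amount and the Hugepagesize
read from /proc/meminfo are both in kilobytes, so for the x86_64 default
the target page count works out as:

    hugepage_amount = 4096 * 2048                  # x86_64 default, in kB (8GB)
    hugepage_size = 2048                           # typical x86 Hugepagesize, in kB
    target = int(hugepage_amount / hugepage_size)  # -> 4096 hugepages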
diff --git a/dts/framework/remote_session/os/os_session.py b/dts/framework/remote_session/os/os_session.py
index 6f6b6a979e..f84f3ce63c 100644
--- a/dts/framework/remote_session/os/os_session.py
+++ b/dts/framework/remote_session/os/os_session.py
@@ -144,3 +144,13 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
"""
+
+ @abstractmethod
+ def setup_hugepages(
+ self, hugepage_amount: int = -1, force_first_numa: bool = False
+ ) -> None:
+ """
+ Get the node's Hugepage Size, configure the specified amount of hugepages
+ if needed and mount the hugepages if needed.
+ If force_first_numa is True, configure hugepages just on the first socket.
+ """
diff --git a/dts/framework/testbed_model/node/node.py b/dts/framework/testbed_model/node/node.py
index 5ee7023335..96a1724f4c 100644
--- a/dts/framework/testbed_model/node/node.py
+++ b/dts/framework/testbed_model/node/node.py
@@ -16,7 +16,7 @@
)
from framework.exception import NodeCleanupError, NodeSetupError, convert_exception
from framework.logger import DTSLOG, getLogger
-from framework.remote_session import OSSession, create_session
+from framework.remote_session import Arch, OSSession, create_arch, create_session
from framework.testbed_model.hw import CPUAmount, cpu_filter
@@ -33,6 +33,7 @@ class Node(object):
config: NodeConfiguration
cpus: list[CPU]
_other_sessions: list[OSSession]
+ _arch: Arch
def __init__(self, node_config: NodeConfiguration):
self.config = node_config
@@ -42,6 +43,7 @@ def __init__(self, node_config: NodeConfiguration):
self.logger = getLogger(self.name)
self.logger.info(f"Created node: {self.name}")
self.main_session = create_session(self.config, self.name, self.logger)
+ self._arch = create_arch(self.config)
self._get_remote_cpus()
@convert_exception(NodeSetupError)
@@ -50,6 +52,7 @@ def setup_execution(self, execution_config: ExecutionConfiguration) -> None:
Perform the execution setup that will be done for each execution
this node is part of.
"""
+ self._setup_hugepages()
self._setup_execution(execution_config)
def _setup_execution(self, execution_config: ExecutionConfiguration) -> None:
@@ -145,6 +148,16 @@ def _get_remote_cpus(self) -> None:
self.logger.info("Getting CPU information.")
self.cpus = self.main_session.get_remote_cpus(self.config.bypass_core0)
+ def _setup_hugepages(self) -> None:
+ """
+ Setup hugepages on the Node. Different architectures can supply different
+ amounts of memory for hugepages and numa-based hugepage allocation may need
+ to be considered.
+ """
+ self.main_session.setup_hugepages(
+ self._arch.default_hugepage_memory, self._arch.hugepage_force_first_numa
+ )
+
def close(self) -> None:
"""
Close all connections and free other resources.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 06/10] dts: add test results module
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (4 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 05/10] dts: add node memory setup Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 07/10] dts: add simple stats report Juraj Linkeš
` (4 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The module keeps track of test case results along with miscellaneous
information, such as on which SUT's did a failure occur and during the
testing of which build target.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 5 +
dts/framework/test_result.py | 217 +++++++++++++++++++++++++++++++++++
2 files changed, 222 insertions(+)
create mode 100644 dts/framework/test_result.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 262c392d8e..d606f8de2e 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -14,9 +14,11 @@
from .exception import DTSError, ReturnCode
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
+from .test_result import Result
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("dts")
+result: Result = Result()
def run_all() -> None:
@@ -26,6 +28,7 @@ def run_all() -> None:
"""
return_code = ReturnCode.NO_ERR
global dts_logger
+ global result
# check the python version of the server that run dts
check_dts_python_version()
@@ -45,6 +48,7 @@ def run_all() -> None:
# for all Execution sections
for execution in CONFIGURATION.executions:
sut_node = init_nodes(execution, nodes)
+ result.sut = sut_node
run_execution(sut_node, execution)
except DTSError as e:
@@ -104,6 +108,7 @@ def run_build_target(
Run the given build target.
"""
dts_logger.info(f"Running target '{build_target.name}'.")
+ result.target = build_target
try:
sut_node.setup_build_target(build_target)
run_suite(sut_node, build_target, execution)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
new file mode 100644
index 0000000000..a12517b9bc
--- /dev/null
+++ b/dts/framework/test_result.py
@@ -0,0 +1,217 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+"""
+Generic result container and reporters
+"""
+
+
+class Result(object):
+ """
+ Generic result container. Useful to store/retrieve results during
+ a DTS execution.
+
+ It manages and hides a complex internal structure like the one shown below,
+ which is presented to the user through a property-based interface.
+
+ internals = [
+ 'sut1', [
+ 'kdriver',
+ 'firmware',
+ 'pkg',
+ 'driver',
+ 'dpdk_version',
+ 'target1', 'nic1', [
+ 'suite1', [
+ 'case1', ['PASSED', ''],
+ 'case2', ['PASSED', ''],
+ ],
+ ],
+ 'target2', 'nic1', [
+ 'suite2', [
+ 'case3', ['PASSED', ''],
+ 'case4', ['FAILED', 'message'],
+ ],
+ 'suite3', [
+ 'case5', ['BLOCKED', 'message'],
+ ],
+ ]
+ ]
+ ]
+
+ """
+
+ def __init__(self):
+ self.__sut = 0
+ self.__target = 0
+ self.__test_suite = 0
+ self.__test_case = 0
+ self.__test_result = None
+ self.__message = None
+ self.__internals = []
+ self.__failed_suts = {}
+ self.__failed_targets = {}
+
+ def __set_sut(self, sut):
+ if sut not in self.__internals:
+ self.__internals.append(sut)
+ self.__internals.append([])
+ self.__sut = self.__internals.index(sut)
+
+ def __get_sut(self):
+ return self.__internals[self.__sut]
+
+ def current_dpdk_version(self, sut):
+ """
+ Returns the dpdk version for a given SUT
+ """
+ try:
+ sut_idx = self.__internals.index(sut)
+ return self.__internals[sut_idx + 1][4]
+ except (ValueError, IndexError):
+ return ""
+
+ def __set_dpdk_version(self, dpdk_version):
+ if dpdk_version not in self.internals[self.__sut + 1]:
+ dpdk_current = self.__get_dpdk_version()
+ if dpdk_current:
+ if dpdk_version not in dpdk_current:
+ self.internals[self.__sut + 1][4] = (
+ dpdk_current + "/" + dpdk_version
+ )
+ else:
+ self.internals[self.__sut + 1].append(dpdk_version)
+
+ def __get_dpdk_version(self):
+ try:
+ return self.internals[self.__sut + 1][4]
+ except IndexError:
+ return ""
+
+ def __current_targets(self):
+ return self.internals[self.__sut + 1]
+
+ def __set_target(self, target):
+ targets = self.__current_targets()
+ if target not in targets:
+ targets.append(target)
+ targets.append("_nic_")
+ targets.append([])
+ self.__target = targets.index(target)
+
+ def __get_target(self):
+ return self.__current_targets()[self.__target]
+
+ def __current_suites(self):
+ return self.__current_targets()[self.__target + 2]
+
+ def __set_test_suite(self, test_suite):
+ suites = self.__current_suites()
+ if test_suite not in suites:
+ suites.append(test_suite)
+ suites.append([])
+ self.__test_suite = suites.index(test_suite)
+
+ def __get_test_suite(self):
+ return self.__current_suites()[self.__test_suite]
+
+ def __current_cases(self):
+ return self.__current_suites()[self.__test_suite + 1]
+
+ def __set_test_case(self, test_case):
+ cases = self.__current_cases()
+ cases.append(test_case)
+ cases.append([])
+ self.__test_case = cases.index(test_case)
+
+ def __get_test_case(self):
+ return self.__current_cases()[self.__test_case]
+
+ def __get_internals(self):
+ return self.__internals
+
+ def __current_result(self):
+ return self.__current_cases()[self.__test_case + 1]
+
+ def __set_test_case_result(self, result, message):
+ test_case = self.__current_result()
+ test_case.append(result)
+ test_case.append(message)
+ self.__test_result = result
+ self.__message = message
+
+ def copy_suite(self, suite_result):
+ self.__current_suites()[self.__test_suite + 1] = suite_result.__current_cases()
+
+ def test_case_passed(self):
+ """
+ Set last test case added as PASSED
+ """
+ self.__set_test_case_result(result="PASSED", message="")
+
+ def test_case_failed(self, message):
+ """
+ Set last test case added as FAILED
+ """
+ self.__set_test_case_result(result="FAILED", message=message)
+
+ def test_case_blocked(self, message):
+ """
+ Set last test case added as BLOCKED
+ """
+ self.__set_test_case_result(result="BLOCKED", message=message)
+
+ def all_suts(self):
+ """
+ Returns all the SUTs it's aware of.
+ """
+ return self.__internals[::2]
+
+ def all_targets(self, sut):
+ """
+ Returns the targets for a given SUT
+ """
+ try:
+ sut_idx = self.__internals.index(sut)
+ except ValueError:
+ return None
+ return self.__internals[sut_idx + 1][5::3]
+
+ def add_failed_sut(self, sut, msg):
+ """
+ Sets the given SUT as failing due to msg
+ """
+ self.__failed_suts[sut] = msg
+
+ def remove_failed_sut(self, sut):
+ """
+ Remove the given SUT from failed SUTs collection
+ """
+ if sut in self.__failed_suts:
+ self.__failed_suts.pop(sut)
+
+ def add_failed_target(self, sut, target, msg):
+ """
+ Sets the given SUT, target as failing due to msg
+ """
+ self.__failed_targets[sut + target] = msg
+
+ def remove_failed_target(self, sut, target):
+ """
+ Remove the given SUT, target from failed targets collection
+ """
+ key_word = sut + target
+ if key_word in self.__failed_targets:
+ self.__failed_targets.pop(key_word)
+
+ """
+ Attributes defined as properties to hide the implementation from the
+ presented interface.
+ """
+ sut = property(__get_sut, __set_sut)
+ dpdk_version = property(__get_dpdk_version, __set_dpdk_version)
+ target = property(__get_target, __set_target)
+ test_suite = property(__get_test_suite, __set_test_suite)
+ test_case = property(__get_test_case, __set_test_case)
+ internals = property(__get_internals)
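
Tracing the setters by hand, a minimal interaction with the container looks
like this (all values hypothetical):

    result = Result()
    result.sut = "sut1"
    result.target = "target1"
    result.test_suite = "suite1"
    result.test_case = "case1"
    result.test_case_passed()
    result.internals
    # -> ['sut1', ['target1', '_nic_', ['suite1', ['case1', ['PASSED', '']]]]]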
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 07/10] dts: add simple stats report
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (5 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 06/10] dts: add test results module Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-16 13:57 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 08/10] dts: add testsuite class Juraj Linkeš
` (3 subsequent siblings)
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Provide a summary of testcase passed/failed/blocked counts.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 3 ++
dts/framework/stats_reporter.py | 65 +++++++++++++++++++++++++++++++++
2 files changed, 68 insertions(+)
create mode 100644 dts/framework/stats_reporter.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d606f8de2e..a7c243a5c3 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -14,11 +14,13 @@
from .exception import DTSError, ReturnCode
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
+from .stats_reporter import TestStats
from .test_result import Result
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("dts")
result: Result = Result()
+test_stats: TestStats = TestStats(SETTINGS.output_dir + "/statistics.txt")
def run_all() -> None:
@@ -29,6 +31,7 @@ def run_all() -> None:
return_code = ReturnCode.NO_ERR
global dts_logger
global result
+ global test_stats
# check the python version of the server that run dts
check_dts_python_version()
diff --git a/dts/framework/stats_reporter.py b/dts/framework/stats_reporter.py
new file mode 100644
index 0000000000..a2735d0a1d
--- /dev/null
+++ b/dts/framework/stats_reporter.py
@@ -0,0 +1,65 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+"""
+Simple text file statistics generator
+"""
+
+
+class TestStats(object):
+ """
+ Generates a small statistics file containing the number of passing,
+ failing and blocked tests. It makes use of a Result instance as input.
+ """
+
+ def __init__(self, filename):
+ self.filename = filename
+
+ def __add_stat(self, test_result):
+ if test_result is not None:
+ if test_result[0] == "PASSED":
+ self.passed += 1
+ if test_result[0] == "FAILED":
+ self.failed += 1
+ if test_result[0] == "BLOCKED":
+ self.blocked += 1
+ self.total += 1
+
+ def __count_stats(self):
+ for sut in self.result.all_suts():
+ for target in self.result.all_targets(sut):
+ for suite in self.result.all_test_suites(sut, target):
+ for case in self.result.all_test_cases(sut, target, suite):
+ test_result = self.result.result_for(sut, target, suite, case)
+ if len(test_result):
+ self.__add_stat(test_result)
+
+ def __write_stats(self):
+ sut_nodes = self.result.all_suts()
+ if len(sut_nodes) == 1:
+ self.stats_file.write(
+ f"dpdk_version = {self.result.current_dpdk_version(sut_nodes[0])}\n"
+ )
+ else:
+ for sut in sut_nodes:
+ dpdk_version = self.result.current_dpdk_version(sut)
+ self.stats_file.write(f"{sut}.dpdk_version = {dpdk_version}\n")
+ self.__count_stats()
+ self.stats_file.write(f"Passed = {self.passed}\n")
+ self.stats_file.write(f"Failed = {self.failed}\n")
+ self.stats_file.write(f"Blocked = {self.blocked}\n")
+ rate = 0
+ if self.total > 0:
+ rate = self.passed * 100.0 / self.total
+ self.stats_file.write(f"Pass rate = {rate:.1f}\n")
+
+ def save(self, result):
+ self.passed = 0
+ self.failed = 0
+ self.blocked = 0
+ self.total = 0
+ self.stats_file = open(self.filename, "w+")
+ self.result = result
+ self.__write_stats()
+ self.stats_file.close()
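
With a single SUT and two passing test cases, the generated statistics.txt
would look roughly like this (the dpdk_version value is hypothetical):

    dpdk_version = 22.11.0
    Passed = 2
    Failed = 0
    Blocked = 0
    Pass rate = 100.0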
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 08/10] dts: add testsuite class
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (6 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 07/10] dts: add simple stats report Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-16 15:15 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 09/10] dts: add hello world testplan Juraj Linkeš
` (2 subsequent siblings)
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
This is the base class that all test suites inherit from. The base class
implements methods common to all test suites. The derived test suites
implement tests and any particular setup needed for the suite or tests.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 4 +
dts/framework/config/__init__.py | 33 ++-
dts/framework/config/conf_yaml_schema.json | 49 ++++
dts/framework/dts.py | 29 +++
dts/framework/exception.py | 65 ++++++
dts/framework/settings.py | 25 +++
dts/framework/test_case.py | 246 +++++++++++++++++++++
7 files changed, 450 insertions(+), 1 deletion(-)
create mode 100644 dts/framework/test_case.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 976888a88e..0b0f2c59b0 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -7,6 +7,10 @@ executions:
os: linux
cpu: native
compiler: gcc
+ perf: false
+ func: true
+ test_suites:
+ - hello_world
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 344d697a69..8874b10030 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -11,7 +11,7 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any, Iterable
+from typing import Any, Iterable, TypedDict
import warlock # type: ignore
import yaml
@@ -186,9 +186,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
)
+class TestSuiteConfigDict(TypedDict):
+ suite: str
+ cases: list[str]
+
+
+@dataclass(slots=True, frozen=True)
+class TestSuiteConfig:
+ test_suite: str
+ test_cases: list[str]
+
+ @staticmethod
+ def from_dict(
+ entry: str | TestSuiteConfigDict,
+ ) -> "TestSuiteConfig":
+ if isinstance(entry, str):
+ return TestSuiteConfig(test_suite=entry, test_cases=[])
+ elif isinstance(entry, dict):
+ return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
+ else:
+ raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
+ perf: bool
+ func: bool
+ test_suites: list[TestSuiteConfig]
system_under_test: NodeConfiguration
@staticmethod
@@ -196,11 +221,17 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
+ test_suites: list[TestSuiteConfig] = list(
+ map(TestSuiteConfig.from_dict, d["test_suites"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
build_targets=build_targets,
+ perf=d["perf"],
+ func=d["func"],
+ test_suites=test_suites,
system_under_test=node_map[sut_name],
)
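
Both entry forms accepted in conf.yaml map to the same dataclass:

    TestSuiteConfig.from_dict("hello_world")
    # -> TestSuiteConfig(test_suite='hello_world', test_cases=[])

    TestSuiteConfig.from_dict({"suite": "hello_world", "cases": ["test_hello_world_single_core"]})
    # -> TestSuiteConfig(test_suite='hello_world',
    #                    test_cases=['test_hello_world_single_core'])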
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index c59d3e30e6..e37ced65fe 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -63,6 +63,31 @@
}
},
"additionalProperties": false
+ },
+ "test_suite": {
+ "type": "string",
+ "enum": [
+ "hello_world"
+ ]
+ },
+ "test_target": {
+ "type": "object",
+ "properties": {
+ "suite": {
+ "$ref": "#/definitions/test_suite"
+ },
+ "cases": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minimum": 1
+ }
+ },
+ "required": [
+ "suite"
+ ],
+ "additionalProperties": false
}
},
"type": "object",
@@ -130,6 +155,27 @@
},
"minimum": 1
},
+ "perf": {
+ "type": "boolean",
+ "description": "Enable performance testing"
+ },
+ "func": {
+ "type": "boolean",
+ "description": "Enable functional testing"
+ },
+ "test_suites": {
+ "type": "array",
+ "items": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/test_suite"
+ },
+ {
+ "$ref": "#/definitions/test_target"
+ }
+ ]
+ }
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -137,6 +183,9 @@
"additionalProperties": false,
"required": [
"build_targets",
+ "perf",
+ "func",
+ "test_suites",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index a7c243a5c3..ba3f4b4168 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -15,6 +15,7 @@
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .stats_reporter import TestStats
+from .test_case import TestCase
from .test_result import Result
from .utils import check_dts_python_version
@@ -129,6 +130,34 @@ def run_suite(
Use the given build_target to run the test suite with possibly only a subset
of tests. If no subset is specified, run all tests.
"""
+ for test_suite_config in execution.test_suites:
+ result.test_suite = test_suite_config.test_suite
+ full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
+ testcase_classes = TestCase.get_testcases(full_suite_path)
+ dts_logger.debug(
+ f"Found testcase classes '{testcase_classes}' in '{full_suite_path}'"
+ )
+ for testcase_class in testcase_classes:
+ testcase = testcase_class(
+ sut_node, test_suite_config.test_suite, build_target, execution
+ )
+
+ testcase.init_log()
+ testcase.set_requested_cases(SETTINGS.test_cases)
+ testcase.set_requested_cases(test_suite_config.test_cases)
+
+ dts_logger.info(f"Running test suite '{testcase_class.__name__}'")
+ try:
+ testcase.execute_setup_all()
+ testcase.execute_test_cases()
+ dts_logger.info(
+ f"Finished running test suite '{testcase_class.__name__}'"
+ )
+ result.copy_suite(testcase.get_result())
+ test_stats.save(result) # this was originally after teardown
+
+ finally:
+ testcase.execute_tear_downall()
def quit_execution(nodes: Iterable[Node], return_code: ReturnCode) -> None:
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 93d99432ae..a35eeff640 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -29,6 +29,10 @@ class ReturnCode(IntEnum):
DPDK_BUILD_ERR = 10
NODE_SETUP_ERR = 20
NODE_CLEANUP_ERR = 21
+ SUITE_SETUP_ERR = 30
+ SUITE_EXECUTION_ERR = 31
+ TESTCASE_VERIFY_ERR = 32
+ SUITE_CLEANUP_ERR = 33
class DTSError(Exception):
@@ -153,6 +157,67 @@ def __init__(self):
)
+class TestSuiteNotFound(DTSError):
+ """
+ Raised when a configured test suite cannot be imported.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_SETUP_ERR
+
+
+class SuiteSetupError(DTSError):
+ """
+ Raised when an error occurs during suite setup.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_SETUP_ERR
+
+ def __init__(self):
+ super(SuiteSetupError, self).__init__("An error occurred during suite setup.")
+
+
+class SuiteExecutionError(DTSError):
+ """
+ Raised when an error occurs during suite execution.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_EXECUTION_ERR
+
+ def __init__(self):
+ super(SuiteExecutionError, self).__init__(
+ "An error occurred during suite execution."
+ )
+
+
+class VerifyError(DTSError):
+ """
+ To be used within test cases to verify whether a command output
+ is as expected.
+ """
+
+ value: str
+ return_code: ClassVar[ReturnCode] = ReturnCode.TESTCASE_VERIFY_ERR
+
+ def __init__(self, value: str):
+ self.value = value
+
+ def __str__(self) -> str:
+ return repr(self.value)
+
+
+class SuiteCleanupError(DTSError):
+ """
+ Raised when an error occurs during suite cleanup.
+ """
+
+ return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_CLEANUP_ERR
+
+ def __init__(self):
+ super(SuiteCleanupError, self).__init__(
+ "An error occurred during suite cleanup."
+ )
+
+
def convert_exception(exception: type[DTSError]) -> Callable[..., Callable[..., None]]:
"""
When a non-DTS exception is raised while executing the decorated function,
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index e2bf3d2ce4..069f28ce81 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -64,6 +64,8 @@ class _Settings:
skip_setup: bool
dpdk_ref: Path
compile_timeout: float
+ test_cases: list
+ re_run: int
def _get_parser() -> argparse.ArgumentParser:
@@ -138,6 +140,25 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
)
+ parser.add_argument(
+ "--test-cases",
+ action=_env_arg("DTS_TESTCASES"),
+ default="",
+ required=False,
+ help="[DTS_TESTCASES] Comma-separated list of testcases to execute",
+ )
+
+ parser.add_argument(
+ "--re-run",
+ "--re_run",
+ action=_env_arg("DTS_RERUN"),
+ default=0,
+ type=int,
+ required=False,
+ help="[DTS_RERUN] Re-run tests the specified amount of times if a test failure "
+ "occurs",
+ )
+
return parser
@@ -151,6 +172,10 @@ def _get_settings() -> _Settings:
skip_setup=(parsed_args.skip_setup == "Y"),
dpdk_ref=parsed_args.dpdk_ref,
compile_timeout=parsed_args.compile_timeout,
+ test_cases=parsed_args.test_cases.split(",")
+ if parsed_args.test_cases != ""
+ else [],
+ re_run=parsed_args.re_run,
)
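
The new options can be supplied either on the command line or through the
environment, e.g. (assuming DTS is launched via its main script):

    $ DTS_TESTCASES='test_hello_world_single_core' DTS_RERUN=2 ./main.py
    # equivalent to:
    $ ./main.py --test-cases test_hello_world_single_core --re-run 2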
diff --git a/dts/framework/test_case.py b/dts/framework/test_case.py
new file mode 100644
index 0000000000..0479f795bb
--- /dev/null
+++ b/dts/framework/test_case.py
@@ -0,0 +1,246 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+"""
+A base class for creating DTS test cases.
+"""
+
+import importlib
+import inspect
+import re
+import time
+import traceback
+
+from .exception import (
+ SSHTimeoutError,
+ SuiteCleanupError,
+ SuiteExecutionError,
+ SuiteSetupError,
+ TestSuiteNotFound,
+ VerifyError,
+ convert_exception,
+)
+from .logger import getLogger
+from .settings import SETTINGS
+from .test_result import Result
+
+
+class TestCase(object):
+ def __init__(self, sut_node, suitename, target, execution):
+ self.sut_node = sut_node
+ self.suite_name = suitename
+ self.target = target
+
+ # local variable
+ self._requested_tests = []
+ self._subtitle = None
+
+ # result object for save suite result
+ self._suite_result = Result()
+ self._suite_result.sut = self.sut_node.config.hostname
+ self._suite_result.target = target
+ self._suite_result.test_suite = self.suite_name
+ if self._suite_result is None:
+ raise ValueError("Result object should not None")
+
+ self._enable_func = execution.func
+
+ # command history
+ self.setup_history = list()
+ self.test_history = list()
+
+ def init_log(self):
+ # get log handler
+ class_name = self.__class__.__name__
+ self.logger = getLogger(class_name)
+
+ def set_up_all(self):
+ pass
+
+ def set_up(self):
+ pass
+
+ def tear_down(self):
+ pass
+
+ def tear_down_all(self):
+ pass
+
+ def verify(self, passed, description):
+ if not passed:
+ raise VerifyError(description)
+
+ def _get_functional_cases(self):
+ """
+ Get all functional test cases.
+ """
+ return self._get_test_cases(r"test_(?!perf_)")
+
+ def _has_it_been_requested(self, test_case, test_name_regex):
+ """
+ Check whether test case has been requested for validation.
+ """
+ name_matches = re.match(test_name_regex, test_case.__name__)
+ if self._requested_tests:
+ return name_matches and test_case.__name__ in self._requested_tests
+
+ return name_matches
+
+ def set_requested_cases(self, case_list):
+ """
+ Pass down input cases list for check
+ """
+ self._requested_tests += case_list
+
+ def _get_test_cases(self, test_name_regex):
+ """
+ Return case list which name matched regex.
+ """
+ self.logger.debug(f"Searching for testcases in {self.__class__}")
+ for test_case_name in dir(self):
+ test_case = getattr(self, test_case_name)
+ if callable(test_case) and self._has_it_been_requested(
+ test_case, test_name_regex
+ ):
+ yield test_case
+
+ @convert_exception(SuiteSetupError)
+ def execute_setup_all(self):
+ """
+ Execute suite setup_all function before cases.
+ """
+ try:
+ self.set_up_all()
+ return True
+ except Exception as v:
+ self.logger.error("set_up_all failed:\n" + traceback.format_exc())
+ # record all cases blocked
+ if self._enable_func:
+ for case_obj in self._get_functional_cases():
+ self._suite_result.test_case = case_obj.__name__
+ self._suite_result.test_case_blocked(
+ "set_up_all failed: {}".format(str(v))
+ )
+ return False
+
+ def _execute_test_case(self, case_obj):
+ """
+ Execute specified test case in specified suite. If any exception occurred in
+ validation process, save the result and tear down this case.
+ """
+ case_name = case_obj.__name__
+ self._suite_result.test_case = case_obj.__name__
+
+ case_result = True
+ try:
+ self.logger.info("Test Case %s Begin" % case_name)
+
+ self.running_case = case_name
+ # run set_up function for each case
+ self.set_up()
+ # run test case
+ case_obj()
+
+ self._suite_result.test_case_passed()
+
+ self.logger.info("Test Case %s Result PASSED:" % case_name)
+
+ except VerifyError as v:
+ case_result = False
+ self._suite_result.test_case_failed(str(v))
+ self.logger.error("Test Case %s Result FAILED: " % (case_name) + str(v))
+ except KeyboardInterrupt:
+ self._suite_result.test_case_blocked("Skipped")
+ self.logger.error("Test Case %s SKIPPED: " % (case_name))
+ self.tear_down()
+ raise KeyboardInterrupt("Stop DTS")
+ except SSHTimeoutError as e:
+ case_result = False
+ self._suite_result.test_case_failed(str(e))
+ self.logger.error("Test Case %s Result FAILED: " % (case_name) + str(e))
+ self.logger.error("%s" % (e.get_output()))
+ except Exception:
+ case_result = False
+ trace = traceback.format_exc()
+ self._suite_result.test_case_failed(trace)
+ self.logger.error("Test Case %s Result ERROR: " % (case_name) + trace)
+ finally:
+ self.execute_tear_down()
+ return case_result
+
+ @convert_exception(SuiteExecutionError)
+ def execute_test_cases(self):
+ """
+ Execute all test cases in one suite.
+ """
+ # run each requested case, re-running failures if configured
+ if self._enable_func:
+ for case_obj in self._get_functional_cases():
+ for i in range(SETTINGS.re_run + 1):
+ ret = self.execute_test_case(case_obj)
+
+ if ret is False and SETTINGS.re_run:
+ self.sut_node.get_session_output(timeout=0.5 * (i + 1))
+ time.sleep(i + 1)
+ self.logger.info(
+ " Test case %s failed and re-run %d time"
+ % (case_obj.__name__, i + 1)
+ )
+ else:
+ break
+
+ def execute_test_case(self, case_obj):
+ """
+ Execute test case or enter into debug mode.
+ """
+ return self._execute_test_case(case_obj)
+
+ def get_result(self):
+ """
+ Return suite test result
+ """
+ return self._suite_result
+
+ @convert_exception(SuiteCleanupError)
+ def execute_tear_downall(self):
+ """
+ execute suite tear_down_all function
+ """
+ self.tear_down_all()
+
+ self.sut_node.kill_cleanup_dpdk_apps()
+
+ def execute_tear_down(self):
+ """
+ execute suite tear_down function
+ """
+ try:
+ self.tear_down()
+ except Exception:
+ self.logger.error("tear_down failed:\n" + traceback.format_exc())
+ self.logger.warning(
+ "tear down %s failed, might iterfere next case's result!"
+ % self.running_case
+ )
+
+ @staticmethod
+ def get_testcases(testsuite_module_path: str) -> list[type["TestCase"]]:
+ def is_testcase(object) -> bool:
+ try:
+ if issubclass(object, TestCase) and object != TestCase:
+ return True
+ except TypeError:
+ return False
+ return False
+
+ try:
+ testcase_module = importlib.import_module(testsuite_module_path)
+ except ModuleNotFoundError as e:
+ raise TestSuiteNotFound(
+ f"Testsuite '{testsuite_module_path}' not found."
+ ) from e
+ return [
+ testcase_class
+ for _, testcase_class in inspect.getmembers(testcase_module, is_testcase)
+ ]
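
The test_(?!perf_) pattern used by _get_functional_cases above selects
functional cases by method name, for example:

    import re

    bool(re.match(r"test_(?!perf_)", "test_hello_world_single_core"))  # True
    bool(re.match(r"test_(?!perf_)", "test_perf_throughput"))          # False
    bool(re.match(r"test_(?!perf_)", "set_up_all"))                    # False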
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 09/10] dts: add hello world testplan
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (7 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 08/10] dts: add testsuite class Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 10/10] dts: add hello world testsuite Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The testplan describes the capabilities of the tested application along
with the description of testcases to test it.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/test_plans/hello_world_test_plan.rst | 68 ++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 dts/test_plans/hello_world_test_plan.rst
diff --git a/dts/test_plans/hello_world_test_plan.rst b/dts/test_plans/hello_world_test_plan.rst
new file mode 100644
index 0000000000..566a9bb10c
--- /dev/null
+++ b/dts/test_plans/hello_world_test_plan.rst
@@ -0,0 +1,68 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+=============================================
+Sample Application Tests: Hello World Example
+=============================================
+
+This example is one of the simplest RTE applications that can be
+written. The program just prints a "helloworld" message on every
+enabled lcore.
+
+Command Usage::
+
+ ./dpdk-helloworld -c COREMASK [-m MB] [-r NUM] [-n NUM]
+
+ EAL option list:
+ -c COREMASK: hexadecimal bitmask of cores we are running on
+ -m MB : memory to allocate (default = size of hugemem)
+ -n NUM : force number of memory channels (don't detect)
+ -r NUM : force number of memory ranks (don't detect)
+ --huge-file: base filename for hugetlbfs entries
+ debug options:
+ --no-huge : use malloc instead of hugetlbfs
+ --no-pci : disable pci
+ --no-hpet : disable hpet
+ --no-shconf: no shared config (mmap'd files)
+
+
+Prerequisites
+=============
+
+The igb_uio and vfio drivers are supported. When using vfio, the kernel needs to be
+3.6+ and VT-d must be enabled in the BIOS. Load the vfio driver with "modprobe vfio" and
+"modprobe vfio-pci", then use "./tools/dpdk_nic_bind.py --bind=vfio-pci device_bus_id" to bind the test device to the vfio driver.
+
+To find out the mapping of lcores (processor) to core id and socket (physical
+id), the command below can be used::
+
+ $ grep "processor\|physical id\|core id\|^$" /proc/cpuinfo
+
+The total number of logical cores will be used as the ``helloworld`` input parameter.
+
+
+Test Case: run hello world on single lcores
+===========================================
+
+To run the example on a single lcore::
+
+ $ ./dpdk-helloworld -c 1
+ hello from core 0
+
+Check that the output comes from exactly lcore 0.
+
+
+Test Case: run hello world on every lcores
+==========================================
+
+To run the example on all enabled lcores::
+
+ $ ./dpdk-helloworld -cffffff
+ hello from core 1
+ hello from core 2
+ hello from core 3
+ ...
+ ...
+ hello from core 0
+
+Verify the output according to the core mask used.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [RFC PATCH v2 10/10] dts: add hello world testsuite
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (8 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 09/10] dts: add hello world testplan Juraj Linkeš
@ 2022-11-14 16:54 ` Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-14 16:54 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The testsuite implements the testcases defined in the corresponding test
plan.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/os/os_session.py | 16 +++++-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/node/sut_node.py | 11 ++++
dts/tests/TestSuite_hello_world.py | 53 +++++++++++++++++++
4 files changed, 80 insertions(+), 1 deletion(-)
create mode 100644 dts/tests/TestSuite_hello_world.py
diff --git a/dts/framework/remote_session/os/os_session.py b/dts/framework/remote_session/os/os_session.py
index f84f3ce63c..1548e3c6c8 100644
--- a/dts/framework/remote_session/os/os_session.py
+++ b/dts/framework/remote_session/os/os_session.py
@@ -9,7 +9,7 @@
from framework.config import CPU, Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.remote_session.factory import create_remote_session
-from framework.remote_session.remote_session import RemoteSession
+from framework.remote_session.remote_session import CommandResult, RemoteSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict
@@ -49,6 +49,20 @@ def is_alive(self) -> bool:
"""
return self.remote_session.is_alive()
+ def send_command(
+ self,
+ command: str,
+ timeout: float,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
+ ) -> CommandResult:
+ """
+ An all-purpose API in case the command to be executed is already
+ OS-agnostic, such as when the path to the executed command has been
+ constructed beforehand.
+ """
+ return self.remote_session.send_command(command, timeout, verify, env)
+
@abstractmethod
def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
"""
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 13c29c59c8..0a4862d7d6 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -8,4 +8,5 @@
# pylama:ignore=W0611
+from .hw import CPUAmount, CPUAmountFilter, CPUListFilter, CPUList
from .node import Node, SutNode
diff --git a/dts/framework/testbed_model/node/sut_node.py b/dts/framework/testbed_model/node/sut_node.py
index ff3be845b4..d56f7467ba 100644
--- a/dts/framework/testbed_model/node/sut_node.py
+++ b/dts/framework/testbed_model/node/sut_node.py
@@ -9,6 +9,7 @@
from framework.config import CPU, BuildTargetConfiguration, CPUList, NodeConfiguration
from framework.remote_session import OSSession
+from framework.remote_session.remote_session import CommandResult
from framework.settings import SETTINGS
from framework.testbed_model.hw import CPUAmount, CPUListFilter
from framework.utils import EnvVarsDict, skip_setup
@@ -224,6 +225,16 @@ def create_eal_parameters(
return eal_str
+ def run_dpdk_app(
+ self, app_path: PurePath, eal_args: str, timeout: float = 30
+ ) -> CommandResult:
+ """
+ Run DPDK application on the remote node.
+ """
+ return self.main_session.send_command(
+ f"{app_path} {eal_args}", timeout, verify=True
+ )
+
class _EalParameter(object):
def __init__(
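
A sketch of the intended flow inside a test case (the variable names are
hypothetical; app_path is what build_dpdk_app("helloworld") returned):

    eal_args = self.sut_node.create_eal_parameters(
        core_filter_specifier=CPUAmount(1, 1, 1)
    )
    result = self.sut_node.run_dpdk_app(app_path, eal_args)
    self.verify("hello from core" in result.stdout, "helloworld produced no output")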
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..d3661bb243
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,53 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+
+"""
+DPDK Test suite.
+Test HelloWorld example.
+"""
+
+from framework.test_case import TestCase
+from framework.testbed_model import CPUAmount, CPUAmountFilter, CPUList
+
+
+class TestHelloWorld(TestCase):
+    def set_up_all(self):
+        """
+        Run at the start of the test suite.
+        hello_world Prerequisites:
+            helloworld build pass
+        """
+        self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
+
+    def test_hello_world_single_core(self):
+        """
+        Run hello world on a single lcore.
+        Verify that a hello message is received from that core.
+        """
+
+        # filter out a single core to run on
+        cpu_amount = CPUAmount(1, 1, 1)
+        cores = CPUAmountFilter(self.sut_node.cpus, cpu_amount).filter()
+        eal_para = self.sut_node.create_eal_parameters(core_filter_specifier=cpu_amount)
+        result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
+        self.verify(
+            f"hello from core {cores[0]}" in result.stdout,
+            f"EAL not started on core {cores[0]}",
+        )
+
+    def test_hello_world_all_cores(self):
+        """
+        Run hello world on all lcores.
+        Verify that a hello message is received from every lcore.
+        """
+
+        # use all the available logical cores
+        eal_para = self.sut_node.create_eal_parameters(
+            core_filter_specifier=CPUList(self.sut_node.cpus)
+        )
+        result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para, 50)
+        for core in self.sut_node.cpus:
+            self.verify(
+                f"hello from core {core}" in result.stdout,
+                f"EAL not started on core {core}",
+            )
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [RFC PATCH v2 03/10] dts: add dpdk build on sut
2022-11-14 16:54 ` [RFC PATCH v2 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2022-11-16 13:15 ` Owen Hilyard
[not found] ` <30ad4f7d087d4932845b6ca13934b1d2@pantheon.tech>
0 siblings, 1 reply; 97+ messages in thread
From: Owen Hilyard @ 2022-11-16 13:15 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
On Mon, Nov 14, 2022 at 11:54 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Add the ability to build DPDK and apps, using a configured target.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/framework/exception.py | 17 +++
> dts/framework/remote_session/os/os_session.py | 90 +++++++++++-
> .../remote_session/os/posix_session.py | 128 +++++++++++++++++
> .../remote_session/remote_session.py | 34 ++++-
> dts/framework/remote_session/ssh_session.py | 64 ++++++++-
> dts/framework/settings.py | 40 +++++-
> dts/framework/testbed_model/node/sut_node.py | 131 ++++++++++++++++++
> dts/framework/utils.py | 15 ++
> 8 files changed, 505 insertions(+), 14 deletions(-)
>
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index b282e48198..93d99432ae 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -26,6 +26,7 @@ class ReturnCode(IntEnum):
> GENERIC_ERR = 1
> SSH_ERR = 2
> REMOTE_CMD_EXEC_ERR = 3
> + DPDK_BUILD_ERR = 10
> NODE_SETUP_ERR = 20
> NODE_CLEANUP_ERR = 21
>
> @@ -110,6 +111,22 @@ def __str__(self) -> str:
> )
>
>
> +class RemoteDirectoryExistsError(DTSError):
> + """
> + Raised when a remote directory to be created already exists.
> + """
> +
> + return_code: ClassVar[ReturnCode] = ReturnCode.REMOTE_CMD_EXEC_ERR
> +
> +
> +class DPDKBuildError(DTSError):
> + """
> + Raised when DPDK build fails for any reason.
> + """
> +
> + return_code: ClassVar[ReturnCode] = ReturnCode.DPDK_BUILD_ERR
> +
> +
> class NodeSetupError(DTSError):
> """
> Raised when setting up a node.
> diff --git a/dts/framework/remote_session/os/os_session.py
> b/dts/framework/remote_session/os/os_session.py
> index 2a72082628..57e2865282 100644
> --- a/dts/framework/remote_session/os/os_session.py
> +++ b/dts/framework/remote_session/os/os_session.py
> @@ -2,12 +2,15 @@
> # Copyright(c) 2022 PANTHEON.tech s.r.o.
> # Copyright(c) 2022 University of New Hampshire
>
> -from abc import ABC
> +from abc import ABC, abstractmethod
> +from pathlib import PurePath
>
> -from framework.config import NodeConfiguration
> +from framework.config import Architecture, NodeConfiguration
> from framework.logger import DTSLOG
> from framework.remote_session.factory import create_remote_session
> from framework.remote_session.remote_session import RemoteSession
> +from framework.settings import SETTINGS
> +from framework.utils import EnvVarsDict
>
>
> class OSSession(ABC):
> @@ -44,3 +47,86 @@ def is_alive(self) -> bool:
> Check whether the remote session is still responding.
> """
> return self.remote_session.is_alive()
> +
> + @abstractmethod
> + def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
> + """
> + Try to find DPDK remote dir in remote_dir.
> + """
> +
> + @abstractmethod
> + def get_remote_tmp_dir(self) -> PurePath:
> + """
> + Get the path of the temporary directory of the remote OS.
> + """
> +
> + @abstractmethod
> + def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> + """
> + Create extra environment variables needed for the target
> architecture. Get
> + information from the node if needed.
> + """
> +
> + @abstractmethod
> + def join_remote_path(self, *args: str | PurePath) -> PurePath:
> + """
> + Join path parts using the path separator that fits the remote OS.
> + """
> +
> + @abstractmethod
> + def copy_file(
> + self,
> + source_file: str | PurePath,
> + destination_file: str | PurePath,
> + source_remote: bool = False,
> + ) -> None:
> + """
> + Copy source_file from local storage to destination_file on the
> remote Node
> + associated with the remote session.
> + If source_remote is True, reverse the direction - copy
> source_file from the
> + associated remote Node to destination_file on local storage.
> + """
> +
> + @abstractmethod
> + def remove_remote_dir(
> + self,
> + remote_dir_path: str | PurePath,
> + recursive: bool = True,
> + force: bool = True,
> + ) -> None:
> + """
> + Remove remote directory, by default remove recursively and
> forcefully.
> + """
> +
> + @abstractmethod
> + def extract_remote_tarball(
> + self,
> + remote_tarball_path: str | PurePath,
> + expected_dir: str | PurePath | None = None,
> + ) -> None:
> + """
> + Extract remote tarball in place. If expected_dir is a non-empty
> string, check
> + whether the dir exists after extracting the archive.
> + """
> +
> + @abstractmethod
> + def build_dpdk(
> + self,
> + env_vars: EnvVarsDict,
> + meson_args: str,
> + remote_dpdk_dir: str | PurePath,
> + target_name: str,
> + rebuild: bool = False,
> + timeout: float = SETTINGS.compile_timeout,
> + ) -> PurePath:
>
I think that we should consider having a MesonArgs type which implements
the builder pattern. That way common things like static vs dynamic linking,
enabling LTO, setting the optimization level, etc. can be handled via
dedicated methods, and then we can add a method on that which is "add this
string onto the end". This would also allow defining additional methods for
DPDK-specific meson arguments, like only enabling
certain drivers/applications/tests or forcing certain vector widths. I
would also like to see an option to make use of ccache, because currently
the only way I see to do that is via environment variables, which will make
creating a test matrix that includes multiple compilers difficult.
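For illustration, a rough sketch of such a builder (untested; all method
names here are hypothetical, not an existing DTS API) could look like:

    class MesonArgs:
        """Builder for a meson argument string. A sketch, not merged code."""

        def __init__(self) -> None:
            self._args: list[str] = []

        def default_library(self, kind: str) -> "MesonArgs":
            # kind is "static" or "shared"
            self._args.append(f"--default-library={kind}")
            return self

        def lto(self, enabled: bool = True) -> "MesonArgs":
            self._args.append(f"-Db_lto={str(enabled).lower()}")
            return self

        def optimization(self, level: str) -> "MesonArgs":
            # e.g. "0", "2", "3", "s"
            self._args.append(f"-Doptimization={level}")
            return self

        def examples(self, *apps: str) -> "MesonArgs":
            # DPDK-specific: build only the listed example apps
            self._args.append(f"-Dexamples={','.join(apps)}")
            return self

        def append(self, raw: str) -> "MesonArgs":
            # escape hatch: add this string onto the end
            self._args.append(raw)
            return self

        def __str__(self) -> str:
            return " ".join(self._args)

With that, str(MesonArgs().default_library("static").examples("helloworld"))
would yield "--default-library=static -Dexamples=helloworld".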
> + """
> + Build DPDK in the input dir with specified environment variables
> and meson
> + arguments.
> + Return the directory path where DPDK was built.
> + """
> +
> + @abstractmethod
> + def get_dpdk_version(self, version_path: str | PurePath) -> str:
> + """
> + Inspect DPDK version on the remote node from version_path.
> + """
> diff --git a/dts/framework/remote_session/os/posix_session.py
> b/dts/framework/remote_session/os/posix_session.py
> index 9622a4ea30..a36b8e8c1a 100644
> --- a/dts/framework/remote_session/os/posix_session.py
> +++ b/dts/framework/remote_session/os/posix_session.py
> @@ -2,6 +2,13 @@
> # Copyright(c) 2022 PANTHEON.tech s.r.o.
> # Copyright(c) 2022 University of New Hampshire
>
> +from pathlib import PurePath, PurePosixPath
> +
> +from framework.config import Architecture
> +from framework.exception import DPDKBuildError,
> RemoteCommandExecutionError
> +from framework.settings import SETTINGS
> +from framework.utils import EnvVarsDict
> +
> from .os_session import OSSession
>
>
> @@ -10,3 +17,124 @@ class PosixSession(OSSession):
> An intermediary class implementing the Posix compliant parts of
> Linux and other OS remote sessions.
> """
> +
> + @staticmethod
> + def combine_short_options(**opts: bool) -> str:
> + ret_opts = ""
> + for opt, include in opts.items():
> + if include:
> + ret_opts = f"{ret_opts}{opt}"
> +
> + if ret_opts:
> + ret_opts = f" -{ret_opts}"
> +
> + return ret_opts
> +
> + def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
> + remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
> + result = self.remote_session.send_command(f"ls -d {remote_guess}
> | tail -1")
> + return PurePosixPath(result.stdout)
> +
> + def get_remote_tmp_dir(self) -> PurePosixPath:
> + return PurePosixPath("/tmp")
> +
> + def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> + """
> + Create extra environment variables needed for i686 arch build.
> Get information
> + from the node if needed.
> + """
> + env_vars = {}
> + if arch == Architecture.i686:
> + # find the pkg-config path and store it in PKG_CONFIG_LIBDIR
> + out = self.remote_session.send_command("find /usr -type d
> -name pkgconfig")
> + pkg_path = ""
> + res_path = out.stdout.split("\r\n")
> + for cur_path in res_path:
> + if "i386" in cur_path:
> + pkg_path = cur_path
> + break
> + assert pkg_path != "", "i386 pkg-config path not found"
> +
> + env_vars["CFLAGS"] = "-m32"
> + env_vars["PKG_CONFIG_LIBDIR"] = pkg_path
> +
> + return env_vars
> +
> + def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
> + return PurePosixPath(*args)
> +
> + def copy_file(
> + self,
> + source_file: str | PurePath,
> + destination_file: str | PurePath,
> + source_remote: bool = False,
> + ) -> None:
> + self.remote_session.copy_file(source_file, destination_file,
> source_remote)
> +
> + def remove_remote_dir(
> + self,
> + remote_dir_path: str | PurePath,
> + recursive: bool = True,
> + force: bool = True,
> + ) -> None:
> + opts = PosixSession.combine_short_options(r=recursive, f=force)
> + self.remote_session.send_command(f"rm{opts} {remote_dir_path}")
> +
> + def extract_remote_tarball(
> + self,
> + remote_tarball_path: str | PurePath,
> + expected_dir: str | PurePath | None = None,
> + ) -> None:
> + self.remote_session.send_command(
> + f"tar xfm {remote_tarball_path} "
> + f"-C {PurePosixPath(remote_tarball_path).parent}",
> + 60,
> + )
> + if expected_dir:
> + self.remote_session.send_command(f"ls {expected_dir}",
> verify=True)
> +
> + def build_dpdk(
> + self,
> + env_vars: EnvVarsDict,
> + meson_args: str,
> + remote_dpdk_dir: str | PurePath,
> + target_name: str,
> + rebuild: bool = False,
> + timeout: float = SETTINGS.compile_timeout,
> + ) -> PurePosixPath:
> + build_dir = self.join_remote_path(remote_dpdk_dir, target_name)
> + try:
> + if rebuild:
> + # reconfigure, then build
> + self.logger.info("Reconfiguring DPDK build.")
> + self.remote_session.send_command(
> + f"meson configure {meson_args} {build_dir}",
> + timeout,
> + verify=True,
> + env=env_vars,
> + )
> + else:
> + # fresh build - remove target dir first, then build from
> scratch
> + self.logger.info("Configuring DPDK build from scratch.")
> + self.remove_remote_dir(build_dir)
> + self.remote_session.send_command(
> + f"meson {meson_args} {remote_dpdk_dir} {build_dir}",
> + timeout,
> + verify=True,
> + env=env_vars,
> + )
> +
> + self.logger.info("Building DPDK.")
> + self.remote_session.send_command(
> + f"ninja -C {build_dir}", timeout, verify=True,
> env=env_vars
> + )
> + except RemoteCommandExecutionError as e:
> + raise DPDKBuildError(f"DPDK build failed when doing
> '{e.command}'.")
> +
> + return build_dir
> +
> + def get_dpdk_version(self, build_dir: str | PurePath) -> str:
> + out = self.remote_session.send_command(
> + f"cat {self.join_remote_path(build_dir, 'VERSION')}",
> verify=True
> + )
> + return out.stdout
> diff --git a/dts/framework/remote_session/remote_session.py
> b/dts/framework/remote_session/remote_session.py
> index fccd80a529..f10b1023f8 100644
> --- a/dts/framework/remote_session/remote_session.py
> +++ b/dts/framework/remote_session/remote_session.py
> @@ -10,6 +10,7 @@
> from framework.exception import RemoteCommandExecutionError
> from framework.logger import DTSLOG
> from framework.settings import SETTINGS
> +from framework.utils import EnvVarsDict
>
>
> @dataclasses.dataclass(slots=True, frozen=True)
> @@ -83,15 +84,22 @@ def _connect(self) -> None:
> """
>
> def send_command(
> - self, command: str, timeout: float = SETTINGS.timeout, verify:
> bool = False
> + self,
> + command: str,
> + timeout: float = SETTINGS.timeout,
> + verify: bool = False,
> + env: EnvVarsDict | None = None,
) -> CommandResult:
> """
> - Send a command to the connected node and return CommandResult.
> + Send a command to the connected node using optional env vars
> + and return CommandResult.
> If verify is True, check the return code of the executed command
> and raise a RemoteCommandExecutionError if the command failed.
> """
> - self.logger.info(f"Sending: '{command}'")
> - result = self._send_command(command, timeout)
> + self.logger.info(
> + f"Sending: '{command}'" + (f" with env vars: '{env}'" if env
> else "")
> + )
> + result = self._send_command(command, timeout, env)
> if verify and result.return_code:
> self.logger.debug(
> f"Command '{command}' failed with return code
> '{result.return_code}'"
> @@ -104,9 +112,12 @@ def send_command(
> return result
>
> @abstractmethod
> - def _send_command(self, command: str, timeout: float) ->
> CommandResult:
> + def _send_command(
> + self, command: str, timeout: float, env: EnvVarsDict | None
> + ) -> CommandResult:
> """
> - Use the underlying protocol to execute the command and return
> CommandResult.
> + Use the underlying protocol to execute the command using optional
> env vars
> + and return CommandResult.
> """
>
> def close(self, force: bool = False) -> None:
> @@ -127,3 +138,14 @@ def is_alive(self) -> bool:
> """
> Check whether the remote session is still responding.
> """
> +
> + @abstractmethod
> + def copy_file(
> + self, source_file: str, destination_file: str, source_remote:
> bool = False
> + ) -> None:
> + """
> + Copy source_file from local storage to destination_file on the
> remote Node
>
This should clarify that local storage means inside of the DTS container,
not the system it is running on.
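Something along these lines, perhaps (a docstring sketch, not final wording):

    def copy_file(
        self, source_file: str, destination_file: str, source_remote: bool = False
    ) -> None:
        """
        Copy source_file from local storage to destination_file on the
        remote Node associated with the remote session. Local storage here
        means the filesystem of the environment DTS runs in (e.g. the DTS
        container), not the machine the tests target.
        If source_remote is True, reverse the direction.
        """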
> + associated with the remote session.
> + If source_remote is True, reverse the direction - copy
> source_file from the
> + associated Node to destination_file on local storage.
> + """
> diff --git a/dts/framework/remote_session/ssh_session.py
> b/dts/framework/remote_session/ssh_session.py
> index fb2f01dbc1..d4a6714e6b 100644
> --- a/dts/framework/remote_session/ssh_session.py
> +++ b/dts/framework/remote_session/ssh_session.py
> @@ -5,12 +5,13 @@
>
> import time
>
> +import pexpect # type: ignore
> from pexpect import pxssh # type: ignore
>
> from framework.config import NodeConfiguration
> from framework.exception import SSHConnectionError, SSHSessionDeadError,
> SSHTimeoutError
> from framework.logger import DTSLOG
> -from framework.utils import GREEN, RED
> +from framework.utils import GREEN, RED, EnvVarsDict
>
> from .remote_session import CommandResult, RemoteSession
>
> @@ -163,16 +164,22 @@ def _flush(self) -> None:
> def is_alive(self) -> bool:
> return self.session.isalive()
>
> - def _send_command(self, command: str, timeout: float) ->
> CommandResult:
> - output = self._send_command_get_output(command, timeout)
> - return_code = int(self._send_command_get_output("echo $?",
> timeout))
> + def _send_command(
> + self, command: str, timeout: float, env: EnvVarsDict | None
> + ) -> CommandResult:
> + output = self._send_command_get_output(command, timeout, env)
> + return_code = int(self._send_command_get_output("echo $?",
> timeout, None))
>
> # we're capturing only stdout
> return CommandResult(self.name, command, output, "", return_code)
>
> - def _send_command_get_output(self, command: str, timeout: float) ->
> str:
> + def _send_command_get_output(
> + self, command: str, timeout: float, env: EnvVarsDict | None
> + ) -> str:
> try:
> self._clean_session()
> + if env:
> + command = f"{env} {command}"
> self._send_line(command)
> except Exception as e:
> raise e
> @@ -189,3 +196,50 @@ def _close(self, force: bool = False) -> None:
> else:
> if self.is_alive():
> self.session.logout()
> +
> + def copy_file(
> + self, source_file: str, destination_file: str, source_remote:
> bool = False
> + ) -> None:
> + """
> + Send a local file to a remote host.
> + """
> + if source_remote:
> + source_file = f"{self.username}@{self.ip}:{source_file}"
> + else:
> + destination_file = f"{self.username}@
> {self.ip}:{destination_file}"
> +
> + port = ""
> + if self.port:
> + port = f" -P {self.port}"
> +
> + # this is not OS agnostic, find a Pythonic (and thus OS agnostic)
> way
> + # TODO Fabric should handle this
> + command = (
> + f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes"
> + f" {source_file} {destination_file}"
> + )
> +
> + self._spawn_scp(command)
> +
> + def _spawn_scp(self, scp_cmd: str) -> None:
> + """
> + Transfer a file with SCP
> + """
> + self.logger.info(scp_cmd)
> + p: pexpect.spawn = pexpect.spawn(scp_cmd)
> + time.sleep(0.5)
> + ssh_newkey: str = "Are you sure you want to continue connecting"
> + i: int = p.expect(
> + [ssh_newkey, "[pP]assword", "# ", pexpect.EOF,
> pexpect.TIMEOUT], 120
> + )
> + if i == 0: # add once in trust list
> + p.sendline("yes")
> + i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
> +
> + if i == 1:
> + time.sleep(0.5)
> + p.sendline(self.password)
> + p.expect("Exit status 0", 60)
> + if i == 4:
> + self.logger.error("SCP TIMEOUT error %d" % i)
> + p.close()
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index 800f2c7b7f..e2bf3d2ce4 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -7,6 +7,7 @@
> import os
> from collections.abc import Callable, Iterable, Sequence
> from dataclasses import dataclass
> +from pathlib import Path
> from typing import Any, TypeVar
>
> _T = TypeVar("_T")
> @@ -60,6 +61,9 @@ class _Settings:
> output_dir: str
> timeout: float
> verbose: bool
> + skip_setup: bool
> + dpdk_ref: Path
> + compile_timeout: float
>
>
> def _get_parser() -> argparse.ArgumentParser:
> @@ -88,6 +92,7 @@ def _get_parser() -> argparse.ArgumentParser:
> "--timeout",
> action=_env_arg("DTS_TIMEOUT"),
> default=15,
> + type=float,
> required=False,
> help="[DTS_TIMEOUT] The default timeout for all DTS operations
> except for "
> "compiling DPDK.",
> @@ -103,6 +108,36 @@ def _get_parser() -> argparse.ArgumentParser:
> "to the console.",
> )
>
> + parser.add_argument(
> + "-s",
> + "--skip-setup",
> + action=_env_arg("DTS_SKIP_SETUP"),
> + required=False,
> + help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT
> and TG nodes.",
> + )
> +
> + parser.add_argument(
> + "--dpdk-ref",
> + "--git",
> + "--snapshot",
> + action=_env_arg("DTS_DPDK_REF"),
> + default="dpdk.tar.xz",
> + type=Path,
> + required=False,
> + help="[DTS_DPDK_REF] Reference to DPDK source code, "
> + "can be either a path to a tarball or a git refspec. "
> + "In case of a tarball, it will be extracted in the same
> directory.",
> + )
> +
> + parser.add_argument(
> + "--compile-timeout",
> + action=_env_arg("DTS_COMPILE_TIMEOUT"),
> + default=1200,
> + type=float,
> + required=False,
> + help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
> + )
> +
> return parser
>
>
> @@ -111,8 +146,11 @@ def _get_settings() -> _Settings:
> return _Settings(
> config_file_path=parsed_args.config_file,
> output_dir=parsed_args.output_dir,
> - timeout=float(parsed_args.timeout),
> + timeout=parsed_args.timeout,
> verbose=(parsed_args.verbose == "Y"),
> + skip_setup=(parsed_args.skip_setup == "Y"),
> + dpdk_ref=parsed_args.dpdk_ref,
> + compile_timeout=parsed_args.compile_timeout,
> )
>
>
> diff --git a/dts/framework/testbed_model/node/sut_node.py
> b/dts/framework/testbed_model/node/sut_node.py
> index 79d54585c9..53268a7565 100644
> --- a/dts/framework/testbed_model/node/sut_node.py
> +++ b/dts/framework/testbed_model/node/sut_node.py
> @@ -2,6 +2,14 @@
> # Copyright(c) 2010-2014 Intel Corporation
> # Copyright(c) 2022 PANTHEON.tech s.r.o.
>
> +import os
> +import tarfile
> +from pathlib import PurePath
> +
> +from framework.config import BuildTargetConfiguration, NodeConfiguration
> +from framework.settings import SETTINGS
> +from framework.utils import EnvVarsDict, skip_setup
> +
> from .node import Node
>
>
> @@ -10,4 +18,127 @@ class SutNode(Node):
> A class for managing connections to the System under Test, providing
> methods that retrieve the necessary information about the node (such
> as
> cpu, memory and NIC details) and configuration capabilities.
> + Another key capability is building DPDK according to given build
> target.
> """
> +
> + _build_target_config: BuildTargetConfiguration | None
> + _env_vars: EnvVarsDict
> + _remote_tmp_dir: PurePath
> + __remote_dpdk_dir: PurePath | None
> + _app_compile_timeout: float
> +
> + def __init__(self, node_config: NodeConfiguration):
> + super(SutNode, self).__init__(node_config)
> + self._build_target_config = None
> + self._env_vars = EnvVarsDict()
> + self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
> + self.__remote_dpdk_dir = None
> + self._app_compile_timeout = 90
> +
> + @property
> + def _remote_dpdk_dir(self) -> PurePath:
> + if self.__remote_dpdk_dir is None:
> + self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
> + return self.__remote_dpdk_dir
> +
> + @_remote_dpdk_dir.setter
> + def _remote_dpdk_dir(self, value: PurePath) -> None:
> + self.__remote_dpdk_dir = value
> +
> + def _guess_dpdk_remote_dir(self) -> PurePath:
> + return
> self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
> +
> + def _setup_build_target(
> + self, build_target_config: BuildTargetConfiguration
> + ) -> None:
> + """
> + Setup DPDK on the SUT node.
> + """
> + self._configure_build_target(build_target_config)
> + self._copy_dpdk_tarball()
> + self._build_dpdk()
> +
> + def _configure_build_target(
> + self, build_target_config: BuildTargetConfiguration
> + ) -> None:
> + """
> + Populate common environment variables and set build target config.
> + """
> + self._build_target_config = build_target_config
> + self._env_vars.update(
> +
> self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
> + )
> + self._env_vars["CC"] = build_target_config.compiler.name
> +
> + @skip_setup
> + def _copy_dpdk_tarball(self) -> None:
> + """
> + Copy to and extract DPDK tarball on the SUT node.
> + """
> + # check local path
> + assert SETTINGS.dpdk_ref.exists(), f"Package {SETTINGS.dpdk_ref}
> doesn't exist."
> +
> + self.logger.info("Copying DPDK tarball to SUT.")
> + self.main_session.copy_file(SETTINGS.dpdk_ref,
> self._remote_tmp_dir)
> +
> + # construct remote tarball path
> + # the basename is the same on local host and on remote Node
> + remote_tarball_path = self.main_session.join_remote_path(
> + self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_ref)
> + )
> +
> + # construct remote path after extracting
> + with tarfile.open(SETTINGS.dpdk_ref) as dpdk_tar:
> + dpdk_top_dir = dpdk_tar.getnames()[0]
> + self._remote_dpdk_dir = self.main_session.join_remote_path(
> + self._remote_tmp_dir, dpdk_top_dir
> + )
> +
> + self.logger.info("Extracting DPDK tarball on SUT.")
>
Can we add a path to this log message?
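Something like this would do (a sketch; both names already exist in this
patch):

    self.logger.info(
        f"Extracting DPDK tarball on SUT: "
        f"'{remote_tarball_path}' into '{self._remote_dpdk_dir}'."
    )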
> + # clean remote path where we're extracting
> + self.main_session.remove_remote_dir(self._remote_dpdk_dir)
> +
> + # then extract to remote path
> + self.main_session.extract_remote_tarball(
> + remote_tarball_path, self._remote_dpdk_dir
> + )
> +
> + @skip_setup
> + def _build_dpdk(self) -> None:
> + """
> + Build DPDK. Uses the already configured target. Assumes that the
> tarball has
> + already been copied to and extracted on the SUT node.
> + """
> + meson_args = "-Denable_kmods=True -Dlibdir=lib
> --default-library=static"
> + self.main_session.build_dpdk(
> + self._env_vars,
> + meson_args,
> + self._remote_dpdk_dir,
> + self._build_target_config.name if self._build_target_config
> else "build",
> + )
> + self.logger.info(
> + f"DPDK version:
> {self.main_session.get_dpdk_version(self._remote_dpdk_dir)}"
> + )
> +
> + def build_dpdk_app(self, app_name: str) -> PurePath:
> + """
> + Build one or all DPDK apps. Requires DPDK to be already built on
> the SUT node.
> + When app_name is 'all', build all example apps.
> + When app_name is any other string, tries to build that example
> app.
> + Return the directory path of the built app. If building all apps,
> return
> + the path to the examples directory (where all apps reside).
> + """
> + meson_args = f"-Dexamples={app_name}"
> + build_dir = self.main_session.build_dpdk(
> + self._env_vars,
> + meson_args,
> + self._remote_dpdk_dir,
> + self._build_target_config.name if self._build_target_config
> else "build",
> + rebuild=True,
> + timeout=self._app_compile_timeout,
> + )
> + if app_name == "all":
> + return self.main_session.join_remote_path(build_dir,
> "examples")
> + return self.main_session.join_remote_path(
> + build_dir, "examples", f"dpdk-{app_name}"
> + )
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index c28c8f1082..91e58f3218 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -4,6 +4,9 @@
> # Copyright(c) 2022 University of New Hampshire
>
> import sys
> +from typing import Callable
> +
> +from framework.settings import SETTINGS
>
>
> def check_dts_python_version() -> None:
> @@ -22,9 +25,21 @@ def check_dts_python_version() -> None:
> print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
>
>
> +def skip_setup(func) -> Callable[..., None]:
> + if SETTINGS.skip_setup:
> + return lambda *args: None
> + else:
> + return func
> +
> +
> def GREEN(text: str) -> str:
> return f"\u001B[32;1m{str(text)}\u001B[0m"
>
>
> def RED(text: str) -> str:
> return f"\u001B[31;1m{str(text)}\u001B[0m"
> +
> +
> +class EnvVarsDict(dict):
> + def __str__(self) -> str:
> + return " ".join(["=".join(item) for item in self.items()])
>
This needs to make sure it doesn't silently run over the line length
limitations in posix sh/bash (4096 chars) or cmd (8191 chars). That would
be a VERY frustrating bug to track down and it can easily be stopped by
checking that this is a reasonable length (< 2k characters) and emitting a
warning if something goes over that.
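A sketch of such a guard (the 2000 character threshold and the use of the
standard warnings module are my assumptions, not part of this patch):

    import warnings

    class EnvVarsDict(dict):
        MAX_SAFE_LEN = 2000  # stay well under sh/bash and cmd line limits

        def __str__(self) -> str:
            env_str = " ".join("=".join(item) for item in self.items())
            if len(env_str) > self.MAX_SAFE_LEN:
                warnings.warn(
                    f"Env var string is {len(env_str)} characters long "
                    "and may exceed shell command line length limits."
                )
            return env_str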
> --
> 2.30.2
>
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [RFC PATCH v2 04/10] dts: add dpdk execution handling
2022-11-14 16:54 ` [RFC PATCH v2 04/10] dts: add dpdk execution handling Juraj Linkeš
@ 2022-11-16 13:28 ` Owen Hilyard
[not found] ` <df13ee41efb64e7bb37791f21ae5bac1@pantheon.tech>
0 siblings, 1 reply; 97+ messages in thread
From: Owen Hilyard @ 2022-11-16 13:28 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
On Mon, Nov 14, 2022 at 11:54 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Add methods for setting up and shutting down DPDK apps and for
> constructing EAL parameters.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/conf.yaml | 4 +
> dts/framework/config/__init__.py | 85 ++++++++-
> dts/framework/config/conf_yaml_schema.json | 22 +++
> .../remote_session/os/linux_session.py | 15 ++
> dts/framework/remote_session/os/os_session.py | 16 +-
> .../remote_session/os/posix_session.py | 80 ++++++++
> dts/framework/testbed_model/hw/__init__.py | 17 ++
> dts/framework/testbed_model/hw/cpu.py | 164 ++++++++++++++++
> dts/framework/testbed_model/node/node.py | 36 ++++
> dts/framework/testbed_model/node/sut_node.py | 178 +++++++++++++++++-
> dts/framework/utils.py | 20 ++
> 11 files changed, 634 insertions(+), 3 deletions(-)
> create mode 100644 dts/framework/testbed_model/hw/__init__.py
> create mode 100644 dts/framework/testbed_model/hw/cpu.py
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 6b0bc5c2bf..976888a88e 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -12,4 +12,8 @@ nodes:
> - name: "SUT 1"
> hostname: sut1.change.me.localhost
> user: root
> + arch: x86_64
> os: linux
> + bypass_core0: true
> + cpus: ""
> + memory_channels: 4
> diff --git a/dts/framework/config/__init__.py
> b/dts/framework/config/__init__.py
> index 1b97dc3ab9..344d697a69 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -11,12 +11,13 @@
> import pathlib
> from dataclasses import dataclass
> from enum import Enum, auto, unique
> -from typing import Any
> +from typing import Any, Iterable
>
> import warlock # type: ignore
> import yaml
>
> from framework.settings import SETTINGS
> +from framework.utils import expand_range
>
>
> class StrEnum(Enum):
> @@ -60,6 +61,80 @@ class Compiler(StrEnum):
> msvc = auto()
>
>
> +@dataclass(slots=True, frozen=True)
> +class CPU:
> + cpu: int
> + core: int
> + socket: int
> + node: int
> +
> + def __str__(self) -> str:
> + return str(self.cpu)
> +
> +
> +class CPUList(object):
> + """
> + Convert these options into a list of int cpus
> + cpu_list=[CPU1, CPU2] - a list of CPUs
> + cpu_list=[0,1,2,3] - a list of int indices
> + cpu_list=['0','1','2-3'] - a list of str indices; ranges are supported
> + cpu_list='0,1,2-3' - a comma delimited str of indices; ranges are
> supported
> +
> + The class creates a unified format used across the framework and
> allows
> + the user to use either a str representation (using str(instance) or
> directly
> + in f-strings) or a list representation (by accessing
> instance.cpu_list).
> + Empty cpu_list is allowed.
> + """
> +
> + _cpu_list: list[int]
> +
> + def __init__(self, cpu_list: list[int | str | CPU] | str):
> + self._cpu_list = []
> + if isinstance(cpu_list, str):
> + self._from_str(cpu_list.split(","))
> + else:
> + self._from_str((str(cpu) for cpu in cpu_list))
> +
> + # the input cpus may not be sorted
> + self._cpu_list.sort()
> +
> + @property
> + def cpu_list(self) -> list[int]:
> + return self._cpu_list
> +
> + def _from_str(self, cpu_list: Iterable[str]) -> None:
> + for cpu in cpu_list:
> + self._cpu_list.extend(expand_range(cpu))
> +
> + def _get_consecutive_cpus_range(self, cpu_list: list[int]) ->
> list[str]:
> + formatted_core_list = []
> + tmp_cpus_list = list(sorted(cpu_list))
> + segment = tmp_cpus_list[:1]
> + for core_id in tmp_cpus_list[1:]:
> + if core_id - segment[-1] == 1:
> + segment.append(core_id)
> + else:
> + formatted_core_list.append(
> + f"{segment[0]}-{segment[-1]}"
> + if len(segment) > 1
> + else f"{segment[0]}"
> + )
> + current_core_index = tmp_cpus_list.index(core_id)
> + formatted_core_list.extend(
> +
> self._get_consecutive_cpus_range(tmp_cpus_list[current_core_index:])
> + )
> + segment.clear()
> + break
> + if len(segment) > 0:
> + formatted_core_list.append(
> + f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else
> f"{segment[0]}"
> + )
> + return formatted_core_list
> +
> + def __str__(self) -> str:
> + return
> f'{",".join(self._get_consecutive_cpus_range(self._cpu_list))}'
> +
> +
> # Slots enables some optimizations, by pre-allocating space for the
> defined
> # attributes in the underlying data structure.
> #
> @@ -71,7 +146,11 @@ class NodeConfiguration:
> hostname: str
> user: str
> password: str | None
> + arch: Architecture
> os: OS
> + bypass_core0: bool
> + cpus: CPUList
> + memory_channels: int
>
> @staticmethod
> def from_dict(d: dict) -> "NodeConfiguration":
> @@ -80,7 +159,11 @@ def from_dict(d: dict) -> "NodeConfiguration":
> hostname=d["hostname"],
> user=d["user"],
> password=d.get("password"),
> + arch=Architecture(d["arch"]),
> os=OS(d["os"]),
> + bypass_core0=d.get("bypass_core0", False),
> + cpus=CPUList(d.get("cpus", "1")),
> + memory_channels=d.get("memory_channels", 1),
> )
>
>
> diff --git a/dts/framework/config/conf_yaml_schema.json
> b/dts/framework/config/conf_yaml_schema.json
> index 409ce7ac74..c59d3e30e6 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -6,6 +6,12 @@
> "type": "string",
> "description": "A unique identifier for a node"
> },
> + "ARCH": {
> + "type": "string",
> + "enum": [
> + "x86_64"
>
arm64 and ppc64le should probably be included here. I think that we can
focus on 64 bit arches for now.
> + ]
> + },
> "OS": {
> "type": "string",
> "enum": [
> @@ -82,8 +88,23 @@
> "type": "string",
> "description": "The password to use on this node. Use only as
> a last resort. SSH keys are STRONGLY preferred."
> },
> + "arch": {
> + "$ref": "#/definitions/ARCH"
> + },
> "os": {
> "$ref": "#/definitions/OS"
> + },
> + "bypass_core0": {
> + "type": "boolean",
> + "description": "Indicate that DPDK should omit using the
> first core."
> + },
> + "cpus": {
> + "type": "string",
> + "description": "Optional comma-separated list of cpus to use,
> e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all cpus."
> + },
> + "memory_channels": {
> + "type": "integer",
> + "description": "How many memory channels to use. Optional,
> defaults to 1."
> }
> },
> "additionalProperties": false,
> @@ -91,6 +112,7 @@
> "name",
> "hostname",
> "user",
> + "arch",
> "os"
> ]
> },
> diff --git a/dts/framework/remote_session/os/linux_session.py
> b/dts/framework/remote_session/os/linux_session.py
> index 39e80631dd..21f117b714 100644
> --- a/dts/framework/remote_session/os/linux_session.py
> +++ b/dts/framework/remote_session/os/linux_session.py
> @@ -2,6 +2,8 @@
> # Copyright(c) 2022 PANTHEON.tech s.r.o.
> # Copyright(c) 2022 University of New Hampshire
>
> +from framework.config import CPU
> +
> from .posix_session import PosixSession
>
>
> @@ -9,3 +11,16 @@ class LinuxSession(PosixSession):
> """
> The implementation of non-Posix compliant parts of Linux remote
> sessions.
> """
> +
> + def get_remote_cpus(self, bypass_core0: bool) -> list[CPU]:
> + cpu_info = self.remote_session.send_command(
> + "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#"
> + ).stdout
> + cpus = []
> + for cpu_line in cpu_info.splitlines():
> + cpu, core, socket, node = cpu_line.split(",")
> + if bypass_core0 and core == 0 and socket == 0:
> + self.logger.info("Core0 bypassed.")
> + continue
> + cpus.append(CPU(int(cpu), int(core), int(socket), int(node)))
> + return cpus
> diff --git a/dts/framework/remote_session/os/os_session.py
> b/dts/framework/remote_session/os/os_session.py
> index 57e2865282..6f6b6a979e 100644
> --- a/dts/framework/remote_session/os/os_session.py
> +++ b/dts/framework/remote_session/os/os_session.py
> @@ -3,9 +3,10 @@
> # Copyright(c) 2022 University of New Hampshire
>
> from abc import ABC, abstractmethod
> +from collections.abc import Iterable
> from pathlib import PurePath
>
> -from framework.config import Architecture, NodeConfiguration
> +from framework.config import CPU, Architecture, NodeConfiguration
> from framework.logger import DTSLOG
> from framework.remote_session.factory import create_remote_session
> from framework.remote_session.remote_session import RemoteSession
> @@ -130,3 +131,16 @@ def get_dpdk_version(self, version_path: str |
> PurePath) -> str:
> """
> Inspect DPDK version on the remote node from version_path.
> """
> +
> + @abstractmethod
> + def get_remote_cpus(self, bypass_core0: bool) -> list[CPU]:
> + """
> + Compose a list of CPUs present on the remote node.
> + """
> +
> + @abstractmethod
> + def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) ->
> None:
> + """
> + Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
> + dpdk_prefix_list is empty, attempt to find running DPDK apps to
> kill and clean.
> + """
> diff --git a/dts/framework/remote_session/os/posix_session.py
> b/dts/framework/remote_session/os/posix_session.py
> index a36b8e8c1a..7151263c7a 100644
> --- a/dts/framework/remote_session/os/posix_session.py
> +++ b/dts/framework/remote_session/os/posix_session.py
> @@ -2,6 +2,8 @@
> # Copyright(c) 2022 PANTHEON.tech s.r.o.
> # Copyright(c) 2022 University of New Hampshire
>
> +import re
> +from collections.abc import Iterable
> from pathlib import PurePath, PurePosixPath
>
> from framework.config import Architecture
> @@ -138,3 +140,81 @@ def get_dpdk_version(self, build_dir: str | PurePath)
> -> str:
> f"cat {self.join_remote_path(build_dir, 'VERSION')}",
> verify=True
> )
> return out.stdout
> +
> + def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) ->
> None:
> + self.logger.info("Cleaning up DPDK apps.")
> + dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
> + if dpdk_runtime_dirs:
> + # kill and cleanup only if DPDK is running
> + dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs)
> + for dpdk_pid in dpdk_pids:
> + self.remote_session.send_command(f"kill -9 {dpdk_pid}",
> 20)
> + self._check_dpdk_hugepages(dpdk_runtime_dirs)
> + self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
> +
> + def _get_dpdk_runtime_dirs(
> + self, dpdk_prefix_list: Iterable[str]
> + ) -> list[PurePosixPath]:
> + prefix = PurePosixPath("/var", "run", "dpdk")
> + if not dpdk_prefix_list:
> + remote_prefixes = self._list_remote_dirs(prefix)
> + if not remote_prefixes:
> + dpdk_prefix_list = []
> + else:
> + dpdk_prefix_list = remote_prefixes
> +
> + return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in
> dpdk_prefix_list]
> +
> + def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str]
> | None:
> + """
> + Return a list of directories of the remote_dir.
> + If remote_path doesn't exist, return None.
> + """
> + out = self.remote_session.send_command(
> + f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
> + ).stdout
> + if "No such file or directory" in out:
> + return None
> + else:
> + return out.splitlines()
> +
> + def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath])
> -> list[int]:
> + pids = []
> + pid_regex = r"p(\d+)"
> + for dpdk_runtime_dir in dpdk_runtime_dirs:
> + dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config")
> + if self._remote_files_exists(dpdk_config_file):
> + out = self.remote_session.send_command(
> + f"lsof -Fp {dpdk_config_file}"
> + ).stdout
> + if out and "No such file or directory" not in out:
> + for out_line in out.splitlines():
> + match = re.match(pid_regex, out_line)
> + if match:
> + pids.append(int(match.group(1)))
> + return pids
> +
> + def _remote_files_exists(self, remote_path: PurePath) -> bool:
> + result = self.remote_session.send_command(f"test -e
> {remote_path}")
> + return not result.return_code
> +
> + def _check_dpdk_hugepages(
> + self, dpdk_runtime_dirs: Iterable[str | PurePath]
> + ) -> None:
> + for dpdk_runtime_dir in dpdk_runtime_dirs:
> + hugepage_info = PurePosixPath(dpdk_runtime_dir,
> "hugepage_info")
> + if self._remote_files_exists(hugepage_info):
> + out = self.remote_session.send_command(
> + f"lsof -Fp {hugepage_info}"
> + ).stdout
> + if out and "No such file or directory" not in out:
> + self.logger.warning("Some DPDK processes did not free
> hugepages.")
> +
> self.logger.warning("*******************************************")
> + self.logger.warning(out)
> +
> self.logger.warning("*******************************************")
> +
> + def _remove_dpdk_runtime_dirs(
> + self, dpdk_runtime_dirs: Iterable[str | PurePath]
> + ) -> None:
> + for dpdk_runtime_dir in dpdk_runtime_dirs:
> + self.remove_remote_dir(dpdk_runtime_dir)
> diff --git a/dts/framework/testbed_model/hw/__init__.py
> b/dts/framework/testbed_model/hw/__init__.py
> new file mode 100644
> index 0000000000..7d79a7efd0
> --- /dev/null
> +++ b/dts/framework/testbed_model/hw/__init__.py
> @@ -0,0 +1,17 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +
> +from framework.config import CPU, CPUList
> +
> +from .cpu import CPUAmount, CPUAmountFilter, CPUFilter, CPUListFilter
> +
> +
> +def cpu_filter(
> + core_list: list[CPU], filter_specifier: CPUAmount | CPUList,
> ascending: bool
> +) -> CPUFilter:
> + if isinstance(filter_specifier, CPUList):
> + return CPUListFilter(core_list, filter_specifier, ascending)
> + elif isinstance(filter_specifier, CPUAmount):
> + return CPUAmountFilter(core_list, filter_specifier, ascending)
> + else:
> + raise ValueError(f"Unsupported filter r{filter_specifier}")
> diff --git a/dts/framework/testbed_model/hw/cpu.py
> b/dts/framework/testbed_model/hw/cpu.py
> new file mode 100644
> index 0000000000..87e87bcb4e
> --- /dev/null
> +++ b/dts/framework/testbed_model/hw/cpu.py
> @@ -0,0 +1,164 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +
> +import dataclasses
> +from abc import ABC, abstractmethod
> +from collections.abc import Iterable
> +
> +from framework.config import CPU, CPUList
> +
> +
> +@dataclasses.dataclass(slots=True, frozen=True)
> +class CPUAmount:
> + """
> + Define the amounts of cpus to use. If sockets is not None,
> socket_amount
> + is ignored.
> + """
> +
> + cpus_per_core: int = 1
> + cores_per_socket: int = 2
> + socket_amount: int = 1
> + sockets: list[int] | None = None
> +
> +
> +class CPUFilter(ABC):
> + """
> + Filter according to the input filter specifier. Each filter needs to
> be
> + implemented in a derived class.
> + This class only implements operations common to all filters, such as
> sorting
> + the list to be filtered beforehand.
> + """
> +
> + _filter_specifier: CPUAmount | CPUList
> + _cpus_to_filter: list[CPU]
> +
> + def __init__(
> + self,
> + core_list: list[CPU],
> + filter_specifier: CPUAmount | CPUList,
> + ascending: bool = True,
> + ) -> None:
> + self._filter_specifier = filter_specifier
> +
> + # sorting by core is needed in case hyperthreading is enabled
> + self._cpus_to_filter = sorted(
> + core_list, key=lambda x: x.core, reverse=not ascending
> + )
> + self.filter()
> +
> + @abstractmethod
> + def filter(self) -> list[CPU]:
> + """
> + Use the input self._filter_specifier to filter
> self._cpus_to_filter
> + and return the list of filtered CPUs. self._cpus_to_filter is a
> + sorted copy of the original list, so it may be modified.
> + """
> +
> +
> +class CPUAmountFilter(CPUFilter):
> + """
> + Filter the input list of CPUs according to specified rules:
> + Use cores from the specified amount of sockets or from the specified
> socket ids.
> + If sockets is specified, it takes precedence over socket_amount.
> + From each of those sockets, use only cores_per_socket of cores.
> + And for each core, use cpus_per_core of cpus. Hyperthreading
> + must be enabled for this to take effect.
> + If ascending is True, use cores with the lowest numerical id first
> + and continue in ascending order. If False, start with the highest
> + id and continue in descending order. This ordering affects which
> + sockets to consider first as well.
> + """
> +
> + _filter_specifier: CPUAmount
> +
> + def filter(self) -> list[CPU]:
> + return
> self._filter_cpus(self._filter_sockets(self._cpus_to_filter))
> +
> + def _filter_sockets(self, cpus_to_filter: Iterable[CPU]) -> list[CPU]:
> + allowed_sockets: set[int] = set()
> + socket_amount = self._filter_specifier.socket_amount
> + if self._filter_specifier.sockets:
> + socket_amount = len(self._filter_specifier.sockets)
> + allowed_sockets = set(self._filter_specifier.sockets)
> +
> + filtered_cpus = []
> + for cpu in cpus_to_filter:
> + if not self._filter_specifier.sockets:
> + if len(allowed_sockets) < socket_amount:
> + allowed_sockets.add(cpu.socket)
> + if cpu.socket in allowed_sockets:
> + filtered_cpus.append(cpu)
> +
> + if len(allowed_sockets) < socket_amount:
> + raise ValueError(
> + f"The amount of sockets from which to use cores "
> + f"({socket_amount}) exceeds the actual amount present "
> + f"on the node ({len(allowed_sockets)})"
> + )
> +
> + return filtered_cpus
> +
> + def _filter_cpus(self, cpus_to_filter: Iterable[CPU]) -> list[CPU]:
> + # no need to use ordered dict, from Python3.7 the dict
> + # insertion order is preserved (LIFO).
> + allowed_cpu_per_core_count_map: dict[int, int] = {}
> + filtered_cpus = []
> + for cpu in cpus_to_filter:
> + if cpu.core in allowed_cpu_per_core_count_map:
> + cpu_count = allowed_cpu_per_core_count_map[cpu.core]
> + if self._filter_specifier.cpus_per_core > cpu_count:
> + # only add cpus of the given core
> + allowed_cpu_per_core_count_map[cpu.core] += 1
> + filtered_cpus.append(cpu)
> + else:
> + raise ValueError(
> + f"The amount of CPUs per core to use "
> + f"({self._filter_specifier.cpus_per_core}) "
> + f"exceeds the actual amount present. Is
> hyperthreading enabled?"
> + )
> + elif self._filter_specifier.cores_per_socket > len(
> + allowed_cpu_per_core_count_map
> + ):
> + # only add cpus if we need more
> + allowed_cpu_per_core_count_map[cpu.core] = 1
> + filtered_cpus.append(cpu)
> + else:
> + # cpus are sorted by core, at this point we won't
> encounter new cores
> + break
> +
> + cores_per_socket = len(allowed_cpu_per_core_count_map)
> + if cores_per_socket < self._filter_specifier.cores_per_socket:
> + raise ValueError(
> + f"The amount of cores per socket to use "
> + f"({self._filter_specifier.cores_per_socket}) "
> + f"exceeds the actual amount present ({cores_per_socket})"
> + )
> +
> + return filtered_cpus
> +
> +
> +class CPUListFilter(CPUFilter):
> + """
> + Filter the input list of CPUs according to the input list of
> + core indices.
> + An empty CPUList won't filter anything.
> + """
> +
> + _filter_specifier: CPUList
> +
> + def filter(self) -> list[CPU]:
> + if not len(self._filter_specifier.cpu_list):
> + return self._cpus_to_filter
> +
> + filtered_cpus = []
> + for core in self._cpus_to_filter:
> + if core.cpu in self._filter_specifier.cpu_list:
> + filtered_cpus.append(core)
> +
> + if len(filtered_cpus) != len(self._filter_specifier.cpu_list):
> + raise ValueError(
> + f"Not all cpus from {self._filter_specifier.cpu_list}
> were found"
> + f"among {self._cpus_to_filter}"
> + )
> +
> + return filtered_cpus
> diff --git a/dts/framework/testbed_model/node/node.py
> b/dts/framework/testbed_model/node/node.py
> index 86654e55ae..5ee7023335 100644
> --- a/dts/framework/testbed_model/node/node.py
> +++ b/dts/framework/testbed_model/node/node.py
> @@ -8,13 +8,16 @@
> """
>
> from framework.config import (
> + CPU,
> BuildTargetConfiguration,
> + CPUList,
> ExecutionConfiguration,
> NodeConfiguration,
> )
> from framework.exception import NodeCleanupError, NodeSetupError,
> convert_exception
> from framework.logger import DTSLOG, getLogger
> from framework.remote_session import OSSession, create_session
> +from framework.testbed_model.hw import CPUAmount, cpu_filter
>
>
> class Node(object):
> @@ -28,6 +31,7 @@ class Node(object):
> main_session: OSSession
> logger: DTSLOG
> config: NodeConfiguration
> + cpus: list[CPU]
> _other_sessions: list[OSSession]
>
> def __init__(self, node_config: NodeConfiguration):
> @@ -38,6 +42,7 @@ def __init__(self, node_config: NodeConfiguration):
> self.logger = getLogger(self.name)
> self.logger.info(f"Created node: {self.name}")
> self.main_session = create_session(self.config, self.name,
> self.logger)
> + self._get_remote_cpus()
>
> @convert_exception(NodeSetupError)
> def setup_execution(self, execution_config: ExecutionConfiguration)
> -> None:
> @@ -109,6 +114,37 @@ def create_session(self, name: str) -> OSSession:
> self._other_sessions.append(connection)
> return connection
>
> + def filter_cpus(
> + self,
> + filter_specifier: CPUAmount | CPUList,
> + ascending: bool = True,
> + ) -> list[CPU]:
> + """
> + Filter the logical cpus found on the Node according to specified
> rules:
> + Use cores from the specified amount of sockets or from the
> specified
> + socket ids. If sockets is specified, it takes precedence over
> socket_amount.
> + From each of those sockets, use only cpus_per_socket of cores.
> + And for each core, use cpus_per_core of cpus. Hyperthreading
> + must be enabled for this to take effect.
> + If ascending is True, use cores with the lowest numerical id first
> + and continue in ascending order. If False, start with the highest
> + id and continue in descending order. This ordering affects which
> + sockets to consider first as well.
> + """
> + self.logger.info("Filtering ")
> + return cpu_filter(
> + self.cpus,
> + filter_specifier,
> + ascending,
> + ).filter()
> +
> + def _get_remote_cpus(self) -> None:
> + """
> + Scan cpus in the remote OS and store a list of CPUs.
> + """
> + self.logger.info("Getting CPU information.")
> + self.cpus =
> self.main_session.get_remote_cpus(self.config.bypass_core0)
> +
> def close(self) -> None:
> """
> Close all connections and free other resources.
> diff --git a/dts/framework/testbed_model/node/sut_node.py
> b/dts/framework/testbed_model/node/sut_node.py
> index 53268a7565..ff3be845b4 100644
> --- a/dts/framework/testbed_model/node/sut_node.py
> +++ b/dts/framework/testbed_model/node/sut_node.py
> @@ -4,10 +4,13 @@
>
> import os
> import tarfile
> +import time
> from pathlib import PurePath
>
> -from framework.config import BuildTargetConfiguration, NodeConfiguration
> +from framework.config import CPU, BuildTargetConfiguration, CPUList,
> NodeConfiguration
> +from framework.remote_session import OSSession
> from framework.settings import SETTINGS
> +from framework.testbed_model.hw import CPUAmount, CPUListFilter
> from framework.utils import EnvVarsDict, skip_setup
>
> from .node import Node
> @@ -21,19 +24,31 @@ class SutNode(Node):
> Another key capability is building DPDK according to given build
> target.
> """
>
> + cpus: list[CPU]
> + dpdk_prefix_list: list[str]
> + dpdk_prefix_subfix: str
> _build_target_config: BuildTargetConfiguration | None
> _env_vars: EnvVarsDict
> _remote_tmp_dir: PurePath
> __remote_dpdk_dir: PurePath | None
> _app_compile_timeout: float
> + _dpdk_kill_session: OSSession | None
>
> def __init__(self, node_config: NodeConfiguration):
> super(SutNode, self).__init__(node_config)
> + self.dpdk_prefix_list = []
> self._build_target_config = None
> self._env_vars = EnvVarsDict()
> self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
> self.__remote_dpdk_dir = None
> self._app_compile_timeout = 90
> + self._dpdk_kill_session = None
> +
> + # filter the node cpus according to user config
> + self.cpus = CPUListFilter(self.cpus, self.config.cpus).filter()
> + self.dpdk_prefix_subfix = (
> + f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S',
> time.localtime())}"
> + )
>
> @property
> def _remote_dpdk_dir(self) -> PurePath:
> @@ -142,3 +157,164 @@ def build_dpdk_app(self, app_name: str) -> PurePath:
> return self.main_session.join_remote_path(
> build_dir, "examples", f"dpdk-{app_name}"
> )
> +
> + def kill_cleanup_dpdk_apps(self) -> None:
> + """
> + Kill all dpdk applications on the SUT. Cleanup hugepages.
> + """
> + if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
> + # we can use the session if it exists and responds
> +
> self._dpdk_kill_session.kill_cleanup_dpdk_apps(self.dpdk_prefix_list)
> + else:
> + # otherwise, we need to (re)create it
> + self._dpdk_kill_session = self.create_session("dpdk_kill")
> + self.dpdk_prefix_list = []
> +
> + def create_eal_parameters(
> + self,
> + fixed_prefix: bool = False,
> + core_filter_specifier: CPUAmount | CPUList = CPUAmount(),
> + ascending_cores: bool = True,
> + prefix: str = "",
> + no_pci: bool = False,
> + vdevs: list[str] = None,
>
I would prefer to have vdevs be a list of objects, even if for now that
class just takes a string in its constructor. Later on we can add
subclasses for specific vdevs that might see heavy use, such
as librte_net_pcap and crypto_openssl.
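As a sketch of that object model (class and parameter names hypothetical):

    class VirtualDevice:
        def __init__(self, name: str) -> None:
            self.name = name

        def __str__(self) -> str:
            return self.name

    class PcapVdev(VirtualDevice):
        # renders e.g. net_pcap0,rx_pcap=in.pcap,tx_pcap=out.pcap
        def __init__(self, index: int, rx_pcap: str, tx_pcap: str) -> None:
            super().__init__(
                f"net_pcap{index},rx_pcap={rx_pcap},tx_pcap={tx_pcap}"
            )

_make_vdevs_param() could then stay as it is, since str(vdev) in the
f-string still produces the --vdev argument value.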
> + other_eal_param: str = "",
> + ) -> str:
> + """
> + Generate eal parameters character string;
> + :param fixed_prefix: use fixed file-prefix or not, when it is
> true,
> + the file-prefix will not be added a timestamp
> + :param core_filter_specifier: an amount of cpus/cores/sockets to
> use
> + or a list of cpu ids to use.
> + The default will select one cpu for each of two
> cores
> + on one socket, in ascending order of core ids.
> + :param ascending_cores: True, use cores with the lowest numerical
> id first
> + and continue in ascending order. If False, start
> with the
> + highest id and continue in descending order. This
> ordering
> + affects which sockets to consider first as well.
> + :param prefix: set file prefix string, eg:
> + prefix='vf';
> + :param no_pci: switch of disable PCI bus eg:
> + no_pci=True;
> + :param vdevs: virtual device list, eg:
> + vdevs=['net_ring0', 'net_ring1'];
> + :param other_eal_param: user defined DPDK eal parameters, eg:
> + other_eal_param='--single-file-segments';
> + :return: eal param string, eg:
> + '-c 0xf -a 0000:88:00.0
> --file-prefix=dpdk_1112_20190809143420';
> + if DPDK version < 20.11-rc4, eal_str eg:
> + '-c 0xf -w 0000:88:00.0
> --file-prefix=dpdk_1112_20190809143420';
> + """
> + if vdevs is None:
> + vdevs = []
> +
> + config = {
> + "core_filter_specifier": core_filter_specifier,
> + "ascending_cores": ascending_cores,
> + "prefix": prefix,
> + "no_pci": no_pci,
> + "vdevs": vdevs,
> + "other_eal_param": other_eal_param,
> + }
> +
> + eal_parameter_creator = _EalParameter(
> + sut_node=self, fixed_prefix=fixed_prefix, **config
> + )
> + eal_str = eal_parameter_creator.make_eal_param()
> +
> + return eal_str
> +
> +
> +class _EalParameter(object):
> + def __init__(
> + self,
> + sut_node: SutNode,
> + fixed_prefix: bool,
> + core_filter_specifier: CPUAmount | CPUList,
> + ascending_cores: bool,
> + prefix: str,
> + no_pci: bool,
> + vdevs: list[str],
> + other_eal_param: str,
> + ):
> + """
> + Generate eal parameters character string;
> + :param sut_node: SUT Node;
> + :param fixed_prefix: use fixed file-prefix or not, when it is
> true,
> + the file-prefix will not be added a timestamp
> + :param core_filter_specifier: an amount of cpus/cores/sockets to
> use
> + or a list of cpu ids to use.
> + :param ascending_cores: True, use cores with the lowest numerical
> id first
> + and continue in ascending order. If False, start
> with the
> + highest id and continue in descending order. This
> ordering
> + affects which sockets to consider first as well.
> + :param prefix: set file prefix string, eg:
> + prefix='vf';
> + :param no_pci: switch of disable PCI bus eg:
> + no_pci=True;
> + :param vdevs: virtual device list, eg:
> + vdevs=['net_ring0', 'net_ring1'];
> + :param other_eal_param: user defined DPDK eal parameters, eg:
> + other_eal_param='--single-file-segments';
> + """
> + self.os = sut_node.config.os
> + self.fixed_prefix = fixed_prefix
> + self.sut_node = sut_node
> + self.core_filter_specifier = core_filter_specifier
> + self.ascending_cores = ascending_cores
> + self.prefix = prefix
> + self.no_pci = no_pci
> + self.vdevs = vdevs
> + self.other_eal_param = other_eal_param
> +
> + def _make_lcores_param(self) -> str:
> + filtered_cpus = self.sut_node.filter_cpus(
> + self.core_filter_specifier, self.ascending_cores
> + )
> + return f"-l {CPUList(filtered_cpus)}"
> +
> + def _make_memory_channels(self) -> str:
> + param_template = "-n {}"
> + return param_template.format(self.sut_node.config.memory_channels)
> +
> + def _make_no_pci_param(self) -> str:
> + if self.no_pci is True:
> + return "--no-pci"
> + else:
> + return ""
> +
> + def _make_prefix_param(self) -> str:
> + if self.prefix == "":
> + fixed_file_prefix = f"dpdk_{self.sut_node.dpdk_prefix_subfix}"
> + else:
> + fixed_file_prefix = self.prefix
> + if not self.fixed_prefix:
> + fixed_file_prefix = (
> +
> f"{fixed_file_prefix}_{self.sut_node.dpdk_prefix_subfix}"
> + )
> + fixed_file_prefix =
> self._do_os_handle_with_prefix_param(fixed_file_prefix)
> + return fixed_file_prefix
> +
> + def _make_vdevs_param(self) -> str:
> + if len(self.vdevs) == 0:
> + return ""
> + else:
> + return " ".join(f"--vdev {vdev}" for vdev in self.vdevs)
> +
> + def _do_os_handle_with_prefix_param(self, file_prefix: str) -> str:
> + self.sut_node.dpdk_prefix_list.append(file_prefix)
> + return f"--file-prefix={file_prefix}"
> +
> + def make_eal_param(self) -> str:
> + _eal_str = " ".join(
> + [
> + self._make_lcores_param(),
> + self._make_memory_channels(),
> + self._make_prefix_param(),
> + self._make_no_pci_param(),
> + self._make_vdevs_param(),
> + # append user defined eal parameters
> + self.other_eal_param,
> + ]
> + )
> + return _eal_str
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index 91e58f3218..3c2f0adff9 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -32,6 +32,26 @@ def skip_setup(func) -> Callable[..., None]:
> return func
>
>
> +def expand_range(range_str: str) -> list[int]:
> + """
> + Process range string into a list of integers. There are two possible
> formats:
> + n - a single integer
> + n-m - a range of integers
> +
> + The returned range includes both n and m. Empty string returns an
> empty list.
> + """
> + expanded_range: list[int] = []
> + if range_str:
> + range_boundaries = range_str.split("-")
> + # will throw an exception when items in range_boundaries can't be
> converted,
> + # serving as type check
> + expanded_range.extend(
> + range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
> + )
> +
> + return expanded_range
> +
> +
> def GREEN(text: str) -> str:
> return f"\u001B[32;1m{str(text)}\u001B[0m"
>
> --
> 2.30.2
>
>
[-- Attachment #2: Type: text/html, Size: 44466 bytes --]
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [RFC PATCH v2 05/10] dts: add node memory setup
2022-11-14 16:54 ` [RFC PATCH v2 05/10] dts: add node memory setup Juraj Linkeš
@ 2022-11-16 13:47 ` Owen Hilyard
2022-11-23 13:58 ` Juraj Linkeš
0 siblings, 1 reply; 97+ messages in thread
From: Owen Hilyard @ 2022-11-16 13:47 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
[-- Attachment #1: Type: text/plain, Size: 24115 bytes --]
On Mon, Nov 14, 2022 at 11:54 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Setup hugepages on nodes. This is useful not only on SUT nodes, but
> also on TG nodes which use TGs that utilize hugepages.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/framework/remote_session/__init__.py | 1 +
> dts/framework/remote_session/arch/__init__.py | 20 +++++
> dts/framework/remote_session/arch/arch.py | 57 +++++++++++++
> .../remote_session/os/linux_session.py | 85 +++++++++++++++++++
> dts/framework/remote_session/os/os_session.py | 10 +++
> dts/framework/testbed_model/node/node.py | 15 +++-
> 6 files changed, 187 insertions(+), 1 deletion(-)
> create mode 100644 dts/framework/remote_session/arch/__init__.py
> create mode 100644 dts/framework/remote_session/arch/arch.py
>
> diff --git a/dts/framework/remote_session/__init__.py
> b/dts/framework/remote_session/__init__.py
> index f2339b20bd..f0deeadac6 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -11,4 +11,5 @@
>
> # pylama:ignore=W0611
>
> +from .arch import Arch, create_arch
> from .os import OSSession, create_session
> diff --git a/dts/framework/remote_session/arch/__init__.py
> b/dts/framework/remote_session/arch/__init__.py
> new file mode 100644
> index 0000000000..d78ad42ac5
> --- /dev/null
> +++ b/dts/framework/remote_session/arch/__init__.py
> @@ -0,0 +1,20 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +
> +from framework.config import Architecture, NodeConfiguration
> +
> +from .arch import PPC64, Arch, Arm64, i686, x86_32, x86_64
> +
> +
> +def create_arch(node_config: NodeConfiguration) -> Arch:
> + match node_config.arch:
> + case Architecture.x86_64:
> + return x86_64()
> + case Architecture.x86_32:
> + return x86_32()
> + case Architecture.i686:
> + return i686()
> + case Architecture.ppc64le:
> + return PPC64()
> + case Architecture.arm64:
> + return Arm64()
> diff --git a/dts/framework/remote_session/arch/arch.py
> b/dts/framework/remote_session/arch/arch.py
> new file mode 100644
> index 0000000000..05c7602def
> --- /dev/null
> +++ b/dts/framework/remote_session/arch/arch.py
> @@ -0,0 +1,57 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +
> +
> +class Arch(object):
> + """
> + Stores architecture-specific information.
> + """
> +
> + @property
> + def default_hugepage_memory(self) -> int:
> + """
> + Return the default amount of memory allocated for hugepages DPDK
> will use.
> + The default is an amount equal to 256 2MB hugepages (512MB
> memory).
> + """
> + return 256 * 2048
> +
> + @property
> + def hugepage_force_first_numa(self) -> bool:
> + """
> + An architecture may need to force configuration of hugepages to
> first socket.
> + """
> + return False
> +
> +
> +class x86_64(Arch):
> + @property
> + def default_hugepage_memory(self) -> int:
> + return 4096 * 2048
> +
> +
> +class x86_32(Arch):
> + @property
> + def hugepage_force_first_numa(self) -> bool:
> + return True
> +
> +
> +class i686(Arch):
> + @property
> + def default_hugepage_memory(self) -> int:
> + return 512 * 2048
> +
> + @property
> + def hugepage_force_first_numa(self) -> bool:
> + return True
> +
> +
> +class PPC64(Arch):
> + @property
> + def default_hugepage_memory(self) -> int:
> + return 512 * 2048
> +
> +
> +class Arm64(Arch):
> + @property
> + def default_hugepage_memory(self) -> int:
> + return 2048 * 2048
> diff --git a/dts/framework/remote_session/os/linux_session.py
> b/dts/framework/remote_session/os/linux_session.py
> index 21f117b714..fad33d7613 100644
> --- a/dts/framework/remote_session/os/linux_session.py
> +++ b/dts/framework/remote_session/os/linux_session.py
> @@ -3,6 +3,8 @@
> # Copyright(c) 2022 University of New Hampshire
>
> from framework.config import CPU
> +from framework.exception import RemoteCommandExecutionError
> +from framework.utils import expand_range
>
> from .posix_session import PosixSession
>
> @@ -24,3 +26,86 @@ def get_remote_cpus(self, bypass_core0: bool) ->
> list[CPU]:
> continue
> cpus.append(CPU(int(cpu), int(core), int(socket), int(node)))
> return cpus
> +
> + def setup_hugepages(
> + self, hugepage_amount: int = -1, force_first_numa: bool = False
>
I think that hugepage_amount: int | None = None is better, since it
expresses that it is an optional argument and the type checker will force
anyone using the value to check whether it is None, whereas that will not
happen with -1.
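A minimal sketch of what I mean (the fallback value inside the method is
only for illustration; it mirrors the Arch default below):

    def setup_hugepages(
        self, hugepage_amount: int | None = None, force_first_numa: bool = False
    ) -> None:
        if hugepage_amount is None:
            # hypothetical fallback; the real default could come from Arch
            hugepage_amount = 256 * 2048
        ...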
> + ) -> None:
> + self.logger.info("Getting Hugepage information.")
> + hugepage_size = self._get_hugepage_size()
> + hugepages_total = self._get_hugepages_total()
> + self._numa_nodes = self._get_numa_nodes()
> +
> + target_hugepages_total = int(hugepage_amount / hugepage_size)
> + if hugepage_amount % hugepage_size:
> + target_hugepages_total += 1
> + if force_first_numa or hugepages_total != target_hugepages_total:
> + # when forcing numa, we need to clear existing hugepages
> regardless
> + # of size, so they can be moved to the first numa node
> + self._configure_huge_pages(
> + target_hugepages_total, hugepage_size, force_first_numa
> + )
> + else:
> + self.logger.info("Hugepages already configured.")
> + self._mount_huge_pages()
> +
> + def _get_hugepage_size(self) -> int:
> + hugepage_size = self.remote_session.send_command(
> + "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
> + ).stdout
> + return int(hugepage_size)
> +
> + def _get_hugepages_total(self) -> int:
> + hugepages_total = self.remote_session.send_command(
> + "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
> + ).stdout
> + return int(hugepages_total)
> +
> + def _get_numa_nodes(self) -> list[int]:
> + try:
> + numa_range = self.remote_session.send_command(
> + "cat /sys/devices/system/node/online", verify=True
> + ).stdout
> + numa_range = expand_range(numa_range)
> + except RemoteCommandExecutionError:
> + # the file doesn't exist, meaning the node doesn't support
> numa
> + numa_range = []
> + return numa_range
> +
> + def _mount_huge_pages(self) -> None:
> + self.logger.info("Re-mounting Hugepages.")
> + hugepage_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts"
> + self.remote_session.send_command(f"umount $({hugepage_fs_cmd})")
> + result = self.remote_session.send_command(hugepage_fs_cmd)
> + if result.stdout == "":
> + remote_mount_path = "/mnt/huge"
> + self.remote_session.send_command(f"mkdir -p
> {remote_mount_path}")
> + self.remote_session.send_command(
> + f"mount -t hugetlbfs nodev {remote_mount_path}"
> + )
> +
> + def _supports_numa(self) -> bool:
> + # the system supports numa if self._numa_nodes is non-empty and
> there are more
> + # than one numa node (in the latter case it may actually support
> numa, but
> + # there's no reason to do any numa specific configuration)
> + return len(self._numa_nodes) > 1
> +
> + def _configure_huge_pages(
> + self, amount: int, size: int, force_first_numa: bool
> + ) -> None:
> + self.logger.info("Configuring Hugepages.")
> + hugepage_config_path = (
> + f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
> + )
> + if force_first_numa and self._supports_numa():
> + # clear non-numa hugepages
> + self.remote_session.send_command(
> + f"echo 0 | sudo tee {hugepage_config_path}"
> + )
> + hugepage_config_path = (
> + f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
> + f"/hugepages-{size}kB/nr_hugepages"
> + )
> +
> + self.remote_session.send_command(
> + f"echo {amount} | sudo tee {hugepage_config_path}"
> + )
> diff --git a/dts/framework/remote_session/os/os_session.py
> b/dts/framework/remote_session/os/os_session.py
> index 6f6b6a979e..f84f3ce63c 100644
> --- a/dts/framework/remote_session/os/os_session.py
> +++ b/dts/framework/remote_session/os/os_session.py
> @@ -144,3 +144,13 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list:
> Iterable[str]) -> None:
> Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
> dpdk_prefix_list is empty, attempt to find running DPDK apps to
> kill and clean.
> """
> +
> + @abstractmethod
> + def setup_hugepages(
> + self, hugepage_amount: int = -1, force_first_numa: bool = False
> + ) -> None:
> + """
> + Get the node's Hugepage Size, configure the specified amount of
> hugepages
> + if needed and mount the hugepages if needed.
> + If force_first_numa is True, configure hugepages just on the
> first socket.
> + """
> diff --git a/dts/framework/testbed_model/node/node.py
> b/dts/framework/testbed_model/node/node.py
> index 5ee7023335..96a1724f4c 100644
> --- a/dts/framework/testbed_model/node/node.py
> +++ b/dts/framework/testbed_model/node/node.py
> @@ -16,7 +16,7 @@
> )
> from framework.exception import NodeCleanupError, NodeSetupError,
> convert_exception
> from framework.logger import DTSLOG, getLogger
> -from framework.remote_session import OSSession, create_session
> +from framework.remote_session import Arch, OSSession, create_arch,
> create_session
> from framework.testbed_model.hw import CPUAmount, cpu_filter
>
>
> @@ -33,6 +33,7 @@ class Node(object):
> config: NodeConfiguration
> cpus: list[CPU]
> _other_sessions: list[OSSession]
> + _arch: Arch
>
> def __init__(self, node_config: NodeConfiguration):
> self.config = node_config
> @@ -42,6 +43,7 @@ def __init__(self, node_config: NodeConfiguration):
> self.logger = getLogger(self.name)
> self.logger.info(f"Created node: {self.name}")
> self.main_session = create_session(self.config, self.name,
> self.logger)
> + self._arch = create_arch(self.config)
> self._get_remote_cpus()
>
> @convert_exception(NodeSetupError)
> @@ -50,6 +52,7 @@ def setup_execution(self, execution_config:
> ExecutionConfiguration) -> None:
> Perform the execution setup that will be done for each execution
> this node is part of.
> """
> + self._setup_hugepages()
> self._setup_execution(execution_config)
>
> def _setup_execution(self, execution_config: ExecutionConfiguration)
> -> None:
> @@ -145,6 +148,16 @@ def _get_remote_cpus(self) -> None:
> self.logger.info("Getting CPU information.")
> self.cpus =
> self.main_session.get_remote_cpus(self.config.bypass_core0)
>
> + def _setup_hugepages(self):
> + """
> + Setup hugepages on the Node. Different architectures can supply
> different
> + amounts of memory for hugepages and numa-based hugepage
> allocation may need
> + to be considered.
> + """
> + self.main_session.setup_hugepages(
> + self._arch.default_hugepage_memory,
> self._arch.hugepage_force_first_numa
> + )
> +
> def close(self) -> None:
> """
> Close all connections and free other resources.
> --
> 2.30.2
>
>
[-- Attachment #2: Type: text/html, Size: 30166 bytes --]
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [RFC PATCH v2 07/10] dts: add simple stats report
2022-11-14 16:54 ` [RFC PATCH v2 07/10] dts: add simple stats report Juraj Linkeš
@ 2022-11-16 13:57 ` Owen Hilyard
0 siblings, 0 replies; 97+ messages in thread
From: Owen Hilyard @ 2022-11-16 13:57 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
[-- Attachment #1: Type: text/plain, Size: 4531 bytes --]
You are missing type annotations throughout this.
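For example, the constructor and save could be annotated along these lines
(just a sketch; Result comes from test_result in this series):

    from .test_result import Result

    class TestStats(object):
        def __init__(self, filename: str) -> None:
            self.filename = filename

        def save(self, result: Result) -> None:
            ...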
On Mon, Nov 14, 2022 at 11:54 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Provide a summary of testcase passed/failed/blocked counts.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/framework/dts.py | 3 ++
> dts/framework/stats_reporter.py | 65 +++++++++++++++++++++++++++++++++
> 2 files changed, 68 insertions(+)
> create mode 100644 dts/framework/stats_reporter.py
>
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index d606f8de2e..a7c243a5c3 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -14,11 +14,13 @@
> from .exception import DTSError, ReturnCode
> from .logger import DTSLOG, getLogger
> from .settings import SETTINGS
> +from .stats_reporter import TestStats
> from .test_result import Result
> from .utils import check_dts_python_version
>
> dts_logger: DTSLOG = getLogger("dts")
> result: Result = Result()
> +test_stats: TestStats = TestStats(SETTINGS.output_dir + "/statistics.txt")
>
>
> def run_all() -> None:
> @@ -29,6 +31,7 @@ def run_all() -> None:
> return_code = ReturnCode.NO_ERR
> global dts_logger
> global result
> + global test_stats
>
> # check the python version of the server that run dts
> check_dts_python_version()
> diff --git a/dts/framework/stats_reporter.py
> b/dts/framework/stats_reporter.py
> new file mode 100644
> index 0000000000..a2735d0a1d
> --- /dev/null
> +++ b/dts/framework/stats_reporter.py
> @@ -0,0 +1,65 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2010-2014 Intel Corporation
> +# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +
> +"""
> +Simple text file statistics generator
> +"""
> +
> +
> +class TestStats(object):
> + """
> + Generates a small statistics file containing the number of passing,
> + failing and blocked tests. It makes use of a Result instance as input.
> + """
> +
> + def __init__(self, filename):
> + self.filename = filename
> +
> + def __add_stat(self, test_result):
I think that this should probably be an Optional of an enum that gets
matched over, ex:

    match test_result:
        case None:
            pass
        case TestCaseResult.PASSED:
            self.passed += 1
        case TestCaseResult.FAILED:
            self.failed += 1
        case TestCaseResult.BLOCKED:
            self.blocked += 1
        case unknown:
            # log this and throw an error.
            raise ValueError(f"Unknown test result: {unknown}")
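Here TestCaseResult is an assumed name for a small enum along these lines
(a bare name like PASSED in a case clause would be a capture pattern, not
a value check, hence the qualified members above):

    from enum import Enum, auto

    class TestCaseResult(Enum):
        PASSED = auto()
        FAILED = auto()
        BLOCKED = auto()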
> + if test_result is not None:
> + if test_result[0] == "PASSED":
> + self.passed += 1
> + if test_result[0] == "FAILED":
> + self.failed += 1
> + if test_result[0] == "BLOCKED":
> + self.blocked += 1
> + self.total += 1
> +
> + def __count_stats(self):
> + for sut in self.result.all_suts():
> + for target in self.result.all_targets(sut):
> + for suite in self.result.all_test_suites(sut, target):
> + for case in self.result.all_test_cases(sut, target,
> suite):
> + test_result = self.result.result_for(sut, target,
> suite, case)
> + if len(test_result):
> + self.__add_stat(test_result)
> +
> + def __write_stats(self):
> + sut_nodes = self.result.all_suts()
> + if len(sut_nodes) == 1:
> + self.stats_file.write(
> + f"dpdk_version =
> {self.result.current_dpdk_version(sut_nodes[0])}\n"
> + )
> + else:
> + for sut in sut_nodes:
> + dpdk_version = self.result.current_dpdk_version(sut)
> + self.stats_file.write(f"{sut}.dpdk_version = {dpdk_version}\n")
> + self.__count_stats()
> + self.stats_file.write(f"Passed = {self.passed}\n")
> + self.stats_file.write(f"Failed = {self.failed}\n")
> + self.stats_file.write(f"Blocked = {self.blocked}\n")
> + rate = 0
> + if self.total > 0:
> + rate = self.passed * 100.0 / self.total
> + self.stats_file.write(f"Pass rate = {rate:.1f}\n")
> +
> + def save(self, result):
> + self.passed = 0
> + self.failed = 0
> + self.blocked = 0
> + self.total = 0
> + self.stats_file = open(self.filename, "w+")
> + self.result = result
> + self.__write_stats()
> + self.stats_file.close()
> --
> 2.30.2
>
>
[-- Attachment #2: Type: text/html, Size: 5726 bytes --]
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [RFC PATCH v2 08/10] dts: add testsuite class
2022-11-14 16:54 ` [RFC PATCH v2 08/10] dts: add testsuite class Juraj Linkeš
@ 2022-11-16 15:15 ` Owen Hilyard
0 siblings, 0 replies; 97+ messages in thread
From: Owen Hilyard @ 2022-11-16 15:15 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
[-- Attachment #1: Type: text/plain, Size: 20880 bytes --]
On Mon, Nov 14, 2022 at 11:54 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> This is the base class that all test suites inherit from. The base class
> implements methods common to all test suites. The derived test suites
> implement tests and any particular setup needed for the suite or tests.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/conf.yaml | 4 +
> dts/framework/config/__init__.py | 33 ++-
> dts/framework/config/conf_yaml_schema.json | 49 ++++
> dts/framework/dts.py | 29 +++
> dts/framework/exception.py | 65 ++++++
> dts/framework/settings.py | 25 +++
> dts/framework/test_case.py | 246 +++++++++++++++++++++
> 7 files changed, 450 insertions(+), 1 deletion(-)
> create mode 100644 dts/framework/test_case.py
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 976888a88e..0b0f2c59b0 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -7,6 +7,10 @@ executions:
> os: linux
> cpu: native
> compiler: gcc
> + perf: false
> + func: true
> + test_suites:
> + - hello_world
> system_under_test: "SUT 1"
> nodes:
> - name: "SUT 1"
> diff --git a/dts/framework/config/__init__.py
> b/dts/framework/config/__init__.py
> index 344d697a69..8874b10030 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -11,7 +11,7 @@
> import pathlib
> from dataclasses import dataclass
> from enum import Enum, auto, unique
> -from typing import Any, Iterable
> +from typing import Any, Iterable, TypedDict
>
> import warlock # type: ignore
> import yaml
> @@ -186,9 +186,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
> )
>
>
> +class TestSuiteConfigDict(TypedDict):
> + suite: str
> + cases: list[str]
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class TestSuiteConfig:
> + test_suite: str
> + test_cases: list[str]
> +
> + @staticmethod
> + def from_dict(
> + entry: str | TestSuiteConfigDict,
> + ) -> "TestSuiteConfig":
> + if isinstance(entry, str):
> + return TestSuiteConfig(test_suite=entry, test_cases=[])
> + elif isinstance(entry, dict):
> + return TestSuiteConfig(test_suite=entry["suite"],
> test_cases=entry["cases"])
> + else:
> + raise TypeError(f"{type(entry)} is not valid for a test suite
> config.")
> +
> +
> @dataclass(slots=True, frozen=True)
> class ExecutionConfiguration:
> build_targets: list[BuildTargetConfiguration]
> + perf: bool
> + func: bool
> + test_suites: list[TestSuiteConfig]
> system_under_test: NodeConfiguration
>
> @staticmethod
> @@ -196,11 +221,17 @@ def from_dict(d: dict, node_map: dict) ->
> "ExecutionConfiguration":
> build_targets: list[BuildTargetConfiguration] = list(
> map(BuildTargetConfiguration.from_dict, d["build_targets"])
> )
> + test_suites: list[TestSuiteConfig] = list(
> + map(TestSuiteConfig.from_dict, d["test_suites"])
> + )
> sut_name = d["system_under_test"]
> assert sut_name in node_map, f"Unknown SUT {sut_name} in
> execution {d}"
>
> return ExecutionConfiguration(
> build_targets=build_targets,
> + perf=d["perf"],
> + func=d["func"],
> + test_suites=test_suites,
> system_under_test=node_map[sut_name],
> )
>
> diff --git a/dts/framework/config/conf_yaml_schema.json
> b/dts/framework/config/conf_yaml_schema.json
> index c59d3e30e6..e37ced65fe 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -63,6 +63,31 @@
> }
> },
> "additionalProperties": false
> + },
> + "test_suite": {
> + "type": "string",
> + "enum": [
> + "hello_world"
> + ]
> + },
> + "test_target": {
> + "type": "object",
> + "properties": {
> + "suite": {
> + "$ref": "#/definitions/test_suite"
> + },
> + "cases": {
> + "type": "array",
> + "items": {
> + "type": "string"
> + },
> + "minimum": 1
> + }
> + },
> + "required": [
> + "suite"
> + ],
> + "additionalProperties": false
> }
> },
> "type": "object",
> @@ -130,6 +155,27 @@
> },
> "minimum": 1
> },
> + "perf": {
> + "type": "boolean",
> + "description": "Enable performance testing"
> + },
> + "func": {
> + "type": "boolean",
> + "description": "Enable functional testing"
> + },
> + "test_suites": {
> + "type": "array",
> + "items": {
> + "oneOf": [
> + {
> + "$ref": "#/definitions/test_suite"
> + },
> + {
> + "$ref": "#/definitions/test_target"
> + }
> + ]
> + }
> + },
> "system_under_test": {
> "$ref": "#/definitions/node_name"
> }
> @@ -137,6 +183,9 @@
> "additionalProperties": false,
> "required": [
> "build_targets",
> + "perf",
> + "func",
> + "test_suites",
> "system_under_test"
> ]
> },
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index a7c243a5c3..ba3f4b4168 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -15,6 +15,7 @@
> from .logger import DTSLOG, getLogger
> from .settings import SETTINGS
> from .stats_reporter import TestStats
> +from .test_case import TestCase
> from .test_result import Result
> from .utils import check_dts_python_version
>
> @@ -129,6 +130,34 @@ def run_suite(
> Use the given build_target to run the test suite with possibly only a
> subset
> of tests. If no subset is specified, run all tests.
> """
> + for test_suite_config in execution.test_suites:
> + result.test_suite = test_suite_config.test_suite
> + full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
> + testcase_classes = TestCase.get_testcases(full_suite_path)
> + dts_logger.debug(
> + f"Found testcase classes '{testcase_classes}' in
> '{full_suite_path}'"
> + )
> + for testcase_class in testcase_classes:
> + testcase = testcase_class(
> + sut_node, test_suite_config.test_suite, build_target,
> execution
> + )
> +
> + testcase.init_log()
> + testcase.set_requested_cases(SETTINGS.test_cases)
> + testcase.set_requested_cases(test_suite_config.test_cases)
> +
> + dts_logger.info(f"Running test suite
> '{testcase_class.__name__}'")
> + try:
> + testcase.execute_setup_all()
> + testcase.execute_test_cases()
> + dts_logger.info(
> + f"Finished running test suite
> '{testcase_class.__name__}'"
> + )
> + result.copy_suite(testcase.get_result())
> + test_stats.save(result)  # this was originally after teardown
> +
> + finally:
>
You should probably move the "finished" log message down here, so that it
always runs.
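That is, something along these lines (a sketch based on the code above):

    try:
        testcase.execute_setup_all()
        testcase.execute_test_cases()
        result.copy_suite(testcase.get_result())
        test_stats.save(result)  # this was originally after teardown
    finally:
        testcase.execute_tear_downall()
        dts_logger.info(
            f"Finished running test suite '{testcase_class.__name__}'"
        )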
> + testcase.execute_tear_downall()
>
>
> def quit_execution(nodes: Iterable[Node], return_code: ReturnCode) ->
> None:
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index 93d99432ae..a35eeff640 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -29,6 +29,10 @@ class ReturnCode(IntEnum):
> DPDK_BUILD_ERR = 10
> NODE_SETUP_ERR = 20
> NODE_CLEANUP_ERR = 21
> + SUITE_SETUP_ERR = 30
> + SUITE_EXECUTION_ERR = 31
> + TESTCASE_VERIFY_ERR = 32
> + SUITE_CLEANUP_ERR = 33
>
>
> class DTSError(Exception):
> @@ -153,6 +157,67 @@ def __init__(self):
> )
>
>
> +class TestSuiteNotFound(DTSError):
> + """
> + Raised when a configured test suite cannot be imported.
> + """
> +
> + return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_SETUP_ERR
> +
> +
> +class SuiteSetupError(DTSError):
> + """
> + Raised when an error occurs during suite setup.
> + """
> +
> + return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_SETUP_ERR
> +
> + def __init__(self):
> + super(SuiteSetupError, self).__init__("An error occurred during
> suite setup.")
> +
> +
> +class SuiteExecutionError(DTSError):
> + """
> + Raised when an error occurs during suite execution.
> + """
> +
> + return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_EXECUTION_ERR
> +
> + def __init__(self):
> + super(SuiteExecutionError, self).__init__(
> + "An error occurred during suite execution."
> + )
> +
> +
> +class VerifyError(DTSError):
> + """
> + To be used within the test cases to verify if a command output
> + is as it was expected.
> + """
> +
> + value: str
> + return_code: ClassVar[ReturnCode] = ReturnCode.TESTCASE_VERIFY_ERR
> +
> + def __init__(self, value: str):
> + self.value = value
> +
> + def __str__(self) -> str:
> + return repr(self.value)
> +
> +
> +class SuiteCleanupError(DTSError):
> + """
> + Raised when an error occurs during suite cleanup.
> + """
> +
> + return_code: ClassVar[ReturnCode] = ReturnCode.SUITE_CLEANUP_ERR
> +
> + def __init__(self):
> + super(SuiteCleanupError, self).__init__(
> + "An error occurred during suite cleanup."
> + )
> +
> +
> def convert_exception(exception: type[DTSError]) -> Callable[...,
> Callable[..., None]]:
> """
> When a non-DTS exception is raised while executing the decorated
> function,
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index e2bf3d2ce4..069f28ce81 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -64,6 +64,8 @@ class _Settings:
> skip_setup: bool
> dpdk_ref: Path
> compile_timeout: float
> + test_cases: list
> + re_run: int
>
>
> def _get_parser() -> argparse.ArgumentParser:
> @@ -138,6 +140,25 @@ def _get_parser() -> argparse.ArgumentParser:
> help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
> )
>
> + parser.add_argument(
> + "--test-cases",
> + action=_env_arg("DTS_TESTCASES"),
> + default="",
> + required=False,
> + help="[DTS_TESTCASES] Comma-separated list of testcases to
> execute",
> + )
> +
> + parser.add_argument(
> + "--re-run",
> + "--re_run",
> + action=_env_arg("DTS_RERUN"),
> + default=0,
> + type=int,
> + required=False,
> + help="[DTS_RERUN] Re-run tests the specified amount of times if a
> test failure "
> + "occurs",
> + )
> +
> return parser
>
>
> @@ -151,6 +172,10 @@ def _get_settings() -> _Settings:
> skip_setup=(parsed_args.skip_setup == "Y"),
> dpdk_ref=parsed_args.dpdk_ref,
> compile_timeout=parsed_args.compile_timeout,
> + test_cases=parsed_args.test_cases.split(",")
> + if parsed_args.test_cases != ""
> + else [],
> + re_run=parsed_args.re_run,
> )
>
>
> diff --git a/dts/framework/test_case.py b/dts/framework/test_case.py
> new file mode 100644
> index 0000000000..0479f795bb
> --- /dev/null
> +++ b/dts/framework/test_case.py
> @@ -0,0 +1,246 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2010-2014 Intel Corporation
> +# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +
> +"""
> +A base class for creating DTS test cases.
> +"""
> +
> +import importlib
> +import inspect
> +import re
> +import time
> +import traceback
> +
> +from .exception import (
> + SSHTimeoutError,
> + SuiteCleanupError,
> + SuiteExecutionError,
> + SuiteSetupError,
> + TestSuiteNotFound,
> + VerifyError,
> + convert_exception,
> +)
> +from .logger import getLogger
> +from .settings import SETTINGS
> +from .test_result import Result
> +
> +
> +class TestCase(object):
> + def __init__(self, sut_node, suitename, target, execution):
> + self.sut_node = sut_node
> + self.suite_name = suitename
> + self.target = target
> +
> + # local variable
> + self._requested_tests = []
> + self._subtitle = None
> +
> + # result object for save suite result
> + self._suite_result = Result()
> + self._suite_result.sut = self.sut_node.config.hostname
> + self._suite_result.target = target
> + self._suite_result.test_suite = self.suite_name
> + if self._suite_result is None:
> + raise ValueError("Result object should not None")
> +
> + self._enable_func = execution.func
> +
> + # command history
> + self.setup_history = list()
> + self.test_history = list()
> +
> + def init_log(self):
> + # get log handler
> + class_name = self.__class__.__name__
> + self.logger = getLogger(class_name)
> +
> + def set_up_all(self):
> + pass
> +
> + def set_up(self):
> + pass
> +
> + def tear_down(self):
> + pass
> +
> + def tear_down_all(self):
> + pass
> +
> + def verify(self, passed, description):
> + if not passed:
> + raise VerifyError(description)
> +
> + def _get_functional_cases(self):
> + """
> + Get all functional test cases.
> + """
> + return self._get_test_cases(r"test_(?!perf_)")
> +
> + def _has_it_been_requested(self, test_case, test_name_regex):
> + """
> + Check whether test case has been requested for validation.
> + """
> + name_matches = re.match(test_name_regex, test_case.__name__)
> + if self._requested_tests:
> + return name_matches and test_case.__name__ in
> self._requested_tests
> +
> + return name_matches
> +
> + def set_requested_cases(self, case_list):
> + """
> + Pass down input cases list for check
> + """
> + self._requested_tests += case_list
> +
> + def _get_test_cases(self, test_name_regex):
> + """
> + Return case list which name matched regex.
> + """
> + self.logger.debug(f"Searching for testcases in {self.__class__}")
> + for test_case_name in dir(self):
> + test_case = getattr(self, test_case_name)
> + if callable(test_case) and self._has_it_been_requested(
> + test_case, test_name_regex
> + ):
> + yield test_case
> +
> + @convert_exception(SuiteSetupError)
> + def execute_setup_all(self):
> + """
> + Execute suite setup_all function before cases.
> + """
> + try:
> + self.set_up_all()
> + return True
> + except Exception as v:
> + self.logger.error("set_up_all failed:\n" +
> traceback.format_exc())
> + # record all cases blocked
> + if self._enable_func:
> + for case_obj in self._get_functional_cases():
> + self._suite_result.test_case = case_obj.__name__
> + self._suite_result.test_case_blocked(
> + "set_up_all failed: {}".format(str(v))
> + )
> + return False
> +
> + def _execute_test_case(self, case_obj):
> + """
> + Execute specified test case in specified suite. If any exception
> occurred in
> + validation process, save the result and tear down this case.
> + """
> + case_name = case_obj.__name__
> + self._suite_result.test_case = case_obj.__name__
> +
> + case_result = True
> + try:
> + self.logger.info("Test Case %s Begin" % case_name)
> +
> + self.running_case = case_name
> + # run set_up function for each case
> + self.set_up()
> + # run test case
> + case_obj()
> +
> + self._suite_result.test_case_passed()
> +
> + self.logger.info("Test Case %s Result PASSED:" % case_name)
> +
> + except VerifyError as v:
> + case_result = False
> + self._suite_result.test_case_failed(str(v))
> + self.logger.error("Test Case %s Result FAILED: " %
> (case_name) + str(v))
> + except KeyboardInterrupt:
> + self._suite_result.test_case_blocked("Skipped")
> + self.logger.error("Test Case %s SKIPPED: " % (case_name))
> + self.tear_down()
> + raise KeyboardInterrupt("Stop DTS")
> + except SSHTimeoutError as e:
> + case_result = False
> + self._suite_result.test_case_failed(str(e))
> + self.logger.error("Test Case %s Result FAILED: " %
> (case_name) + str(e))
> + self.logger.error("%s" % (e.get_output()))
> + except Exception:
> + case_result = False
> + trace = traceback.format_exc()
> + self._suite_result.test_case_failed(trace)
> + self.logger.error("Test Case %s Result ERROR: " % (case_name)
> + trace)
> + finally:
> + self.execute_tear_down()
> + return case_result
> +
> + @convert_exception(SuiteExecutionError)
> + def execute_test_cases(self):
> + """
> + Execute all test cases in one suite.
> + """
> + # prepare debugger rerun case environment
> + if self._enable_func:
> + for case_obj in self._get_functional_cases():
> + for i in range(SETTINGS.re_run + 1):
> + ret = self.execute_test_case(case_obj)
> +
> + if ret is False and SETTINGS.re_run:
> + self.sut_node.get_session_output(timeout=0.5 * (i + 1))
> + time.sleep(i + 1)
> + self.logger.info(
> + " Test case %s failed and re-run %d time"
> + % (case_obj.__name__, i + 1)
> + )
> + else:
> + break
> +
> + def execute_test_case(self, case_obj):
> + """
> + Execute test case or enter into debug mode.
> + """
> + return self._execute_test_case(case_obj)
> +
> + def get_result(self):
> + """
> + Return suite test result
> + """
> + return self._suite_result
> +
> + @convert_exception(SuiteCleanupError)
> + def execute_tear_downall(self):
> + """
> + execute suite tear_down_all function
> + """
> + self.tear_down_all()
> +
> + self.sut_node.kill_cleanup_dpdk_apps()
> +
> + def execute_tear_down(self):
> + """
> + execute suite tear_down function
> + """
> + try:
> + self.tear_down()
> + except Exception:
> + self.logger.error("tear_down failed:\n" +
> traceback.format_exc())
> + self.logger.warning(
> + "tear down %s failed, might iterfere next case's result!"
> + % self.running_case
> + )
> +
> + @staticmethod
> + def get_testcases(testsuite_module_path: str) ->
> list[type["TestCase"]]:
> + def is_testcase(object) -> bool:
> + try:
> + if issubclass(object, TestCase) and object != TestCase:
> + return True
> + except TypeError:
> + return False
> + return False
> +
> + try:
> + testcase_module = importlib.import_module(testsuite_module_path)
> + except ModuleNotFoundError as e:
> + raise TestSuiteNotFound(
> + f"Testsuite '{testsuite_module_path}' not found."
> + ) from e
> + return [
> + testcase_class
> + for _, testcase_class in inspect.getmembers(testcase_module,
> is_testcase)
> + ]
> --
> 2.30.2
>
>
[-- Attachment #2: Type: text/html, Size: 26431 bytes --]
^ permalink raw reply [flat|nested] 97+ messages in thread
* RE: [RFC PATCH v2 03/10] dts: add dpdk build on sut
[not found] ` <CAHx6DYDOFMuEm4xc65OTrtUmGBtk8Z6UtSgS2grnR_RBY5HcjQ@mail.gmail.com>
@ 2022-11-23 12:37 ` Juraj Linkeš
0 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-23 12:37 UTC (permalink / raw)
To: Owen Hilyard
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
[-- Attachment #1: Type: text/plain, Size: 6644 bytes --]
Apologies for removing recipients in my previous reply.
From: Owen Hilyard <ohilyard@iol.unh.edu>
Sent: Monday, November 21, 2022 1:35 PM
To: Juraj Linkeš <juraj.linkes@pantheon.tech>
Subject: Re: [RFC PATCH v2 03/10] dts: add dpdk build on sut
On Fri, Nov 18, 2022 at 7:24 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
A note: If I'm not mistaken, review should be done in plain text. I've formatted this as plain text and prefixed my replies with [Juraj].
+ @abstractmethod
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: str,
+ remote_dpdk_dir: str | PurePath,
+ target_name: str,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> PurePath:
I think that we should consider having a MesonArgs type which implements the builder pattern. That way common things like static vs dynamic linking, enabling LTO, setting the optimization level, etc. can be handled via dedicated methods, and then we can add a method on that which is "add this string onto the end". This would also allow defining additional methods for DPDK-specific meson arguments, like only enabling certain drivers/applications/tests or forcing certain vector widths. I would also like to see an option to make use of ccache, because currently the only way I see to do that is via environment variables, which will make creating a test matrix that includes multiple compilers difficult.
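A rough sketch of the builder I have in mind (a sketch only — the method names and the exact meson flags here are my assumptions, not an existing API):

    class MesonArgs(object):
        def __init__(self) -> None:
            self._args: list[str] = []

        def default_library(self, kind: str) -> "MesonArgs":
            # static or shared linking
            self._args.append(f"--default-library={kind}")
            return self

        def lto(self, enabled: bool = True) -> "MesonArgs":
            self._args.append(f"-Db_lto={str(enabled).lower()}")
            return self

        def optimization(self, level: str) -> "MesonArgs":
            self._args.append(f"--optimization={level}")
            return self

        def append(self, raw_args: str) -> "MesonArgs":
            # escape hatch: add an arbitrary string onto the end
            self._args.append(raw_args)
            return self

        def __str__(self) -> str:
            return " ".join(self._args)

so that e.g. str(MesonArgs().default_library("static").lto().optimization("3")) yields the argument string passed to meson setup.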
[Juraj] The MesonArgs type is a good suggestion, I'll do that.
[Juraj] We don't necessarily need ccache at this point, but it is very useful and it doesn't look like that big of an addition. How exactly should the implementation look? Do we want to configure something in the conf.yaml file? What do I need to add to the meson invocation?
[Owen] I think that we probably want to have a setting in the conf.yaml file that creates a "compiler wrapper". You can either declare one for all compilers or declare one for some subset of compilers. I think putting it into the conf.yaml file makes sense.
executions:
- build_targets:
- arch: x86_64
os: linux
cpu: native
compiler: gcc
compiler_wrapper: ccache
- arch: x86_64
os: linux
cpu: native
compiler: icc
compiler_wrapper: /usr/local/bin/my_super_special_compiler_wrapper
- arch: x86_64
os: linux
cpu: native
compiler: clang # clang doesn't need a wrapper for some reason
The only way that I know of to easily set the compiler in Meson is to set CC="<compiler_wrapper> <compiler>" for "meson setup". Also, you will need to completely wipe out the build directory between build targets due to meson not actually reconfiguring properly.
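On the DTS side that could reduce to something like this sketch (the attribute names are assumed):

    cc = build_target_config.compiler.name
    if build_target_config.compiler_wrapper:
        cc = f"{build_target_config.compiler_wrapper} {cc}"
    env_vars["CC"] = cc  # passed in the environment of the meson setup invocation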
Ok, I'll modify the CC variable when compiler_wrapper is defined. It seems to be working, but may not be the cleanest implementation.
The current DPDK build works this way: The first DPDK build per build target is done from scratch and subsequent builds (currently application building) are done on top of that, so we should be fine on this front.
<snip>
+ @abstractmethod
+ def copy_file(
+ self, source_file: str, destination_file: str, source_remote: bool = False
+ ) -> None:
+ """
+ Copy source_file from local storage to destination_file on the remote Node
This should clarify that local storage means inside of the DTS container, not the system it is running on.
[Juraj] Ack. The local storage (I really should've said filesystem) could be any place where DTS is running, be it a container, a VM or a baremetal host. I think just changing local storage to local filesystem should be enough. If not, please propose an alternative wording.
[Juraj] And a related note - should we split copy_file into copy_file_to and copy_file_from?
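[Juraj] If we do split it, the abstract methods could look like this (sketch):

    @abstractmethod
    def copy_file_to(self, source_file: str, destination_file: str) -> None:
        """Copy source_file from the local filesystem to destination_file on the remote Node."""

    @abstractmethod
    def copy_file_from(self, source_file: str, destination_file: str) -> None:
        """Copy source_file on the remote Node to destination_file on the local filesystem."""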
<snip>
+ @skip_setup
+ def _copy_dpdk_tarball(self) -> None:
+ """
+ Copy to and extract DPDK tarball on the SUT node.
+ """
+ # check local path
+ assert SETTINGS.dpdk_ref.exists(), f"Package {SETTINGS.dpdk_ref} doesn't exist."
+
+ self.logger.info("Copying DPDK tarball to SUT.")
+ self.main_session.copy_file(SETTINGS.dpdk_ref, self._remote_tmp_dir)
+
+ # construct remote tarball path
+ # the basename is the same on local host and on remote Node
+ remote_tarball_path = self.main_session.join_remote_path(
+ self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_ref)
+ )
+
+ # construct remote path after extracting
+ with tarfile.open(SETTINGS.dpdk_ref) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+ self._remote_dpdk_dir = self.main_session.join_remote_path(
+ self._remote_tmp_dir, dpdk_top_dir
+ )
+
+ self.logger.info("Extracting DPDK tarball on SUT.")
Can we add a path to this log message?
[Juraj] Absolutely, I'll add it. If there are more logs that would be useful to you, I'll add those as well (maybe as debugs).
<snip>
+class EnvVarsDict(dict):
+ def __str__(self) -> str:
+ return " ".join(["=".join(item) for item in self.items()])
This needs to make sure it doesn't silently run over the line length limitations in POSIX sh/bash (4096 chars) or cmd (8191 chars). That would be a VERY frustrating bug to track down and it can easily be stopped by checking that this is a reasonable length (< 2k characters) and emitting a warning if something goes over that.
[Juraj] Interesting, I didn't know about this. Would a warning be enough?
Also, allowing less than 2k characters leaves us with at least 2k characters for the rest of the command and that should be plenty, but do we want to check that as well? If so, we may want to do the check when sending a command. Another thing to consider is that we're going to switch to Fabric and we won't need to worry about this - it would be up to the underlying RemoteSession implementations to check this.
[Owen] A warning would probably be enough. "Another thing to consider is that we're going to switch to Fabric and we won't need to worry about this" We will need to worry about this if we are still exposing a bash shell in any way to the user.
Ok, I think we should note this and consider it when implementing Fabric. I don't think we'll be exposing shell at this point, but maybe that'll change when we need to handle DPDK applications - we should address this then I think.
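[Juraj] For reference, the check could be as simple as this sketch (the logger reference is an assumption):

    class EnvVarsDict(dict):
        def __str__(self) -> str:
            env_str = " ".join("=".join(item) for item in self.items())
            if len(env_str) > 2048:
                # conservative limit, well under the POSIX shell line maximum
                logger.warning(
                    f"Environment variable string is {len(env_str)} characters long."
                )
            return env_str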
[-- Attachment #2: Type: text/html, Size: 13422 bytes --]
^ permalink raw reply [flat|nested] 97+ messages in thread
* RE: [RFC PATCH v2 04/10] dts: add dpdk execution handling
[not found] ` <CAHx6DYCEYxZ0Osm6fKhp3Jx8n7s=r7qVh8R41c6nCan8Or-dpA@mail.gmail.com>
@ 2022-11-23 13:03 ` Juraj Linkeš
2022-11-28 13:05 ` Owen Hilyard
0 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-23 13:03 UTC (permalink / raw)
To: Owen Hilyard
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
[-- Attachment #1: Type: text/plain, Size: 3027 bytes --]
Again, apologies for removing recipients in my earlier reply.
From: Owen Hilyard <ohilyard@iol.unh.edu>
Sent: Monday, November 21, 2022 1:40 PM
To: Juraj Linkeš <juraj.linkes@pantheon.tech>
Subject: Re: [RFC PATCH v2 04/10] dts: add dpdk execution handling
On Fri, Nov 18, 2022 at 8:00 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 409ce7ac74..c59d3e30e6 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,12 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64"
arm64 and ppc64le should probably be included here. I think that we can focus on 64 bit arches for now.
[Juraj] Seems safe enough. At this point it doesn't matter, but when we have a number of testcases, we may need to revisit this (if we can't verify an architecture for example).
[Owen] The reason I want this is because I want there to always be an architecture that is not the one being developed on that developers need to handle properly. LoongArch might actually be a good candidate for this if support gets merged, since to my knowledge almost no one has access to their server-class CPUs yet. Essentially, I want to force anyone who does something that is architecture dependent to consider other architectures, not just have the "the entire world is x86" mentality.
Alright, good to know.
I have a semi-related point: we specify arch (and os as well) in both the build target and the SUT config. Are these even going to be different? I see cpu (or platform in the meson config) being different, but not the other two, and that could simplify the config a bit.
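If they aren't going to differ, the config could collapse to something like this (purely illustrative, not the current schema):

    executions:
      - build_targets:
          - cpu: native
            compiler: gcc
        system_under_test: "SUT 1"
    nodes:
      - name: "SUT 1"
        arch: x86_64
        os: linux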
<snip>
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """
+ Kill all dpdk applications on the SUT. Cleanup hugepages.
+ """
+ if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._dpdk_kill_session.kill_cleanup_dpdk_apps(self.dpdk_prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._dpdk_kill_session = self.create_session("dpdk_kill")
+ self.dpdk_prefix_list = []
+
+ def create_eal_parameters(
+ self,
+ fixed_prefix: bool = False,
+ core_filter_specifier: CPUAmount | CPUList = CPUAmount(),
+ ascending_cores: bool = True,
+ prefix: str = "",
+ no_pci: bool = False,
+ vdevs: list[str] = None,
I would prefer to have vdevs be a list of objects, even if for now that class just takes a string in its constructor. Later on we can add subclasses for specific vdevs that might see heavy use, such as librte_net_pcap and crypto_openssl.
[Juraj] Ok, this is simple enough, I'll add it.
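[Juraj] Something like this to start with, then (sketch):

    class VirtualDevice(object):
        def __init__(self, name: str):
            self.name = name

        def __str__(self) -> str:
            return self.name

with create_eal_parameters accepting vdevs: list[VirtualDevice] | None = None, and specialized subclasses added as the need arises.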
[-- Attachment #2: Type: text/html, Size: 7888 bytes --]
^ permalink raw reply [flat|nested] 97+ messages in thread
* RE: [RFC PATCH v2 05/10] dts: add node memory setup
2022-11-16 13:47 ` Owen Hilyard
@ 2022-11-23 13:58 ` Juraj Linkeš
0 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2022-11-23 13:58 UTC (permalink / raw)
To: Owen Hilyard
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
[-- Attachment #1: Type: text/plain, Size: 5317 bytes --]
From: Owen Hilyard <ohilyard@iol.unh.edu>
Sent: Wednesday, November 16, 2022 2:48 PM
To: Juraj Linkeš <juraj.linkes@pantheon.tech>
Cc: thomas@monjalon.net; Honnappa.Nagarahalli@arm.com; lijuan.tu@intel.com; bruce.richardson@intel.com; dev@dpdk.org
Subject: Re: [RFC PATCH v2 05/10] dts: add node memory setup
On Mon, Nov 14, 2022 at 11:54 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
Setup hugepages on nodes. This is useful not only on SUT nodes, but
also on TG nodes which use TGs that utilize hugepages.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 1 +
dts/framework/remote_session/arch/__init__.py | 20 +++++
dts/framework/remote_session/arch/arch.py | 57 +++++++++++++
.../remote_session/os/linux_session.py | 85 +++++++++++++++++++
dts/framework/remote_session/os/os_session.py | 10 +++
dts/framework/testbed_model/node/node.py | 15 +++-
6 files changed, 187 insertions(+), 1 deletion(-)
create mode 100644 dts/framework/remote_session/arch/__init__.py
create mode 100644 dts/framework/remote_session/arch/arch.py
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index f2339b20bd..f0deeadac6 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -11,4 +11,5 @@
# pylama:ignore=W0611
+from .arch import Arch, create_arch
from .os import OSSession, create_session
diff --git a/dts/framework/remote_session/arch/__init__.py b/dts/framework/remote_session/arch/__init__.py
new file mode 100644
index 0000000000..d78ad42ac5
--- /dev/null
+++ b/dts/framework/remote_session/arch/__init__.py
@@ -0,0 +1,20 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+from framework.config import Architecture, NodeConfiguration
+
+from .arch import PPC64, Arch, Arm64, i686, x86_32, x86_64
+
+
+def create_arch(node_config: NodeConfiguration) -> Arch:
+ match node_config.arch:
+ case Architecture.x86_64:
+ return x86_64()
+ case Architecture.x86_32:
+ return x86_32()
+ case Architecture.i686:
+ return i686()
+ case Architecture.ppc64le:
+ return PPC64()
+ case Architecture.arm64:
+ return Arm64()
diff --git a/dts/framework/remote_session/arch/arch.py b/dts/framework/remote_session/arch/arch.py
new file mode 100644
index 0000000000..05c7602def
--- /dev/null
+++ b/dts/framework/remote_session/arch/arch.py
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 PANTHEON.tech s.r.o.
+
+
+class Arch(object):
+ """
+ Stores architecture-specific information.
+ """
+
+ @property
+ def default_hugepage_memory(self) -> int:
+ """
+ Return the default amount of memory (in kB) allocated for hugepages that DPDK will use.
+ The default is an amount equal to 256 2MB hugepages (512MB of memory).
+ """
+ return 256 * 2048
+
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ """
+ An architecture may need to force configuration of hugepages to the first socket.
+ """
+ return False
+
+
+class x86_64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 4096 * 2048
+
+
+class x86_32(Arch):
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ return True
+
+
+class i686(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 512 * 2048
+
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ return True
+
+
+class PPC64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 512 * 2048
+
+
+class Arm64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 2048 * 2048
diff --git a/dts/framework/remote_session/os/linux_session.py b/dts/framework/remote_session/os/linux_session.py
index 21f117b714..fad33d7613 100644
--- a/dts/framework/remote_session/os/linux_session.py
+++ b/dts/framework/remote_session/os/linux_session.py
@@ -3,6 +3,8 @@
# Copyright(c) 2022 University of New Hampshire
from framework.config import CPU
+from framework.exception import RemoteCommandExecutionError
+from framework.utils import expand_range
from .posix_session import PosixSession
@@ -24,3 +26,86 @@ def get_remote_cpus(self, bypass_core0: bool) -> list[CPU]:
continue
cpus.append(CPU(int(cpu), int(core), int(socket), int(node)))
return cpus
+
+ def setup_hugepages(
+ self, hugepage_amount: int = -1, force_first_numa: bool = False
I think that hugepage_amount: int | None = None is better, since it expresses that it is an optional argument, and the type checker will force anyone using the value to check whether it is None, whereas that will not happen with -1.
This is actually a remnant from the original DTS, where -1 meant using the per-arch default. I've addressed this default elsewhere in the code, so I'll remove the default for this argument (making it mandatory).
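For comparison, a sketch of the two signatures being discussed (the fallback shown is hypothetical):

# sentinel style: nothing stops a caller from treating -1 as a real amount
def setup_hugepages(self, hugepage_amount: int = -1) -> None: ...

# Optional style: the type checker forces a None check before the value is used
def setup_hugepages(self, hugepage_amount: int | None = None) -> None:
    if hugepage_amount is None:
        hugepage_amount = arch.default_hugepage_memory  # per-arch default
    ...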
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [RFC PATCH v2 04/10] dts: add dpdk execution handling
2022-11-23 13:03 ` Juraj Linkeš
@ 2022-11-28 13:05 ` Owen Hilyard
0 siblings, 0 replies; 97+ messages in thread
From: Owen Hilyard @ 2022-11-28 13:05 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
On Wed, Nov 23, 2022 at 8:03 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Again, apologies for removing recipients in my earlier reply.
>
>
>
> *From:* Owen Hilyard <ohilyard@iol.unh.edu>
> *Sent:* Monday, November 21, 2022 1:40 PM
> *To:* Juraj Linkeš <juraj.linkes@pantheon.tech>
> *Subject:* Re: [RFC PATCH v2 04/10] dts: add dpdk execution handling
>
>
>
> On Fri, Nov 18, 2022 at 8:00 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
> wrote:
>
> diff --git a/dts/framework/config/conf_yaml_schema.json
> b/dts/framework/config/conf_yaml_schema.json
> index 409ce7ac74..c59d3e30e6 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -6,6 +6,12 @@
> "type": "string",
> "description": "A unique identifier for a node"
> },
> + "ARCH": {
> + "type": "string",
> + "enum": [
> + "x86_64"
>
> arm64 and ppc64le should probably be included here. I think that we can
> focus on 64 bit arches for now.
>
> [Juraj] Seems safe enough. At this point it doesn't matter, but when we
> have a number of testcases, we may need to revisit this (if we can't verify
> an architecture for example).
>
>
>
> [Owen] The reason I want this is because I want there to always be an
> architecture that is not the one being developed on that developers need to
> handle properly. LoongArch might actually be a good candidate for this if
> support gets merged, since to my knowledge almost no one has access to
> their server-class CPUs yet. Essentially, I want to force anyone who does
> something that is architecture dependent to consider other architectures,
> not just have the "the entire world is x86" mentality.
>
>
>
> Alright, good to know.
>
> I have a semi-related point: we specify arch (and os as well) in both
> build target and SUT config. Are these even going to be different? I see
> cpu (or platform in meson config) being different, but not the other two
> and that could simplify the config a bit.
>
[Owen] If I remember correctly, the older DTS has i686 (32-bit x86)
support, and you might want to run i686 on an x86_64 CPU. That is the only
use case I can see for differing build arch and SUT arch. The community lab
doesn't have any 32-bit hardware, so any future 32-bit testing would need
to happen on a 64-bit system running in compatibility mode.
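As an illustration of that use case, a hypothetical conf.yaml execution where the build arch differs from the SUT's native arch (the schema quoted above would also need i686 added to its arch enum):

executions:
  - build_targets:
      - arch: i686        # 32-bit build
        os: linux
        cpu: native
        compiler: gcc
    system_under_test: "SUT 1"   # an x86_64 host running the binaries in compat mode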
> <snip>
>
> + def kill_cleanup_dpdk_apps(self) -> None:
> + """
> + Kill all dpdk applications on the SUT. Cleanup hugepages.
> + """
> + if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
> + # we can use the session if it exists and responds
> +
> self._dpdk_kill_session.kill_cleanup_dpdk_apps(self.dpdk_prefix_list)
> + else:
> + # otherwise, we need to (re)create it
> + self._dpdk_kill_session = self.create_session("dpdk_kill")
> + self.dpdk_prefix_list = []
> +
> + def create_eal_parameters(
> + self,
> + fixed_prefix: bool = False,
> + core_filter_specifier: CPUAmount | CPUList = CPUAmount(),
> + ascending_cores: bool = True,
> + prefix: str = "",
> + no_pci: bool = False,
> + vdevs: list[str] = None,
>
> I would prefer to have vdevs be a list of objects, even if for now that
> class just takes a string in its constructor. Later on we can add
> subclasses for specific vdevs that might see heavy use, such
> as librte_net_pcap and crypto_openssl.
>
> [Juraj] Ok, this is simple enough, I'll add it.
>
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 00/10] dts: add hello world testcase
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
` (9 preceding siblings ...)
2022-11-14 16:54 ` [RFC PATCH v2 10/10] dts: add hello world testsuite Juraj Linkeš
@ 2023-01-17 15:48 ` Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 01/10] dts: add node and os abstractions Juraj Linkeš
` (11 more replies)
10 siblings, 12 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:48 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Add the code needed to run the HelloWorld testcase, which just runs the
hello world DPDK application.
The patchset currently heavily refactors the original DTS code needed
to run the testcase:
* The whole architecture has been redone into more sensible class
hierarchy
* DPDK build on the System under Test
* DPDK eal args construction, app running and shutting down
* SUT hugepage memory configuration
* Test runner
* Test results
* TestSuite class
* Test runner parts interfacing with TestSuite
* The HelloWorld testsuite itself
The code is divided into sub-packages, some of which are divided
further.
The patch may need to be divided into smaller chunks. If so, proposals
on where exactly to split it would be very helpful.
v3:
Finished refactoring everything in this patch, with test suite and test
results being the last parts.
Also changed the directory structure. It's now simplified and the
imports look much better.
I've also made many minor changes, such as renaming variables here and
there.
Juraj Linkeš (10):
dts: add node and os abstractions
dts: add ssh command verification
dts: add dpdk build on sut
dts: add dpdk execution handling
dts: add node memory setup
dts: add test suite module
dts: add hello world testplan
dts: add hello world testsuite
dts: add test suite config and runner
dts: add test results module
dts/conf.yaml | 19 +-
dts/framework/config/__init__.py | 132 +++++++-
dts/framework/config/arch.py | 57 ++++
dts/framework/config/conf_yaml_schema.json | 150 ++++++++-
dts/framework/dts.py | 185 ++++++++--
dts/framework/exception.py | 100 +++++-
dts/framework/logger.py | 24 +-
dts/framework/remote_session/__init__.py | 30 +-
dts/framework/remote_session/linux_session.py | 114 +++++++
dts/framework/remote_session/os_session.py | 177 ++++++++++
dts/framework/remote_session/posix_session.py | 221 ++++++++++++
.../remote_session/remote/__init__.py | 16 +
.../remote_session/remote/remote_session.py | 155 +++++++++
.../{ => remote}/ssh_session.py | 91 ++++-
.../remote_session/remote_session.py | 95 ------
dts/framework/settings.py | 79 ++++-
dts/framework/test_result.py | 316 ++++++++++++++++++
dts/framework/test_suite.py | 254 ++++++++++++++
dts/framework/testbed_model/__init__.py | 20 +-
dts/framework/testbed_model/dpdk.py | 78 +++++
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 253 ++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 +
dts/framework/testbed_model/node.py | 165 +++++++--
dts/framework/testbed_model/sut_node.py | 261 +++++++++++++++
dts/framework/utils.py | 39 ++-
dts/test_plans/hello_world_test_plan.rst | 68 ++++
dts/tests/TestSuite_hello_world.py | 59 ++++
28 files changed, 2998 insertions(+), 203 deletions(-)
create mode 100644 dts/framework/config/arch.py
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
create mode 100644 dts/framework/remote_session/remote/remote_session.py
rename dts/framework/remote_session/{ => remote}/ssh_session.py (65%)
delete mode 100644 dts/framework/remote_session/remote_session.py
create mode 100644 dts/framework/test_result.py
create mode 100644 dts/framework/test_suite.py
create mode 100644 dts/framework/testbed_model/dpdk.py
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
create mode 100644 dts/framework/testbed_model/sut_node.py
create mode 100644 dts/test_plans/hello_world_test_plan.rst
create mode 100644 dts/tests/TestSuite_hello_world.py
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 01/10] dts: add node and os abstractions
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
@ 2023-01-17 15:48 ` Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 02/10] dts: add ssh command verification Juraj Linkeš
` (10 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:48 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The abstraction model in DTS is as follows:
Node, defining and implementing methods common to the SUT (system under
test) Node and the TG (traffic generator) Node, and serving as the base of
both.
Remote Session, defining and implementing methods common to any remote
session implementation, such as SSH Session.
OSSession, defining and implementing methods common to any operating
system/distribution, such as Linux.
OSSession uses a derived Remote Session and Node in turn uses a derived
OSSession. This split delegates OS-specific and connection-specific code
to specialized classes designed to handle the differences.
The base classes implement the methods, or parts of methods, that are
common to all implementations and define abstract methods that must be
implemented by derived classes.
Part of the abstractions is the DTS test execution skeleton:
execution setup, build setup and then test execution.
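In rough outline, the composition described above looks like this (condensed from the node.py and os_session.py changes below):

class Node:
    # SutNode (and, later, a TG node) derive from this
    def __init__(self, node_config):
        # create_session picks the OSSession subclass matching the
        # configured OS, e.g. LinuxSession
        self.main_session = create_session(node_config, node_config.name, logger)

class OSSession:
    def __init__(self, node_config, name, logger):
        # create_remote_session picks the transport, currently SSHSession
        self.remote_session = create_remote_session(node_config, name, logger)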
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 11 +-
dts/framework/config/__init__.py | 73 +++++++-
dts/framework/config/conf_yaml_schema.json | 76 +++++++-
dts/framework/dts.py | 162 ++++++++++++++----
dts/framework/exception.py | 46 ++++-
dts/framework/logger.py | 24 +--
dts/framework/remote_session/__init__.py | 30 +++-
dts/framework/remote_session/linux_session.py | 11 ++
dts/framework/remote_session/os_session.py | 46 +++++
dts/framework/remote_session/posix_session.py | 12 ++
.../remote_session/remote/__init__.py | 16 ++
.../{ => remote}/remote_session.py | 41 +++--
.../{ => remote}/ssh_session.py | 20 +--
dts/framework/testbed_model/__init__.py | 10 +-
dts/framework/testbed_model/node.py | 104 ++++++++---
dts/framework/testbed_model/sut_node.py | 13 ++
16 files changed, 583 insertions(+), 112 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
create mode 100644 dts/framework/testbed_model/sut_node.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1aaa593612..03696d2bab 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -1,9 +1,16 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2022 The DPDK contributors
+# Copyright 2022-2023 The DPDK contributors
executions:
- - system_under_test: "SUT 1"
+ - build_targets:
+ - arch: x86_64
+ os: linux
+ cpu: native
+ compiler: gcc
+ compiler_wrapper: ccache
+ system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ os: linux
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 214be8e7f4..e3e2d74eac 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -1,15 +1,17 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-Generic port and topology nodes configuration file load function
+Yaml config parsing methods
"""
import json
import os.path
import pathlib
from dataclasses import dataclass
+from enum import Enum, auto, unique
from typing import Any
import warlock # type: ignore
@@ -18,6 +20,47 @@
from framework.settings import SETTINGS
+class StrEnum(Enum):
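+ # With _generate_next_value_ below, auto() assigns each member its own name as
+ # its string value, e.g. Architecture.x86_64.value == "x86_64", so the enums
+ # can be constructed directly from the strings in the YAML config.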
+ @staticmethod
+ def _generate_next_value_(
+ name: str, start: int, count: int, last_values: object
+ ) -> str:
+ return name
+
+
+@unique
+class Architecture(StrEnum):
+ i686 = auto()
+ x86_64 = auto()
+ x86_32 = auto()
+ arm64 = auto()
+ ppc64le = auto()
+
+
+@unique
+class OS(StrEnum):
+ linux = auto()
+ freebsd = auto()
+ windows = auto()
+
+
+@unique
+class CPUType(StrEnum):
+ native = auto()
+ armv8a = auto()
+ dpaa2 = auto()
+ thunderx = auto()
+ xgene1 = auto()
+
+
+@unique
+class Compiler(StrEnum):
+ gcc = auto()
+ clang = auto()
+ icc = auto()
+ msvc = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -29,6 +72,7 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ os: OS
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -37,19 +81,44 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ os=OS(d["os"]),
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class BuildTargetConfiguration:
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+ name: str
+
+ @staticmethod
+ def from_dict(d: dict) -> "BuildTargetConfiguration":
+ return BuildTargetConfiguration(
+ arch=Architecture(d["arch"]),
+ os=OS(d["os"]),
+ cpu=CPUType(d["cpu"]),
+ compiler=Compiler(d["compiler"]),
+ name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
+ build_targets: list[BuildTargetConfiguration]
system_under_test: NodeConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ build_targets: list[BuildTargetConfiguration] = list(
+ map(BuildTargetConfiguration.from_dict, d["build_targets"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
+ build_targets=build_targets,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 6b8d6ccd05..9170307fbe 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -5,6 +5,68 @@
"node_name": {
"type": "string",
"description": "A unique identifier for a node"
+ },
+ "OS": {
+ "type": "string",
+ "enum": [
+ "linux"
+ ]
+ },
+ "cpu": {
+ "type": "string",
+ "description": "Native should be the default on x86",
+ "enum": [
+ "native",
+ "armv8a",
+ "dpaa2",
+ "thunderx",
+ "xgene1"
+ ]
+ },
+ "compiler": {
+ "type": "string",
+ "enum": [
+ "gcc",
+ "clang",
+ "icc",
+ "mscv"
+ ]
+ },
+ "build_target": {
+ "type": "object",
+ "description": "Targets supported by DTS",
+ "properties": {
+ "arch": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "x86_64",
+ "arm64",
+ "ppc64le",
+ "other"
+ ]
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "cpu": {
+ "$ref": "#/definitions/cpu"
+ },
+ "compiler": {
+ "$ref": "#/definitions/compiler"
+ },
+ "compiler_wrapper": {
+ "type": "string",
+ "description": "This will be added before compiler to the CC variable when building DPDK. Optional."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "arch",
+ "os",
+ "cpu",
+ "compiler"
+ ]
}
},
"type": "object",
@@ -29,13 +91,17 @@
"password": {
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
}
},
"additionalProperties": false,
"required": [
"name",
"hostname",
- "user"
+ "user",
+ "os"
]
},
"minimum": 1
@@ -45,12 +111,20 @@
"items": {
"type": "object",
"properties": {
+ "build_targets": {
+ "type": "array",
+ "items": {
+ "$ref": "#/definitions/build_target"
+ },
+ "minimum": 1
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
"required": [
+ "build_targets",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d23cfc4526..6ea7c6e736 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -1,67 +1,157 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2019 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
-import traceback
-from collections.abc import Iterable
-from framework.testbed_model.node import Node
-
-from .config import CONFIGURATION
+from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
+from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .testbed_model import SutNode
from .utils import check_dts_python_version
-dts_logger: DTSLOG | None = None
+dts_logger: DTSLOG = getLogger("dts_runner")
+errors = []
def run_all() -> None:
"""
- Main process of DTS, it will run all test suites in the config file.
+ The main process of DTS. Runs all build targets in all executions from the main
+ config file.
"""
-
global dts_logger
+ global errors
# check the python version of the server that runs dts
check_dts_python_version()
- dts_logger = getLogger("dts")
-
- nodes = {}
- # This try/finally block means "Run the try block, if there is an exception,
- # run the finally block before passing it upward. If there is not an exception,
- # run the finally block after the try block is finished." This helps avoid the
- # problem of python's interpreter exit context, which essentially prevents you
- # from making certain system calls. This makes cleaning up resources difficult,
- # since most of the resources in DTS are network-based, which is restricted.
+ nodes: dict[str, SutNode] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_config = execution.system_under_test
- if sut_config.name not in nodes:
- node = Node(sut_config)
- nodes[sut_config.name] = node
- node.send_command("echo Hello World")
+ sut_node = None
+ if execution.system_under_test.name in nodes:
+ # a Node with the same name already exists
+ sut_node = nodes[execution.system_under_test.name]
+ else:
+ # the SUT has not been initialized yet
+ try:
+ sut_node = SutNode(execution.system_under_test)
+ except Exception as e:
+ dts_logger.exception(
+ f"Connection to node {execution.system_under_test} failed."
+ )
+ errors.append(e)
+ else:
+ nodes[sut_node.name] = sut_node
+
+ if sut_node:
+ _run_execution(sut_node, execution)
+
+ except Exception as e:
+ dts_logger.exception("An unexpected error has occurred.")
+ errors.append(e)
+ raise
+
+ finally:
+ try:
+ for node in nodes.values():
+ node.close()
+ except Exception as e:
+ dts_logger.exception("Final cleanup of nodes failed.")
+ errors.append(e)
+ # we need to put the sys.exit call outside the finally clause to make sure
+ # that unexpected exceptions will propagate
+ # in that case, the error that should be reported is the uncaught exception as
+ # that is a severe error originating from the framework
+ # at that point, we'll only have partial results which could be impacted by the
+ # error causing the uncaught exception, making them uninterpretable
+ _exit_dts()
+
+
+def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+ """
+ Run the given execution. This involves running the execution setup as well as
+ running all build targets in the given execution.
+ """
+ dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+
+ try:
+ sut_node.set_up_execution(execution)
except Exception as e:
- # sys.exit() doesn't produce a stack trace, need to print it explicitly
- traceback.print_exc()
- raise e
+ dts_logger.exception("Execution setup failed.")
+ errors.append(e)
+
+ else:
+ for build_target in execution.build_targets:
+ _run_build_target(sut_node, build_target, execution)
finally:
- quit_execution(nodes.values())
+ try:
+ sut_node.tear_down_execution()
+ except Exception as e:
+ dts_logger.exception("Execution teardown failed.")
+ errors.append(e)
-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def _run_build_target(
+ sut_node: SutNode,
+ build_target: BuildTargetConfiguration,
+ execution: ExecutionConfiguration,
+) -> None:
"""
- Close session to SUT and TG before quit.
- Return exit status when failure occurred.
+ Run the given build target.
"""
- for sut_node in sut_nodes:
- # close all session
- sut_node.node_exit()
+ dts_logger.info(f"Running build target '{build_target.name}'.")
+
+ try:
+ sut_node.set_up_build_target(build_target)
+ except Exception as e:
+ dts_logger.exception("Build target setup failed.")
+ errors.append(e)
+
+ else:
+ _run_suites(sut_node, execution)
+
+ finally:
+ try:
+ sut_node.tear_down_build_target()
+ except Exception as e:
+ dts_logger.exception("Build target teardown failed.")
+ errors.append(e)
+
+
+def _run_suites(
+ sut_node: SutNode,
+ execution: ExecutionConfiguration,
+) -> None:
+ """
+ Use the given build_target to run execution's test suites
+ with possibly only a subset of test cases.
+ If no subset is specified, run all test cases.
+ """
+
+
+def _exit_dts() -> None:
+ """
+ Process all errors and exit with the proper exit code.
+ """
+ if errors and dts_logger:
+ dts_logger.debug("Summary of errors:")
+ for error in errors:
+ dts_logger.debug(repr(error))
+
+ return_code = ErrorSeverity.NO_ERR
+ for error in errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > return_code:
+ return_code = error_return_code
- if dts_logger is not None:
+ if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(0)
+ sys.exit(return_code)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8b2f08a8f0..121a0f7296 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -1,20 +1,46 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
User-defined exceptions used across the framework.
"""
+from enum import IntEnum, unique
+from typing import ClassVar
-class SSHTimeoutError(Exception):
+
+@unique
+class ErrorSeverity(IntEnum):
+ """
+ The severity of errors that occur during DTS execution.
+ All exceptions are caught and the most severe error is used as return code.
+ """
+
+ NO_ERR = 0
+ GENERIC_ERR = 1
+ CONFIG_ERR = 2
+ SSH_ERR = 3
+
+
+class DTSError(Exception):
+ """
+ The base exception from which all DTS exceptions are derived.
+ Stores error severity.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
+
+
+class SSHTimeoutError(DTSError):
"""
Command execution timeout.
"""
command: str
output: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, command: str, output: str):
self.command = command
@@ -27,12 +53,13 @@ def get_output(self) -> str:
return self.output
-class SSHConnectionError(Exception):
+class SSHConnectionError(DTSError):
"""
SSH connection error.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
@@ -41,16 +68,25 @@ def __str__(self) -> str:
return f"Error trying to connect with {self.host}"
-class SSHSessionDeadError(Exception):
+class SSHSessionDeadError(DTSError):
"""
SSH session is not alive.
It can no longer be used.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
def __str__(self) -> str:
return f"SSH session with {self.host} has died"
+
+
+class ConfigurationError(DTSError):
+ """
+ Raised when an invalid configuration is encountered.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index a31fcc8242..bb2991e994 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
DTS logger module with several log level. DTS framework and TestSuite logs
@@ -33,17 +33,17 @@ class DTSLOG(logging.LoggerAdapter):
DTS log class for framework and testsuite.
"""
- logger: logging.Logger
+ _logger: logging.Logger
node: str
sh: logging.StreamHandler
fh: logging.FileHandler
verbose_fh: logging.FileHandler
def __init__(self, logger: logging.Logger, node: str = "suite"):
- self.logger = logger
+ self._logger = logger
# 1 means log everything, this will be used by file handlers if their level
# is not set
- self.logger.setLevel(1)
+ self._logger.setLevel(1)
self.node = node
@@ -55,9 +55,13 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
if SETTINGS.verbose is True:
sh.setLevel(logging.DEBUG)
- self.logger.addHandler(sh)
+ self._logger.addHandler(sh)
self.sh = sh
+ # prepare the output folder
+ if not os.path.exists(SETTINGS.output_dir):
+ os.mkdir(SETTINGS.output_dir)
+
logging_path_prefix = os.path.join(SETTINGS.output_dir, node)
fh = logging.FileHandler(f"{logging_path_prefix}.log")
@@ -68,7 +72,7 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(fh)
+ self._logger.addHandler(fh)
self.fh = fh
# This outputs EVERYTHING, intended for post-mortem debugging
@@ -82,10 +86,10 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(verbose_fh)
+ self._logger.addHandler(verbose_fh)
self.verbose_fh = verbose_fh
- super(DTSLOG, self).__init__(self.logger, dict(node=self.node))
+ super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
def logger_exit(self) -> None:
"""
@@ -93,7 +97,7 @@ def logger_exit(self) -> None:
"""
for handler in (self.sh, self.fh, self.verbose_fh):
handler.flush()
- self.logger.removeHandler(handler)
+ self._logger.removeHandler(handler)
def getLogger(name: str, node: str = "suite") -> DTSLOG:
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index a227d8db22..747316c78a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -1,14 +1,30 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
-from framework.config import NodeConfiguration
+"""
+The package provides modules for managing remote connections to a remote host (node),
+differentiated by OS.
+The package provides a factory function, create_session, that returns the appropriate
+remote connection based on the passed configuration. The differences are in the
+underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""
+
+# pylama:ignore=W0611
+
+from framework.config import OS, NodeConfiguration
+from framework.exception import ConfigurationError
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
-from .ssh_session import SSHSession
+from .linux_session import LinuxSession
+from .os_session import OSSession
+from .remote import RemoteSession, SSHSession
-def create_remote_session(
+def create_session(
node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
- return SSHSession(node_config, name, logger)
+) -> OSSession:
+ match node_config.os:
+ case OS.linux:
+ return LinuxSession(node_config, name, logger)
+ case _:
+ raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
new file mode 100644
index 0000000000..9d14166077
--- /dev/null
+++ b/dts/framework/remote_session/linux_session.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .posix_session import PosixSession
+
+
+class LinuxSession(PosixSession):
+ """
+ The implementation of non-Posix compliant parts of Linux remote sessions.
+ """
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
new file mode 100644
index 0000000000..7a4cc5e669
--- /dev/null
+++ b/dts/framework/remote_session/os_session.py
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from abc import ABC
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote import RemoteSession, create_remote_session
+
+
+class OSSession(ABC):
+ """
+ The OS classes create a DTS node remote session and implement OS specific
+ behavior. There a few control methods implemented by the base class, the rest need
+ to be implemented by derived classes.
+ """
+
+ _config: NodeConfiguration
+ name: str
+ _logger: DTSLOG
+ remote_session: RemoteSession
+
+ def __init__(
+ self,
+ node_config: NodeConfiguration,
+ name: str,
+ logger: DTSLOG,
+ ):
+ self._config = node_config
+ self.name = name
+ self._logger = logger
+ self.remote_session = create_remote_session(node_config, name, logger)
+
+ def close(self, force: bool = False) -> None:
+ """
+ Close the remote session.
+ """
+ self.remote_session.close(force)
+
+ def is_alive(self) -> bool:
+ """
+ Check whether the remote session is still responding.
+ """
+ return self.remote_session.is_alive()
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
new file mode 100644
index 0000000000..110b6a4804
--- /dev/null
+++ b/dts/framework/remote_session/posix_session.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .os_session import OSSession
+
+
+class PosixSession(OSSession):
+ """
+ An intermediary class implementing the Posix compliant parts of
+ Linux and other OS remote sessions.
+ """
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
new file mode 100644
index 0000000000..f3092f8bbe
--- /dev/null
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote_session import RemoteSession
+from .ssh_session import SSHSession
+
+
+def create_remote_session(
+ node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+ return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
similarity index 61%
rename from dts/framework/remote_session/remote_session.py
rename to dts/framework/remote_session/remote/remote_session.py
index 33047d9d0a..7c7b30225f 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import dataclasses
from abc import ABC, abstractmethod
@@ -19,14 +19,23 @@ class HistoryRecord:
class RemoteSession(ABC):
+ """
+ The base class for defining which methods must be implemented in order to connect
+ to a remote host (node) and maintain a remote session. The derived classes are
+ supposed to use some underlying transport protocol (e.g. SSH) to implement
+ the methods. On top of that, it provides some basic services common to
+ all derived classes, such as keeping history and logging what's being executed
+ on the remote node.
+ """
+
name: str
hostname: str
ip: str
port: int | None
username: str
password: str
- logger: DTSLOG
history: list[HistoryRecord]
+ _logger: DTSLOG
_node_config: NodeConfiguration
def __init__(
@@ -46,31 +55,34 @@ def __init__(
self.port = int(port)
self.username = node_config.user
self.password = node_config.password or ""
- self.logger = logger
self.history = []
- self.logger.info(f"Connecting to {self.username}@{self.hostname}.")
+ self._logger = logger
+ self._logger.info(f"Connecting to {self.username}@{self.hostname}.")
self._connect()
- self.logger.info(f"Connection to {self.username}@{self.hostname} successful.")
+ self._logger.info(f"Connection to {self.username}@{self.hostname} successful.")
@abstractmethod
def _connect(self) -> None:
"""
Create connection to assigned node.
"""
- pass
def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
- self.logger.info(f"Sending: {command}")
+ """
+ Send a command and return the output.
+ """
+ self._logger.info(f"Sending: {command}")
out = self._send_command(command, timeout)
- self.logger.debug(f"Received from {command}: {out}")
+ self._logger.debug(f"Received from {command}: {out}")
self._history_add(command=command, output=out)
return out
@abstractmethod
def _send_command(self, command: str, timeout: float) -> str:
"""
- Send a command and return the output.
+ Use the underlying protocol to execute the command and return the output
+ of the command.
"""
def _history_add(self, command: str, output: str) -> None:
@@ -79,17 +91,20 @@ def _history_add(self, command: str, output: str) -> None:
)
def close(self, force: bool = False) -> None:
- self.logger.logger_exit()
+ """
+ Close the remote session and free all used resources.
+ """
+ self._logger.logger_exit()
self._close(force)
@abstractmethod
def _close(self, force: bool = False) -> None:
"""
- Close the remote session, freeing all used resources.
+ Execute protocol specific steps needed to close the session properly.
"""
@abstractmethod
def is_alive(self) -> bool:
"""
- Check whether the session is still responding.
+ Check whether the remote session is still responding.
"""
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/ssh_session.py
rename to dts/framework/remote_session/remote/ssh_session.py
index 7ec327054d..96175f5284 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import time
@@ -17,7 +17,7 @@
class SSHSession(RemoteSession):
"""
- Module for creating Pexpect SSH sessions to a node.
+ Module for creating Pexpect SSH remote sessions.
"""
session: pxssh.pxssh
@@ -56,9 +56,9 @@ def _connect(self) -> None:
)
break
except Exception as e:
- self.logger.warning(e)
+ self._logger.warning(e)
time.sleep(2)
- self.logger.info(
+ self._logger.info(
f"Retrying connection: retry number {retry_attempt + 1}."
)
else:
@@ -67,13 +67,13 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
except Exception as e:
- self.logger.error(RED(str(e)))
+ self._logger.error(RED(str(e)))
if getattr(self, "port", None):
suggestion = (
f"\nSuggestion: Check if the firewall on {self.hostname} is "
f"stopped.\n"
)
- self.logger.info(GREEN(suggestion))
+ self._logger.info(GREEN(suggestion))
raise SSHConnectionError(self.hostname)
@@ -87,8 +87,8 @@ def send_expect(
try:
retval = int(ret_status)
if retval:
- self.logger.error(f"Command: {command} failure!")
- self.logger.error(ret)
+ self._logger.error(f"Command: {command} failure!")
+ self._logger.error(ret)
return retval
else:
return ret
@@ -97,7 +97,7 @@ def send_expect(
else:
return ret
except Exception as e:
- self.logger.error(
+ self._logger.error(
f"Exception happened in [{command}] and output is "
f"[{self._get_output()}]"
)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index c5512e5812..8ead9db482 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -1,7 +1,13 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-This module contains the classes used to model the physical traffic generator,
+This package contains the classes used to model the physical traffic generator,
system under test and any other components that need to be interacted with.
"""
+
+# pylama:ignore=W0611
+
+from .node import Node
+from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 8437975416..a37f7921e0 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -1,62 +1,118 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
A node is a generic host that DTS connects to and manages.
"""
-from framework.config import NodeConfiguration
+from framework.config import (
+ BuildTargetConfiguration,
+ ExecutionConfiguration,
+ NodeConfiguration,
+)
from framework.logger import DTSLOG, getLogger
-from framework.remote_session import RemoteSession, create_remote_session
-from framework.settings import SETTINGS
+from framework.remote_session import OSSession, create_session
class Node(object):
"""
- Basic module for node management. This module implements methods that
+ Basic class for node management. This class implements methods that
manage a node, such as information gathering (of CPU/PCI/NIC) and
environment setup.
"""
+ main_session: OSSession
+ config: NodeConfiguration
name: str
- main_session: RemoteSession
- logger: DTSLOG
- _config: NodeConfiguration
- _other_sessions: list[RemoteSession]
+ _logger: DTSLOG
+ _other_sessions: list[OSSession]
def __init__(self, node_config: NodeConfiguration):
- self._config = node_config
+ self.config = node_config
+ self.name = node_config.name
+ self._logger = getLogger(self.name)
+ self.main_session = create_session(self.config, self.name, self._logger)
+
self._other_sessions = []
- self.name = node_config.name
- self.logger = getLogger(self.name)
- self.logger.info(f"Created node: {self.name}")
- self.main_session = create_remote_session(self._config, self.name, self.logger)
+ self._logger.info(f"Created node: {self.name}")
- def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str:
+ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
- Send commands to node and return string before timeout.
+ Perform the execution setup that will be done for each execution
+ this node is part of.
"""
+ self._set_up_execution(execution_config)
- return self.main_session.send_command(cmds, timeout)
+ def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
- def create_session(self, name: str) -> RemoteSession:
- connection = create_remote_session(
- self._config,
+ def tear_down_execution(self) -> None:
+ """
+ Perform the execution teardown that will be done after each execution
+ this node is part of concludes.
+ """
+ self._tear_down_execution()
+
+ def _tear_down_execution(self) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Perform the build target setup that will be done for each build target
+ tested on this node.
+ """
+ self._set_up_build_target(build_target_config)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def tear_down_build_target(self) -> None:
+ """
+ Perform the build target teardown that will be done after each build target
+ tested on this node.
+ """
+ self._tear_down_build_target()
+
+ def _tear_down_build_target(self) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def create_session(self, name: str) -> OSSession:
+ """
+ Create and return a new OSSession tailored to the remote OS.
+ """
+ connection = create_session(
+ self.config,
name,
getLogger(name, node=self.name),
)
self._other_sessions.append(connection)
return connection
- def node_exit(self) -> None:
+ def close(self) -> None:
"""
- Recover all resource before node exit
+ Close all connections and free other resources.
"""
if self.main_session:
self.main_session.close()
for session in self._other_sessions:
session.close()
- self.logger.logger_exit()
+ self._logger.logger_exit()
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
new file mode 100644
index 0000000000..42acb6f9b2
--- /dev/null
+++ b/dts/framework/testbed_model/sut_node.py
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from .node import Node
+
+
+class SutNode(Node):
+ """
+ A class for managing connections to the System under Test, providing
+ methods that retrieve the necessary information about the node (such as
+ CPU, memory and NIC details) and configuration capabilities.
+ """
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 02/10] dts: add ssh command verification
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 01/10] dts: add node and os abstractions Juraj Linkeš
@ 2023-01-17 15:48 ` Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 03/10] dts: add dpdk build on sut Juraj Linkeš
` (9 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:48 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
This is a basic capability needed to check whether command execution
was successful. If it was not, a RemoteCommandExecutionError is raised.
When a failure is expected, the caller is supposed to catch the exception.
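A usage sketch (session here stands for a RemoteSession instance; the commands are illustrative):

# a successful command returns a CommandResult
result = session.send_command("echo hello", verify=True)

# an expected failure is caught by the caller
try:
    session.send_command("ls /nonexistent/dir", verify=True)
except RemoteCommandExecutionError:
    pass  # the non-zero exit status was anticipated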
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 23 +++++++-
.../remote_session/remote/remote_session.py | 55 +++++++++++++------
.../remote_session/remote/ssh_session.py | 11 +++-
3 files changed, 68 insertions(+), 21 deletions(-)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 121a0f7296..e776b42bd9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -21,7 +21,8 @@ class ErrorSeverity(IntEnum):
NO_ERR = 0
GENERIC_ERR = 1
CONFIG_ERR = 2
- SSH_ERR = 3
+ REMOTE_CMD_EXEC_ERR = 3
+ SSH_ERR = 4
class DTSError(Exception):
@@ -90,3 +91,23 @@ class ConfigurationError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
+
+
+class RemoteCommandExecutionError(DTSError):
+ """
+ Raised when a command executed on a Node returns a non-zero exit status.
+ """
+
+ command: str
+ command_return_code: int
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+ def __init__(self, command: str, command_return_code: int):
+ self.command = command
+ self.command_return_code = command_return_code
+
+ def __str__(self) -> str:
+ return (
+ f"Command {self.command} returned a non-zero exit code: "
+ f"{self.command_return_code}"
+ )
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 7c7b30225f..5ac395ec79 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -7,15 +7,29 @@
from abc import ABC, abstractmethod
from framework.config import NodeConfiguration
+from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
@dataclasses.dataclass(slots=True, frozen=True)
-class HistoryRecord:
+class CommandResult:
+ """
+ The result of remote execution of a command.
+ """
+
name: str
command: str
- output: str | int
+ stdout: str
+ stderr: str
+ return_code: int
+
+ def __str__(self) -> str:
+ return (
+ f"stdout: '{self.stdout}'\n"
+ f"stderr: '{self.stderr}'\n"
+ f"return_code: '{self.return_code}'"
+ )
class RemoteSession(ABC):
@@ -34,7 +48,7 @@ class RemoteSession(ABC):
port: int | None
username: str
password: str
- history: list[HistoryRecord]
+ history: list[CommandResult]
_logger: DTSLOG
_node_config: NodeConfiguration
@@ -68,28 +82,33 @@ def _connect(self) -> None:
Create connection to assigned node.
"""
- def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
+ def send_command(
+ self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ ) -> CommandResult:
"""
- Send a command and return the output.
+ Send a command to the connected node and return CommandResult.
+ If verify is True, check the return code of the executed command
+ and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: {command}")
- out = self._send_command(command, timeout)
- self._logger.debug(f"Received from {command}: {out}")
- self._history_add(command=command, output=out)
- return out
+ self._logger.info(f"Sending: '{command}'")
+ result = self._send_command(command, timeout)
+ if verify and result.return_code:
+ self._logger.debug(
+ f"Command '{command}' failed with return code '{result.return_code}'"
+ )
+ self._logger.debug(f"stdout: '{result.stdout}'")
+ self._logger.debug(f"stderr: '{result.stderr}'")
+ raise RemoteCommandExecutionError(command, result.return_code)
+ self._logger.debug(f"Received from '{command}':\n{result}")
+ self.history.append(result)
+ return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return the output
- of the command.
+ Use the underlying protocol to execute the command and return CommandResult.
"""
- def _history_add(self, command: str, output: str) -> None:
- self.history.append(
- HistoryRecord(name=self.name, command=command, output=output)
- )
-
def close(self, force: bool = False) -> None:
"""
Close the remote session and free all used resources.
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 96175f5284..6da5be9fff 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -12,7 +12,7 @@
from framework.logger import DTSLOG
from framework.utils import GREEN, RED
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
class SSHSession(RemoteSession):
@@ -163,7 +163,14 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
+ output = self._send_command_get_output(command, timeout)
+ return_code = int(self._send_command_get_output("echo $?", timeout))
+
+ # we're capturing only stdout
+ return CommandResult(self.name, command, output, "", return_code)
+
+ def _send_command_get_output(self, command: str, timeout: float) -> str:
try:
self._clean_session()
self._send_line(command)
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 03/10] dts: add dpdk build on sut
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 01/10] dts: add node and os abstractions Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 02/10] dts: add ssh command verification Juraj Linkeš
@ 2023-01-17 15:48 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 04/10] dts: add dpdk execution handling Juraj Linkeš
` (8 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:48 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Add the ability to build DPDK and apps on the SUT, using a configured
target.
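A condensed sketch of how the new OSSession build API below is meant to be driven from the SUT node (the variable names here are illustrative):

# on the SUT, after the DPDK tarball has been copied over and extracted
self.main_session.build_dpdk(
    env_vars,                # from get_dpdk_build_env_vars() plus target-specific ones
    meson_args,              # a MesonArgs instance describing the meson setup options
    remote_dpdk_dir,         # where the sources were extracted
    remote_dpdk_build_dir,   # where the build artifacts end up
)
version = self.main_session.get_dpdk_version(remote_dpdk_dir)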
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/config/__init__.py | 2 +
dts/framework/exception.py | 17 ++
dts/framework/remote_session/os_session.py | 90 +++++++++-
dts/framework/remote_session/posix_session.py | 126 ++++++++++++++
.../remote_session/remote/remote_session.py | 38 ++++-
.../remote_session/remote/ssh_session.py | 68 +++++++-
dts/framework/settings.py | 55 +++++-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/dpdk.py | 33 ++++
dts/framework/testbed_model/sut_node.py | 158 ++++++++++++++++++
dts/framework/utils.py | 19 ++-
11 files changed, 589 insertions(+), 18 deletions(-)
create mode 100644 dts/framework/testbed_model/dpdk.py
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index e3e2d74eac..ca61cb10fe 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -91,6 +91,7 @@ class BuildTargetConfiguration:
os: OS
cpu: CPUType
compiler: Compiler
+ compiler_wrapper: str
name: str
@staticmethod
@@ -100,6 +101,7 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
os=OS(d["os"]),
cpu=CPUType(d["cpu"]),
compiler=Compiler(d["compiler"]),
+ compiler_wrapper=d.get("compiler_wrapper", ""),
name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index e776b42bd9..b4545a5a40 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -23,6 +23,7 @@ class ErrorSeverity(IntEnum):
CONFIG_ERR = 2
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
+ DPDK_BUILD_ERR = 10
class DTSError(Exception):
@@ -111,3 +112,19 @@ def __str__(self) -> str:
f"Command {self.command} returned a non-zero exit code: "
f"{self.command_return_code}"
)
+
+
+class RemoteDirectoryExistsError(DTSError):
+ """
+ Raised when a remote directory to be created already exists.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+
+class DPDKBuildError(DTSError):
+ """
+ Raised when DPDK build fails for any reason.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 7a4cc5e669..06d1ffefdd 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -2,10 +2,14 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
-from abc import ABC
+from abc import ABC, abstractmethod
+from pathlib import PurePath
-from framework.config import NodeConfiguration
+from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
+from framework.settings import SETTINGS
+from framework.testbed_model import MesonArgs
+from framework.utils import EnvVarsDict
from .remote import RemoteSession, create_remote_session
@@ -44,3 +48,85 @@ def is_alive(self) -> bool:
Check whether the remote session is still responding.
"""
return self.remote_session.is_alive()
+
+ @abstractmethod
+ def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+ """
+ Try to find DPDK remote dir in remote_dir.
+ """
+
+ @abstractmethod
+ def get_remote_tmp_dir(self) -> PurePath:
+ """
+ Get the path of the temporary directory of the remote OS.
+ """
+
+ @abstractmethod
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for the target architecture. Get
+ information from the node if needed.
+ """
+
+ @abstractmethod
+ def join_remote_path(self, *args: str | PurePath) -> PurePath:
+ """
+ Join path parts using the path separator that fits the remote OS.
+ """
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file
+ on the remote Node associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated remote Node to destination_file on local storage.
+ """
+
+ @abstractmethod
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ """
+ Remove remote directory, by default remove recursively and forcefully.
+ """
+
+ @abstractmethod
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ """
+ Extract remote tarball in place. If expected_dir is a non-empty string, check
+ whether the dir exists after extracting the archive.
+ """
+
+ @abstractmethod
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ """
+ Build DPDK in the input dir with specified environment variables and meson
+ arguments.
+ """
+
+ @abstractmethod
+ def get_dpdk_version(self, version_path: str | PurePath) -> str:
+ """
+ Inspect DPDK version on the remote node from version_path.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index 110b6a4804..d4da9f114e 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,14 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from pathlib import PurePath, PurePosixPath
+
+from framework.config import Architecture
+from framework.exception import DPDKBuildError, RemoteCommandExecutionError
+from framework.settings import SETTINGS
+from framework.testbed_model import MesonArgs
+from framework.utils import EnvVarsDict
+
from .os_session import OSSession
@@ -10,3 +18,121 @@ class PosixSession(OSSession):
An intermediary class implementing the Posix compliant parts of
Linux and other OS remote sessions.
"""
+
+ @staticmethod
+ def combine_short_options(**opts: bool) -> str:
+ ret_opts = ""
+ for opt, include in opts.items():
+ if include:
+ ret_opts = f"{ret_opts}{opt}"
+
+ if ret_opts:
+ ret_opts = f" -{ret_opts}"
+
+ return ret_opts
+
+ def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+ remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
+ result = self.remote_session.send_command(f"ls -d {remote_guess} | tail -1")
+ return PurePosixPath(result.stdout)
+
+ def get_remote_tmp_dir(self) -> PurePosixPath:
+ return PurePosixPath("/tmp")
+
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create the extra environment variables needed for an i686 build. Get
+ information from the node if needed.
+ """
+ env_vars = {}
+ if arch == Architecture.i686:
+ # find the pkg-config path and store it in PKG_CONFIG_LIBDIR
+ out = self.remote_session.send_command("find /usr -type d -name pkgconfig")
+ pkg_path = ""
+ res_path = out.stdout.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert pkg_path != "", "i386 pkg-config path not found"
+
+ env_vars["CFLAGS"] = "-m32"
+ env_vars["PKG_CONFIG_LIBDIR"] = pkg_path
+
+ return env_vars
+
+ def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+ return PurePosixPath(*args)
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ self.remote_session.copy_file(source_file, destination_file, source_remote)
+
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ opts = PosixSession.combine_short_options(r=recursive, f=force)
+ self.remote_session.send_command(f"rm{opts} {remote_dir_path}")
+
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ self.remote_session.send_command(
+ f"tar xfm {remote_tarball_path} "
+ f"-C {PurePosixPath(remote_tarball_path).parent}",
+ 60,
+ )
+ if expected_dir:
+ self.remote_session.send_command(f"ls {expected_dir}", verify=True)
+
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ try:
+ if rebuild:
+ # reconfigure, then build
+ self._logger.info("Reconfiguring DPDK build.")
+ self.remote_session.send_command(
+ f"meson configure {meson_args} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+ else:
+ # fresh build - remove target dir first, then build from scratch
+ self._logger.info("Configuring DPDK build from scratch.")
+ self.remove_remote_dir(remote_dpdk_build_dir)
+ self.remote_session.send_command(
+ f"meson {meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+
+ self._logger.info("Building DPDK.")
+ self.remote_session.send_command(
+ f"ninja -C {remote_dpdk_build_dir}", timeout, verify=True, env=env_vars
+ )
+ except RemoteCommandExecutionError as e:
+ raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.") from e
+
+ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+ out = self.remote_session.send_command(
+ f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+ )
+ return out.stdout
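To make the short-option handling above concrete, a quick sketch (illustrative only; the directory path is assumed):

    # illustrative sketch, not part of the patch
    opts = PosixSession.combine_short_options(r=True, f=True)  # " -rf"
    # remove_remote_dir("/tmp/dpdk-tmp") thus sends: "rm -rf /tmp/dpdk-tmp"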
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 5ac395ec79..91dee3cb4f 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -5,11 +5,13 @@
import dataclasses
from abc import ABC, abstractmethod
+from pathlib import PurePath
from framework.config import NodeConfiguration
from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
@dataclasses.dataclass(slots=True, frozen=True)
@@ -83,15 +85,22 @@ def _connect(self) -> None:
"""
def send_command(
- self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ self,
+ command: str,
+ timeout: float = SETTINGS.timeout,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
) -> CommandResult:
"""
- Send a command to the connected node and return CommandResult.
+ Send a command to the connected node using optional env vars
+ and return CommandResult.
If verify is True, check the return code of the executed command
and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: '{command}'")
- result = self._send_command(command, timeout)
+ self._logger.info(
+ f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
+ )
+ result = self._send_command(command, timeout, env)
if verify and result.return_code:
self._logger.debug(
f"Command '{command}' failed with return code '{result.return_code}'"
@@ -104,9 +113,12 @@ def send_command(
return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> CommandResult:
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return CommandResult.
+ Use the underlying protocol to execute the command using optional env vars
+ and return CommandResult.
"""
def close(self, force: bool = False) -> None:
@@ -127,3 +139,17 @@ def is_alive(self) -> bool:
"""
Check whether the remote session is still responding.
"""
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file on the remote Node
+ associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated Node to destination_file on local filesystem.
+ """
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 6da5be9fff..d0863d8791 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -4,13 +4,15 @@
# Copyright(c) 2022-2023 University of New Hampshire
import time
+from pathlib import PurePath
+import pexpect # type: ignore
from pexpect import pxssh # type: ignore
from framework.config import NodeConfiguration
from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
from framework.logger import DTSLOG
-from framework.utils import GREEN, RED
+from framework.utils import GREEN, RED, EnvVarsDict
from .remote_session import CommandResult, RemoteSession
@@ -163,16 +165,22 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> CommandResult:
- output = self._send_command_get_output(command, timeout)
- return_code = int(self._send_command_get_output("echo $?", timeout))
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
+ output = self._send_command_get_output(command, timeout, env)
+ return_code = int(self._send_command_get_output("echo $?", timeout, None))
# we're capturing only stdout
return CommandResult(self.name, command, output, "", return_code)
- def _send_command_get_output(self, command: str, timeout: float) -> str:
+ def _send_command_get_output(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> str:
try:
self._clean_session()
+ if env:
+ command = f"{env} {command}"
self._send_line(command)
except Exception as e:
raise e
@@ -189,3 +197,53 @@ def _close(self, force: bool = False) -> None:
else:
if self.is_alive():
self.session.logout()
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Send a local file to a remote host.
+ """
+ if source_remote:
+ source_file = f"{self.username}@{self.ip}:{source_file}"
+ else:
+ destination_file = f"{self.username}@{self.ip}:{destination_file}"
+
+ port = ""
+ if self.port:
+ port = f" -P {self.port}"
+
+ # this is not OS agnostic, find a Pythonic (and thus OS agnostic) way
+ # TODO Fabric should handle this
+ command = (
+ f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes"
+ f" {source_file} {destination_file}"
+ )
+
+ self._spawn_scp(command)
+
+ def _spawn_scp(self, scp_cmd: str) -> None:
+ """
+ Transfer a file with SCP
+ """
+ self._logger.info(scp_cmd)
+ p: pexpect.spawn = pexpect.spawn(scp_cmd)
+ time.sleep(0.5)
+ ssh_newkey: str = "Are you sure you want to continue connecting"
+ i: int = p.expect(
+ [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+ )
+ if i == 0: # add once in trust list
+ p.sendline("yes")
+ i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+ if i == 1:
+ time.sleep(0.5)
+ p.sendline(self.password)
+ p.expect("Exit status 0", 60)
+ if i == 4:
+ self._logger.error("SCP TIMEOUT error %d" % i)
+ p.close()
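For context, with hypothetical connection values (user root, host 192.0.2.10, port 2222), the constructed scp command would look roughly like this (a sketch, values assumed):

    # hypothetical values, for illustration only
    # scp -v -P 2222 -o NoHostAuthenticationForLocalhost=yes \
    #     /tmp/dpdk.tar.xz root@192.0.2.10:/tmp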
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 800f2c7b7f..a298b1eaac 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -1,14 +1,17 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import argparse
import os
from collections.abc import Callable, Iterable, Sequence
from dataclasses import dataclass
+from pathlib import Path
from typing import Any, TypeVar
+from .exception import ConfigurationError
+
_T = TypeVar("_T")
@@ -60,6 +63,9 @@ class _Settings:
output_dir: str
timeout: float
verbose: bool
+ skip_setup: bool
+ dpdk_ref: Path | str
+ compile_timeout: float
def _get_parser() -> argparse.ArgumentParser:
@@ -88,6 +94,7 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
+ type=float,
required=False,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
@@ -103,16 +110,58 @@ def _get_parser() -> argparse.ArgumentParser:
"to the console.",
)
+ parser.add_argument(
+ "-s",
+ "--skip-setup",
+ action=_env_arg("DTS_SKIP_SETUP"),
+ required=False,
+ help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+ )
+
+ parser.add_argument(
+ "--dpdk-ref",
+ "--git",
+ "--snapshot",
+ action=_env_arg("DTS_DPDK_REF"),
+ default="dpdk.tar.xz",
+ required=False,
+ help="[DTS_DPDK_REF] Reference to DPDK source code, "
+ "can be either a path to a tarball or a git refspec. "
+ "In case of a tarball, it will be extracted in the same directory.",
+ )
+
+ parser.add_argument(
+ "--compile-timeout",
+ action=_env_arg("DTS_COMPILE_TIMEOUT"),
+ default=1200,
+ type=float,
+ required=False,
+ help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
+ )
+
return parser
+def _check_dpdk_ref(parsed_args: argparse.Namespace) -> None:
+ if not os.path.exists(parsed_args.dpdk_ref):
+ raise ConfigurationError(
+ f"DPDK tarball '{parsed_args.dpdk_ref}' doesn't exist."
+ )
+ else:
+ parsed_args.dpdk_ref = Path(parsed_args.dpdk_ref)
+
+
def _get_settings() -> _Settings:
parsed_args = _get_parser().parse_args()
+ _check_dpdk_ref(parsed_args)
return _Settings(
config_file_path=parsed_args.config_file,
output_dir=parsed_args.output_dir,
- timeout=float(parsed_args.timeout),
+ timeout=parsed_args.timeout,
verbose=(parsed_args.verbose == "Y"),
+ skip_setup=(parsed_args.skip_setup == "Y"),
+ dpdk_ref=parsed_args.dpdk_ref,
+ compile_timeout=parsed_args.compile_timeout,
)
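To illustrate how the new options are consumed, either invocation below selects the same DPDK tarball (hypothetical file name and entry point):

    # hypothetical invocation, for illustration only
    #   ./main.py --dpdk-ref dpdk-22.11.tar.xz --compile-timeout 1800
    #   DTS_DPDK_REF=dpdk-22.11.tar.xz DTS_COMPILE_TIMEOUT=1800 ./main.py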
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ead9db482..96e2ab7c3f 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,5 +9,6 @@
# pylama:ignore=W0611
+from .dpdk import MesonArgs
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/dpdk.py b/dts/framework/testbed_model/dpdk.py
new file mode 100644
index 0000000000..0526974f72
--- /dev/null
+++ b/dts/framework/testbed_model/dpdk.py
@@ -0,0 +1,33 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Various utilities used for configuring, building and running DPDK.
+"""
+
+
+class MesonArgs(object):
+ """
+ Aggregate the arguments needed to build DPDK:
+ default_library: Default library type, Meson allows "shared", "static" and "both".
+ Defaults to None, in which case the argument won't be used.
+ Keyword arguments: The arguments found in meson_options.txt in the root DPDK directory.
+ Do not use -D with them, for example: enable_kmods=True.
+ """
+
+ default_library: str
+
+ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+ self.default_library = (
+ f"--default-library={default_library}" if default_library else ""
+ )
+ self.dpdk_args = " ".join(
+ (
+ f"-D{dpdk_arg_name}={dpdk_arg_value}"
+ for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
+ )
+ )
+
+ def __str__(self) -> str:
+ return " ".join(f"{self.default_library} {self.dpdk_args}".split())
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 42acb6f9b2..c97d995b31 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -2,6 +2,15 @@
# Copyright(c) 2010-2014 Intel Corporation
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+import os
+import tarfile
+from pathlib import PurePath
+
+from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, skip_setup
+
+from .dpdk import MesonArgs
from .node import Node
@@ -10,4 +19,153 @@ class SutNode(Node):
A class for managing connections to the System under Test, providing
methods that retrieve the necessary information about the node (such as
CPU, memory and NIC details) and configuration capabilities.
+ Another key capability is building DPDK according to given build target.
"""
+
+ _build_target_config: BuildTargetConfiguration | None
+ _env_vars: EnvVarsDict
+ _remote_tmp_dir: PurePath
+ __remote_dpdk_dir: PurePath | None
+ _dpdk_version: str | None
+ _app_compile_timeout: float
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(SutNode, self).__init__(node_config)
+ self._build_target_config = None
+ self._env_vars = EnvVarsDict()
+ self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
+ self.__remote_dpdk_dir = None
+ self._dpdk_version = None
+ self._app_compile_timeout = 90
+
+ @property
+ def _remote_dpdk_dir(self) -> PurePath:
+ if self.__remote_dpdk_dir is None:
+ self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
+ return self.__remote_dpdk_dir
+
+ @_remote_dpdk_dir.setter
+ def _remote_dpdk_dir(self, value: PurePath) -> None:
+ self.__remote_dpdk_dir = value
+
+ @property
+ def remote_dpdk_build_dir(self) -> PurePath:
+ if self._build_target_config:
+ return self.main_session.join_remote_path(
+ self._remote_dpdk_dir, self._build_target_config.name
+ )
+ else:
+ return self.main_session.join_remote_path(self._remote_dpdk_dir, "build")
+
+ @property
+ def dpdk_version(self) -> str:
+ if self._dpdk_version is None:
+ self._dpdk_version = self.main_session.get_dpdk_version(
+ self._remote_dpdk_dir
+ )
+ return self._dpdk_version
+
+ def _guess_dpdk_remote_dir(self) -> PurePath:
+ return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Setup DPDK on the SUT node.
+ """
+ self._configure_build_target(build_target_config)
+ self._copy_dpdk_tarball()
+ self._build_dpdk()
+
+ def _configure_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Populate common environment variables and set build target config.
+ """
+ self._env_vars = EnvVarsDict()
+ self._build_target_config = build_target_config
+ self._env_vars.update(
+ self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
+ )
+ self._env_vars["CC"] = build_target_config.compiler.name
+ if build_target_config.compiler_wrapper:
+ self._env_vars["CC"] = (
+ f"'{build_target_config.compiler_wrapper} "
+ f"{build_target_config.compiler.name}'"
+ )
+
+ @skip_setup
+ def _copy_dpdk_tarball(self) -> None:
+ """
+ Copy to and extract DPDK tarball on the SUT node.
+ """
+ self._logger.info("Copying DPDK tarball to SUT.")
+ self.main_session.copy_file(SETTINGS.dpdk_ref, self._remote_tmp_dir)
+
+ # construct remote tarball path
+ # the basename is the same on local host and on remote Node
+ remote_tarball_path = self.main_session.join_remote_path(
+ self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_ref)
+ )
+
+ # construct remote path after extracting
+ with tarfile.open(SETTINGS.dpdk_ref) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+ self._remote_dpdk_dir = self.main_session.join_remote_path(
+ self._remote_tmp_dir, dpdk_top_dir
+ )
+
+ self._logger.info(
+ f"Extracting DPDK tarball on SUT: "
+ f"'{remote_tarball_path}' into '{self._remote_dpdk_dir}'."
+ )
+ # clean remote path where we're extracting
+ self.main_session.remove_remote_dir(self._remote_dpdk_dir)
+
+ # then extract to remote path
+ self.main_session.extract_remote_tarball(
+ remote_tarball_path, self._remote_dpdk_dir
+ )
+
+ @skip_setup
+ def _build_dpdk(self) -> None:
+ """
+ Build DPDK. Uses the already configured target. Assumes that the tarball has
+ already been copied to and extracted on the SUT node.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ )
+
+ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
+ """
+ Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
+ When app_name is 'all', build all example apps.
+ When app_name is any other string, tries to build that example app.
+ Return the directory path of the built app. If building all apps, return
+ the path to the examples directory (where all apps reside).
+ The meson_dpdk_args are keyword arguments
+ found in meson_options.txt in the root DPDK directory. Do not use -D with them,
+ for example: enable_kmods=True.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(examples=app_name, **meson_dpdk_args),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ rebuild=True,
+ timeout=self._app_compile_timeout,
+ )
+
+ if app_name == "all":
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples"
+ )
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
+ )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index c28c8f1082..611071604b 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -1,9 +1,12 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
+from typing import Callable
+
+from .settings import SETTINGS
def check_dts_python_version() -> None:
@@ -22,9 +25,21 @@ def check_dts_python_version() -> None:
print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+def skip_setup(func: Callable) -> Callable[..., None]:
+ if SETTINGS.skip_setup:
+ return lambda *args: None
+ else:
+ return func
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+
+class EnvVarsDict(dict):
+ def __str__(self) -> str:
+ return " ".join(["=".join(item) for item in self.items()])
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 04/10] dts: add dpdk execution handling
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (2 preceding siblings ...)
2023-01-17 15:48 ` [PATCH v3 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 05/10] dts: add node memory setup Juraj Linkeš
` (7 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Add methods for setting up and shutting down DPDK apps and for
constructing EAL parameters.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 4 +
dts/framework/config/__init__.py | 8 +
dts/framework/config/conf_yaml_schema.json | 24 ++
dts/framework/remote_session/linux_session.py | 18 ++
dts/framework/remote_session/os_session.py | 23 +-
dts/framework/remote_session/posix_session.py | 83 ++++++
dts/framework/testbed_model/__init__.py | 8 +
dts/framework/testbed_model/dpdk.py | 45 ++++
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 253 ++++++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 ++
dts/framework/testbed_model/node.py | 46 ++++
dts/framework/testbed_model/sut_node.py | 82 +++++-
dts/framework/utils.py | 20 ++
14 files changed, 655 insertions(+), 2 deletions(-)
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 03696d2bab..1648e5c3c5 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,4 +13,8 @@ nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ arch: x86_64
os: linux
+ lcores: ""
+ use_first_core: false
+ memory_channels: 4
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ca61cb10fe..17b917f3b3 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -72,7 +72,11 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ arch: Architecture
os: OS
+ lcores: str
+ use_first_core: bool
+ memory_channels: int
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -81,7 +85,11 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ arch=Architecture(d["arch"]),
os=OS(d["os"]),
+ lcores=d.get("lcores", "1"),
+ use_first_core=d.get("use_first_core", False),
+ memory_channels=d.get("memory_channels", 1),
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 9170307fbe..81f304da5e 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,14 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64",
+ "arm64",
+ "ppc64le"
+ ]
+ },
"OS": {
"type": "string",
"enum": [
@@ -92,8 +100,23 @@
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
},
+ "arch": {
+ "$ref": "#/definitions/ARCH"
+ },
"os": {
"$ref": "#/definitions/OS"
+ },
+ "lcores": {
+ "type": "string",
+ "description": "Optional comma-separated list of logical cores to use, e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all lcores."
+ },
+ "use_first_core": {
+ "type": "boolean",
+ "description": "Indicate whether DPDK should use the first physical core. It won't be used by default."
+ },
+ "memory_channels": {
+ "type": "integer",
+ "description": "How many memory channels to use. Optional, defaults to 1."
}
},
"additionalProperties": false,
@@ -101,6 +124,7 @@
"name",
"hostname",
"user",
+ "arch",
"os"
]
},
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 9d14166077..6809102038 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.testbed_model import LogicalCore
+
from .posix_session import PosixSession
@@ -9,3 +11,19 @@ class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
"""
+
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ cpu_info = self.remote_session.send_command(
+ "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#"
+ ).stdout
+ lcores = []
+ for cpu_line in cpu_info.splitlines():
+ # lscpu prints numbers as strings; convert them before comparing and storing
+ lcore, core, socket, node = map(int, cpu_line.split(","))
+ if not use_first_core and core == 0 and socket == 0:
+ self._logger.info("Not using the first physical core.")
+ continue
+ lcores.append(LogicalCore(lcore, core, socket, node))
+ return lcores
+
+ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+ return dpdk_prefix
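To make the parsing above concrete, a sketch with assumed lscpu output lines:

    # hypothetical "lscpu -p=CPU,CORE,SOCKET,NODE" lines, for illustration:
    # "0,0,0,0" - skipped when use_first_core is False (first physical core)
    # "1,1,0,0" - kept as LogicalCore(lcore=1, core=1, socket=0, node=0)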
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 06d1ffefdd..c30753e0b8 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -3,12 +3,13 @@
# Copyright(c) 2023 University of New Hampshire
from abc import ABC, abstractmethod
+from collections.abc import Iterable
from pathlib import PurePath
from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.settings import SETTINGS
-from framework.testbed_model import MesonArgs
+from framework.testbed_model import LogicalCore, MesonArgs
from framework.utils import EnvVarsDict
from .remote import RemoteSession, create_remote_session
@@ -130,3 +131,23 @@ def get_dpdk_version(self, version_path: str | PurePath) -> str:
"""
Inspect DPDK version on the remote node from version_path.
"""
+
+ @abstractmethod
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ """
+ Compose a list of LogicalCores present on the remote node.
+ If use_first_core is False, the first physical core won't be used.
+ """
+
+ @abstractmethod
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ """
+ Kill and clean up all DPDK apps identified by dpdk_prefix_list. If
+ dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+ """
+
+ @abstractmethod
+ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+ """
+ Get the DPDK file prefix that will be used when running DPDK apps.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index d4da9f114e..4c8474b804 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import re
+from collections.abc import Iterable
from pathlib import PurePath, PurePosixPath
from framework.config import Architecture
@@ -136,3 +138,84 @@ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
)
return out.stdout
+
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ self._logger.info("Cleaning up DPDK apps.")
+ dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
+ if dpdk_runtime_dirs:
+ # kill and cleanup only if DPDK is running
+ dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs)
+ for dpdk_pid in dpdk_pids:
+ self.remote_session.send_command(f"kill -9 {dpdk_pid}", 20)
+ self._check_dpdk_hugepages(dpdk_runtime_dirs)
+ self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
+
+ def _get_dpdk_runtime_dirs(
+ self, dpdk_prefix_list: Iterable[str]
+ ) -> list[PurePosixPath]:
+ prefix = PurePosixPath("/var", "run", "dpdk")
+ if not dpdk_prefix_list:
+ remote_prefixes = self._list_remote_dirs(prefix)
+ if not remote_prefixes:
+ dpdk_prefix_list = []
+ else:
+ dpdk_prefix_list = remote_prefixes
+
+ return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
+
+ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
+ """
+ Return a list of directories of the remote_dir.
+ If remote_path doesn't exist, return None.
+ """
+ out = self.remote_session.send_command(
+ f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+ ).stdout
+ if "No such file or directory" in out:
+ return None
+ else:
+ return out.splitlines()
+
+ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+ pids = []
+ pid_regex = r"p(\d+)"
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config")
+ if self._remote_files_exists(dpdk_config_file):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {dpdk_config_file}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ for out_line in out.splitlines():
+ match = re.match(pid_regex, out_line)
+ if match:
+ pids.append(int(match.group(1)))
+ return pids
+
+ def _remote_files_exists(self, remote_path: PurePath) -> bool:
+ result = self.remote_session.send_command(f"test -e {remote_path}")
+ return not result.return_code
+
+ def _check_dpdk_hugepages(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
+ if self._remote_files_exists(hugepage_info):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {hugepage_info}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ self._logger.warning("Some DPDK processes did not free hugepages.")
+ self._logger.warning("*******************************************")
+ self._logger.warning(out)
+ self._logger.warning("*******************************************")
+
+ def _remove_dpdk_runtime_dirs(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ self.remove_remote_dir(dpdk_runtime_dir)
+
+ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+ return ""
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 96e2ab7c3f..2be5169dc8 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -10,5 +10,13 @@
# pylama:ignore=W0611
from .dpdk import MesonArgs
+from .hw import (
+ LogicalCore,
+ LogicalCoreAmount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ VirtualDevice,
+ lcore_filter,
+)
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/dpdk.py b/dts/framework/testbed_model/dpdk.py
index 0526974f72..9b3a9e7381 100644
--- a/dts/framework/testbed_model/dpdk.py
+++ b/dts/framework/testbed_model/dpdk.py
@@ -6,6 +6,8 @@
Various utilities used for configuring, building and running DPDK.
"""
+from .hw import LogicalCoreList, VirtualDevice
+
class MesonArgs(object):
"""
@@ -31,3 +33,46 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
def __str__(self) -> str:
return " ".join(f"{self.default_library} {self.dpdk_args}".split())
+
+
+class EalParameters(object):
+ def __init__(
+ self,
+ lcore_list: LogicalCoreList,
+ memory_channels: int,
+ prefix: str,
+ no_pci: bool,
+ vdevs: list[VirtualDevice],
+ other_eal_param: str,
+ ):
+ """
+ Generate eal parameters character string;
+ :param lcore_list: the list of logical cores to use.
+ :param memory_channels: the number of memory channels to use.
+ :param prefix: set file prefix string, eg:
+ prefix='vf'
+ :param no_pci: switch of disable PCI bus eg:
+ no_pci=True
+ :param vdevs: virtual device list, eg:
+ vdevs=['net_ring0', 'net_ring1']
+ :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments'
+ """
+ self._lcore_list = f"-l {lcore_list}"
+ self._memory_channels = f"-n {memory_channels}"
+ self._prefix = prefix
+ if prefix:
+ self._prefix = f"--file-prefix={prefix}"
+ self._no_pci = "--no-pci" if no_pci else ""
+ self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs)
+ self._other_eal_param = other_eal_param
+
+ def __str__(self) -> str:
+ return (
+ f"{self._lcore_list} "
+ f"{self._memory_channels} "
+ f"{self._prefix} "
+ f"{self._no_pci} "
+ f"{self._vdevs} "
+ f"{self._other_eal_param}"
+ )
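A short sketch of the string this class renders (illustrative values; spacing as produced by __str__):

    # illustrative sketch, not part of the patch
    params = EalParameters(
        lcore_list=LogicalCoreList("1,2"),
        memory_channels=4,
        prefix="dpdk",
        no_pci=True,
        vdevs=[VirtualDevice("net_ring0")],
        other_eal_param="",
    )
    str(params)  # "-l 1-2 -n 4 --file-prefix=dpdk --no-pci --vdev net_ring0 "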
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
new file mode 100644
index 0000000000..fb4cdac8e3
--- /dev/null
+++ b/dts/framework/testbed_model/hw/__init__.py
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from .cpu import (
+ LogicalCore,
+ LogicalCoreAmount,
+ LogicalCoreAmountFilter,
+ LogicalCoreFilter,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+)
+from .virtual_device import VirtualDevice
+
+
+def lcore_filter(
+ core_list: list[LogicalCore],
+ filter_specifier: LogicalCoreAmount | LogicalCoreList,
+ ascending: bool,
+) -> LogicalCoreFilter:
+ if isinstance(filter_specifier, LogicalCoreList):
+ return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+ elif isinstance(filter_specifier, LogicalCoreAmount):
+ return LogicalCoreAmountFilter(core_list, filter_specifier, ascending)
+ else:
+ raise ValueError(f"Unsupported filter {filter_specifier!r}")
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py
new file mode 100644
index 0000000000..96c46ee8c5
--- /dev/null
+++ b/dts/framework/testbed_model/hw/cpu.py
@@ -0,0 +1,253 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+import dataclasses
+from abc import ABC, abstractmethod
+from collections.abc import Iterable
+from dataclasses import dataclass
+
+from framework.utils import expand_range
+
+
+@dataclass(slots=True, frozen=True)
+class LogicalCore(object):
+ """
+ Representation of a CPU core. A physical core is represented in the OS
+ as multiple logical cores (lcores) if CPU multithreading is enabled.
+ """
+
+ lcore: int
+ core: int
+ socket: int
+ node: int
+
+ def __int__(self) -> int:
+ return self.lcore
+
+
+class LogicalCoreList(object):
+ """
+ Convert these options into a list of logical core ids.
+ lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
+ lcore_list=[0,1,2,3] - a list of int indices
+ lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
+ lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
+
+ The class creates a unified format used across the framework and allows
+ the user to use either a str representation (using str(instance) or directly
+ in f-strings) or a list representation (by accessing instance.lcore_list).
+ Empty lcore_list is allowed.
+ """
+
+ _lcore_list: list[int]
+ _lcore_str: str
+
+ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+ self._lcore_list = []
+ if isinstance(lcore_list, str):
+ lcore_list = lcore_list.split(",")
+ for lcore in lcore_list:
+ if isinstance(lcore, str):
+ self._lcore_list.extend(expand_range(lcore))
+ else:
+ self._lcore_list.append(int(lcore))
+
+ # the input lcores may not be sorted
+ self._lcore_list.sort()
+ self._lcore_str = ",".join(
+ self._get_consecutive_lcores_range(self._lcore_list)
+ )
+
+ @property
+ def lcore_list(self) -> list[int]:
+ return self._lcore_list
+
+ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
+ formatted_core_list = []
+ segment = lcore_ids_list[:1]
+ for lcore_id in lcore_ids_list[1:]:
+ if lcore_id - segment[-1] == 1:
+ segment.append(lcore_id)
+ else:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}"
+ if len(segment) > 1
+ else f"{segment[0]}"
+ )
+ current_core_index = lcore_ids_list.index(lcore_id)
+ formatted_core_list.extend(
+ self._get_consecutive_lcores_range(
+ lcore_ids_list[current_core_index:]
+ )
+ )
+ segment.clear()
+ break
+ if len(segment) > 0:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+ )
+ return formatted_core_list
+
+ def __str__(self) -> str:
+ return self._lcore_str
+
+
+@dataclasses.dataclass(slots=True, frozen=True)
+class LogicalCoreAmount(object):
+ """
+ Define the amount of logical cores to use.
+ If sockets is not None, socket_amount is ignored.
+ """
+
+ lcores_per_core: int = 1
+ cores_per_socket: int = 2
+ socket_amount: int = 1
+ sockets: list[int] | None = None
+
+
+class LogicalCoreFilter(ABC):
+ """
+ Filter according to the input filter specifier. Each filter needs to be
+ implemented in a derived class.
+ This class only implements operations common to all filters, such as sorting
+ the list to be filtered beforehand.
+ """
+
+ _filter_specifier: LogicalCoreAmount | LogicalCoreList
+ _lcores_to_filter: list[LogicalCore]
+
+ def __init__(
+ self,
+ lcore_list: list[LogicalCore],
+ filter_specifier: LogicalCoreAmount | LogicalCoreList,
+ ascending: bool = True,
+ ):
+ self._filter_specifier = filter_specifier
+
+ # sorting by core is needed in case hyperthreading is enabled
+ self._lcores_to_filter = sorted(
+ lcore_list, key=lambda x: x.core, reverse=not ascending
+ )
+ self.filter()
+
+ @abstractmethod
+ def filter(self) -> list[LogicalCore]:
+ """
+ Use self._filter_specifier to filter self._lcores_to_filter
+ and return the list of filtered LogicalCores.
+ self._lcores_to_filter is a sorted copy of the original list,
+ so it may be modified.
+ """
+
+
+class LogicalCoreAmountFilter(LogicalCoreFilter):
+ """
+ Filter the input list of LogicalCores according to specified rules:
+ Use cores from the specified amount of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_amount.
+ From each of those sockets, use only cores_per_socket of cores.
+ And for each core, use lcores_per_core of logical cores. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+
+ _filter_specifier: LogicalCoreAmount
+
+ def filter(self) -> list[LogicalCore]:
+ return self._filter_cores(self._filter_sockets(self._lcores_to_filter))
+
+ def _filter_sockets(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> list[LogicalCore]:
+ allowed_sockets: set[int] = set()
+ socket_amount = self._filter_specifier.socket_amount
+ if self._filter_specifier.sockets:
+ socket_amount = len(self._filter_specifier.sockets)
+ allowed_sockets = set(self._filter_specifier.sockets)
+
+ filtered_lcores = []
+ for lcore in lcores_to_filter:
+ if not self._filter_specifier.sockets:
+ if len(allowed_sockets) < socket_amount:
+ allowed_sockets.add(lcore.socket)
+ if lcore.socket in allowed_sockets:
+ filtered_lcores.append(lcore)
+
+ if len(allowed_sockets) < socket_amount:
+ raise ValueError(
+ f"The amount of sockets from which to use cores "
+ f"({socket_amount}) exceeds the actual amount present "
+ f"on the node ({len(allowed_sockets)})"
+ )
+
+ return filtered_lcores
+
+ def _filter_cores(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> list[LogicalCore]:
+ # no need to use an ordered dict: from Python 3.7,
+ # dicts preserve insertion order
+ allowed_lcore_per_core_count_map: dict[int, int] = {}
+ filtered_lcores = []
+ for lcore in lcores_to_filter:
+ if lcore.core in allowed_lcore_per_core_count_map:
+ lcore_count = allowed_lcore_per_core_count_map[lcore.core]
+ if self._filter_specifier.lcores_per_core > lcore_count:
+ # only add lcores of the given core
+ allowed_lcore_per_core_count_map[lcore.core] += 1
+ filtered_lcores.append(lcore)
+ else:
+ raise ValueError(
+ f"The amount of logical cores per core to use "
+ f"({self._filter_specifier.lcores_per_core}) "
+ f"exceeds the actual amount present. Is hyperthreading enabled?"
+ )
+ elif self._filter_specifier.cores_per_socket > len(
+ allowed_lcore_per_core_count_map
+ ):
+ # only add lcores if we need more
+ allowed_lcore_per_core_count_map[lcore.core] = 1
+ filtered_lcores.append(lcore)
+ else:
+ # lcores are sorted by core, at this point we won't encounter new cores
+ break
+
+ cores_per_socket = len(allowed_lcore_per_core_count_map)
+ if cores_per_socket < self._filter_specifier.cores_per_socket:
+ raise ValueError(
+ f"The amount of cores per socket to use "
+ f"({self._filter_specifier.cores_per_socket}) "
+ f"exceeds the actual amount present ({cores_per_socket})"
+ )
+
+ return filtered_lcores
+
+
+class LogicalCoreListFilter(LogicalCoreFilter):
+ """
+ Filter the input list of Logical Cores according to the input list of
+ lcore indices.
+ An empty LogicalCoreList won't filter anything.
+ """
+
+ _filter_specifier: LogicalCoreList
+
+ def filter(self) -> list[LogicalCore]:
+ if not len(self._filter_specifier.lcore_list):
+ return self._lcores_to_filter
+
+ filtered_lcores = []
+ for core in self._lcores_to_filter:
+ if core.lcore in self._filter_specifier.lcore_list:
+ filtered_lcores.append(core)
+
+ if len(filtered_lcores) != len(self._filter_specifier.lcore_list):
+ raise ValueError(
+ f"Not all logical cores from {self._filter_specifier.lcore_list} "
+ f"were found among {self._lcores_to_filter}"
+ )
+
+ return filtered_lcores
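To make the accepted formats concrete, a small sketch of LogicalCoreList normalization (illustrative):

    # illustrative sketch, not part of the patch
    str(LogicalCoreList("0,1,2-3"))  # "0-3" - consecutive ids are collapsed
    str(LogicalCoreList([4, 6, 7]))  # "4,6-7"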
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/hw/virtual_device.py
new file mode 100644
index 0000000000..eb664d9f17
--- /dev/null
+++ b/dts/framework/testbed_model/hw/virtual_device.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+
+class VirtualDevice(object):
+ """
+ Base class for virtual devices used by DPDK.
+ """
+
+ name: str
+
+ def __init__(self, name: str):
+ self.name = name
+
+ def __str__(self) -> str:
+ return self.name
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index a37f7921e0..cf2af2ca72 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -15,6 +15,14 @@
from framework.logger import DTSLOG, getLogger
from framework.remote_session import OSSession, create_session
+from .hw import (
+ LogicalCore,
+ LogicalCoreAmount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ lcore_filter,
+)
+
class Node(object):
"""
@@ -26,6 +34,7 @@ class Node(object):
main_session: OSSession
config: NodeConfiguration
name: str
+ lcores: list[LogicalCore]
_logger: DTSLOG
_other_sessions: list[OSSession]
@@ -35,6 +44,12 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._get_remote_cpus()
+ # filter the node lcores according to user config
+ self.lcores = LogicalCoreListFilter(
+ self.lcores, LogicalCoreList(self.config.lcores)
+ ).filter()
+
self._other_sessions = []
self._logger.info(f"Created node: {self.name}")
@@ -107,6 +122,37 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
+ def filter_lcores(
+ self,
+ filter_specifier: LogicalCoreAmount | LogicalCoreList,
+ ascending: bool = True,
+ ) -> list[LogicalCore]:
+ """
+ Filter the LogicalCores found on the Node according to specified rules:
+ Use cores from the specified amount of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_amount.
+ From each of those sockets, use only cores_per_socket of cores.
+ And for each core, use lcores_per_core of logical cores. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
+ return lcore_filter(
+ self.lcores,
+ filter_specifier,
+ ascending,
+ ).filter()
+
+ def _get_remote_cpus(self) -> None:
+ """
+ Scan CPUs in the remote OS and store a list of LogicalCores.
+ """
+ self._logger.info("Getting CPU information.")
+ self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index c97d995b31..ea0b96d6bf 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -4,13 +4,16 @@
import os
import tarfile
+import time
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.remote_session import OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, skip_setup
-from .dpdk import MesonArgs
+from .dpdk import EalParameters, MesonArgs
+from .hw import LogicalCoreAmount, LogicalCoreList, VirtualDevice
from .node import Node
@@ -22,21 +25,29 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ _dpdk_prefix_list: list[str]
+ _dpdk_timestamp: str
_build_target_config: BuildTargetConfiguration | None
_env_vars: EnvVarsDict
_remote_tmp_dir: PurePath
__remote_dpdk_dir: PurePath | None
_dpdk_version: str | None
_app_compile_timeout: float
+ _dpdk_kill_session: OSSession | None
def __init__(self, node_config: NodeConfiguration):
super(SutNode, self).__init__(node_config)
+ self._dpdk_prefix_list = []
self._build_target_config = None
self._env_vars = EnvVarsDict()
self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
self.__remote_dpdk_dir = None
self._dpdk_version = None
self._app_compile_timeout = 90
+ self._dpdk_kill_session = None
+ self._dpdk_timestamp = (
+ f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
+ )
@property
def _remote_dpdk_dir(self) -> PurePath:
@@ -169,3 +180,72 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
return self.main_session.join_remote_path(
self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
)
+
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """
+ Kill all DPDK applications on the SUT and clean up leftover hugepages.
+ """
+ if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._dpdk_kill_session = self.create_session("dpdk_kill")
+ self._dpdk_prefix_list = []
+
+ def create_eal_parameters(
+ self,
+ lcore_filter_specifier: LogicalCoreAmount
+ | LogicalCoreList = LogicalCoreAmount(),
+ ascending_cores: bool = True,
+ prefix: str = "dpdk",
+ append_prefix_timestamp: bool = True,
+ no_pci: bool = False,
+ vdevs: list[VirtualDevice] | None = None,
+ other_eal_param: str = "",
+ ) -> EalParameters:
+ """
+ Generate eal parameters character string;
+ :param lcore_filter_specifier: an amount of lcores/cores/sockets to use
+ or a list of lcore ids to use.
+ The default will select one lcore for each of two cores
+ on one socket, in ascending order of core ids.
+ :param ascending_cores: True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the
+ highest id and continue in descending order. This ordering
+ affects which sockets to consider first as well.
+ :param prefix: set file prefix string, eg:
+ prefix='vf'
+ :param append_prefix_timestamp: if True, will append a timestamp to
+ DPDK file prefix.
+ :param no_pci: switch of disable PCI bus eg:
+ no_pci=True
+ :param vdevs: virtual device list, eg:
+ vdevs=['net_ring0', 'net_ring1']
+ :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments'
+ :return: eal param string, eg:
+ '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+ """
+
+ lcore_list = LogicalCoreList(
+ self.filter_lcores(lcore_filter_specifier, ascending_cores)
+ )
+
+ if append_prefix_timestamp:
+ prefix = f"{prefix}_{self._dpdk_timestamp}"
+ prefix = self.main_session.get_dpdk_file_prefix(prefix)
+ if prefix:
+ self._dpdk_prefix_list.append(prefix)
+
+ if vdevs is None:
+ vdevs = []
+
+ return EalParameters(
+ lcore_list=lcore_list,
+ memory_channels=self.config.memory_channels,
+ prefix=prefix,
+ no_pci=no_pci,
+ vdevs=vdevs,
+ other_eal_param=other_eal_param,
+ )
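A sketch of the intended call site (hypothetical test-suite code; the rendered string is an example):

    # hypothetical usage, not part of the patch
    eal = sut_node.create_eal_parameters(
        lcore_filter_specifier=LogicalCoreAmount(cores_per_socket=2),
        no_pci=True,
    )
    # str(eal) then looks like:
    # "-l 1-2 -n 4 --file-prefix=dpdk_1112_20190809143420 --no-pci"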
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 611071604b..eebe76f16c 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,6 +32,26 @@ def skip_setup(func) -> Callable[..., None]:
return func
+def expand_range(range_str: str) -> list[int]:
+ """
+ Process range string into a list of integers. There are two possible formats:
+ n - a single integer
+ n-m - a range of integers
+
+ The returned range includes both n and m. Empty string returns an empty list.
+ """
+ expanded_range: list[int] = []
+ if range_str:
+ range_boundaries = range_str.split("-")
+ # will throw an exception when items in range_boundaries can't be converted,
+ # serving as type check
+ expanded_range.extend(
+ range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+ )
+
+ return expanded_range
+
+
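And a quick sketch of expand_range behavior (illustrative):

    # illustrative sketch, not part of the patch
    expand_range("3")    # [3]
    expand_range("1-3")  # [1, 2, 3]
    expand_range("")     # []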
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 05/10] dts: add node memory setup
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (3 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 04/10] dts: add dpdk execution handling Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 06/10] dts: add test suite module Juraj Linkeš
` (6 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
Setup hugepages on nodes. This is useful not only on SUT nodes, but
also on TG nodes which use TGs that utilize hugepages.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/config/__init__.py | 16 ++++
dts/framework/config/arch.py | 57 +++++++++++++
dts/framework/remote_session/linux_session.py | 85 +++++++++++++++++++
dts/framework/remote_session/os_session.py | 10 +++
dts/framework/testbed_model/node.py | 15 ++++
5 files changed, 183 insertions(+)
create mode 100644 dts/framework/config/arch.py
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 17b917f3b3..ce6e709c6f 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -19,6 +19,8 @@
from framework.settings import SETTINGS
+from .arch import PPC64, Arch, Arm64, i686, x86_32, x86_64
+
class StrEnum(Enum):
@staticmethod
@@ -176,3 +178,17 @@ def load_config() -> Configuration:
CONFIGURATION = load_config()
+
+
+def create_arch(node_config: NodeConfiguration) -> Arch:
+ match node_config.arch:
+ case Architecture.x86_64:
+ return x86_64()
+ case Architecture.x86_32:
+ return x86_32()
+ case Architecture.i686:
+ return i686()
+ case Architecture.ppc64le:
+ return PPC64()
+ case Architecture.arm64:
+ return Arm64()
diff --git a/dts/framework/config/arch.py b/dts/framework/config/arch.py
new file mode 100644
index 0000000000..a226b9a6a9
--- /dev/null
+++ b/dts/framework/config/arch.py
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+
+class Arch(object):
+ """
+ Stores architecture-specific information.
+ """
+
+ @property
+ def default_hugepage_memory(self) -> int:
+ """
+ Return the default amount of hugepage memory (in kB) DPDK will use.
+ The default equals 256 2MB hugepages (512MB of memory).
+ """
+ return 256 * 2048
+
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ """
+ An architecture may need to force configuration of hugepages to first socket.
+ """
+ return False
+
+
+class x86_64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 4096 * 2048
+
+
+class x86_32(Arch):
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ return True
+
+
+class i686(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 512 * 2048
+
+ @property
+ def hugepage_force_first_numa(self) -> bool:
+ return True
+
+
+class PPC64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 512 * 2048
+
+
+class Arm64(Arch):
+ @property
+ def default_hugepage_memory(self) -> int:
+ return 2048 * 2048
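For reference, a sketch of how these classes are consumed via create_arch later in this series (illustrative):

    # illustrative sketch, not part of the patch
    arch = create_arch(node_config)  # e.g. x86_64() for "arch: x86_64"
    arch.default_hugepage_memory     # 8388608 (kB), i.e. 4096 * 2048
    arch.hugepage_force_first_numa   # False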
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 6809102038..4dc52132d3 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,7 +2,9 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.utils import expand_range
from .posix_session import PosixSession
@@ -27,3 +29,86 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
return dpdk_prefix
+
+ def setup_hugepages(
+ self, hugepage_amount: int, force_first_numa: bool = False
+ ) -> None:
+ self._logger.info("Getting Hugepage information.")
+ hugepage_size = self._get_hugepage_size()
+ hugepages_total = self._get_hugepages_total()
+ self._numa_nodes = self._get_numa_nodes()
+
+ target_hugepages_total = hugepage_amount // hugepage_size
+ if hugepage_amount % hugepage_size:
+ target_hugepages_total += 1
+ if force_first_numa or hugepages_total != target_hugepages_total:
+ # when forcing numa, we need to clear existing hugepages regardless
+ # of size, so they can be moved to the first numa node
+ self._configure_huge_pages(
+ target_hugepages_total, hugepage_size, force_first_numa
+ )
+ else:
+ self._logger.info("Hugepages already configured.")
+ self._mount_huge_pages()
+
+ def _get_hugepage_size(self) -> int:
+ hugepage_size = self.remote_session.send_command(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
+ ).stdout
+ return int(hugepage_size)
+
+ def _get_hugepages_total(self) -> int:
+ hugepages_total = self.remote_session.send_command(
+ "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ ).stdout
+ return int(hugepages_total)
+
+ def _get_numa_nodes(self) -> list[int]:
+ try:
+ numa_count = self.remote_session.send_command(
+ "cat /sys/devices/system/node/online", verify=True
+ ).stdout
+ numa_range = expand_range(numa_count)
+ except RemoteCommandExecutionError:
+ # the file doesn't exist, meaning the node doesn't support numa
+ numa_range = []
+ return numa_range
+
+ def _mount_huge_pages(self) -> None:
+ self._logger.info("Re-mounting Hugepages.")
+ hugepage_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts"
+ self.remote_session.send_command(f"umount $({hugepage_fs_cmd})")
+ result = self.remote_session.send_command(hugepage_fs_cmd)
+ if result.stdout == "":
+ remote_mount_path = "/mnt/huge"
+ self.remote_session.send_command(f"mkdir -p {remote_mount_path}")
+ self.remote_session.send_command(
+ f"mount -t hugetlbfs nodev {remote_mount_path}"
+ )
+
+ def _supports_numa(self) -> bool:
+ # consider the system numa-capable only if there is more than one numa node;
+ # a single-node system may technically support numa, but there's no reason
+ # to do any numa-specific configuration on it
+ return len(self._numa_nodes) > 1
+
+ def _configure_huge_pages(
+ self, amount: int, size: int, force_first_numa: bool
+ ) -> None:
+ self._logger.info("Configuring Hugepages.")
+ hugepage_config_path = (
+ f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+ )
+ if force_first_numa and self._supports_numa():
+ # clear non-numa hugepages
+ self.remote_session.send_command(
+ f"echo 0 | sudo tee {hugepage_config_path}"
+ )
+ hugepage_config_path = (
+ f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
+ f"/hugepages-{size}kB/nr_hugepages"
+ )
+
+ self.remote_session.send_command(
+ f"echo {amount} | sudo tee {hugepage_config_path}"
+ )
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index c30753e0b8..966b7f76d5 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -151,3 +151,13 @@ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
"""
Get the DPDK file prefix that will be used when running DPDK apps.
"""
+
+ @abstractmethod
+ def setup_hugepages(
+ self, hugepage_amount: int, force_first_numa: bool = False
+ ) -> None:
+ """
+ Get the node's hugepage size, configure the specified amount of hugepages
+ if needed, and mount the hugepages if needed.
+ If force_first_numa is True, configure hugepages only on the first socket.
+ """
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index cf2af2ca72..d22bf3b7d2 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -8,9 +8,11 @@
"""
from framework.config import (
+ Arch,
BuildTargetConfiguration,
ExecutionConfiguration,
NodeConfiguration,
+ create_arch,
)
from framework.logger import DTSLOG, getLogger
from framework.remote_session import OSSession, create_session
@@ -37,6 +39,7 @@ class Node(object):
lcores: list[LogicalCore]
_logger: DTSLOG
_other_sessions: list[OSSession]
+ _arch: Arch
def __init__(self, node_config: NodeConfiguration):
self.config = node_config
@@ -51,6 +54,7 @@ def __init__(self, node_config: NodeConfiguration):
).filter()
self._other_sessions = []
+ self._arch = create_arch(self.config)
self._logger.info(f"Created node: {self.name}")
@@ -59,6 +63,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
Perform the execution setup that will be done for each execution
this node is part of.
"""
+ self._setup_hugepages()
self._set_up_execution(execution_config)
def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
@@ -153,6 +158,16 @@ def _get_remote_cpus(self) -> None:
self._logger.info("Getting CPU information.")
self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+ def _setup_hugepages(self):
+ """
+ Set up hugepages on the Node. Different architectures may supply different
+ default amounts of hugepage memory, and numa-based hugepage allocation may
+ need to be considered.
+ """
+ self.main_session.setup_hugepages(
+ self._arch.default_hugepage_memory, self._arch.hugepage_force_first_numa
+ )
+
def close(self) -> None:
"""
Close all connections and free other resources.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 06/10] dts: add test suite module
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (4 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 05/10] dts: add node memory setup Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 07/10] dts: add hello world testplan Juraj Linkeš
` (5 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The module implements the base class that all test suites inherit from.
It provides methods common to all test suites.
The derived test suites implement test cases and any particular setup
needed for the suite or tests.
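For illustration, a minimal derived suite might look like the sketch
below (TestGreeting and its contents are made up for this example and
are not part of the patch):

    from framework.test_suite import TestSuite

    class TestGreeting(TestSuite):
        def set_up_suite(self) -> None:
            # suite-level fixture, run once before any test case
            self.greeting = "hello"

        def test_greeting_not_empty(self) -> None:
            # collected as a functional test case because the name
            # starts with test_ but not with test_perf_
            self.verify(len(self.greeting) > 0, "greeting is empty")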
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 +
dts/framework/config/__init__.py | 4 +
dts/framework/config/conf_yaml_schema.json | 10 +
dts/framework/exception.py | 16 ++
dts/framework/settings.py | 24 +++
dts/framework/test_suite.py | 228 +++++++++++++++++++++
6 files changed, 284 insertions(+)
create mode 100644 dts/framework/test_suite.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1648e5c3c5..2111d537cf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -8,6 +8,8 @@ executions:
cpu: native
compiler: gcc
compiler_wrapper: ccache
+ perf: false
+ func: true
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ce6e709c6f..ce3f20f6a9 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -119,6 +119,8 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
+ perf: bool
+ func: bool
system_under_test: NodeConfiguration
@staticmethod
@@ -131,6 +133,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
return ExecutionConfiguration(
build_targets=build_targets,
+ perf=d["perf"],
+ func=d["func"],
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 81f304da5e..abf15ebea8 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -142,6 +142,14 @@
},
"minimum": 1
},
+ "perf": {
+ "type": "boolean",
+ "description": "Enable performance testing."
+ },
+ "func": {
+ "type": "boolean",
+ "description": "Enable functional testing."
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -149,6 +157,8 @@
"additionalProperties": false,
"required": [
"build_targets",
+ "perf",
+ "func",
"system_under_test"
]
},
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b4545a5a40..ca353d98fc 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -24,6 +24,7 @@ class ErrorSeverity(IntEnum):
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
DPDK_BUILD_ERR = 10
+ TESTCASE_VERIFY_ERR = 20
class DTSError(Exception):
@@ -128,3 +129,18 @@ class DPDKBuildError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
+
+
+class TestCaseVerifyError(DTSError):
+ """
+ Used in test cases to verify the expected behavior.
+ """
+
+ value: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
+
+ def __init__(self, value: str):
+ self.value = value
+
+ def __str__(self) -> str:
+ return repr(self.value)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index a298b1eaac..5762bd2bee 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -66,6 +66,8 @@ class _Settings:
skip_setup: bool
dpdk_ref: Path | str
compile_timeout: float
+ test_cases: list
+ re_run: int
def _get_parser() -> argparse.ArgumentParser:
@@ -139,6 +141,26 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
)
+ parser.add_argument(
+ "--test-cases",
+ action=_env_arg("DTS_TESTCASES"),
+ default="",
+ required=False,
+ help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+ "Unknown test cases will be silently ignored.",
+ )
+
+ parser.add_argument(
+ "--re-run",
+ "--re_run",
+ action=_env_arg("DTS_RERUN"),
+ default=0,
+ type=int,
+ required=False,
+ help="[DTS_RERUN] Re-run each test case the specified number of times "
+ "if a test failure occurs.",
+ )
+
return parser
@@ -162,6 +184,8 @@ def _get_settings() -> _Settings:
skip_setup=(parsed_args.skip_setup == "Y"),
dpdk_ref=parsed_args.dpdk_ref,
compile_timeout=parsed_args.compile_timeout,
+ test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+ re_run=parsed_args.re_run,
)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
new file mode 100644
index 0000000000..0972a70c14
--- /dev/null
+++ b/dts/framework/test_suite.py
@@ -0,0 +1,228 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Base class for creating DTS test cases.
+"""
+
+import inspect
+import re
+from collections.abc import MutableSequence
+from types import MethodType
+
+from .exception import SSHTimeoutError, TestCaseVerifyError
+from .logger import DTSLOG, getLogger
+from .settings import SETTINGS
+from .testbed_model import SutNode
+
+
+class TestSuite(object):
+ """
+ The base TestSuite class provides methods for handling the basic flow of a test suite:
+ * test case filtering and collection
+ * test suite setup/cleanup
+ * test setup/cleanup
+ * test case execution
+ * error handling and results storage
+ Test cases are implemented by derived classes. Test cases are all methods
+ starting with test_, further divided into performance test cases
+ (starting with test_perf_) and functional test cases (all other test cases).
+ By default, all test cases will be executed. A list of test case names
+ may be specified in conf.yaml or on the command line
+ to filter which test cases to run.
+ The methods named [set_up|tear_down]_[suite|test_case] should be overridden
+ in derived classes if the appropriate suite/test case fixtures are needed.
+ """
+
+ sut_node: SutNode
+ _logger: DTSLOG
+ _test_cases_to_run: list[str]
+ _func: bool
+ _errors: MutableSequence[Exception]
+
+ def __init__(
+ self,
+ sut_node: SutNode,
+ test_cases: list[str],
+ func: bool,
+ errors: MutableSequence[Exception],
+ ):
+ self.sut_node = sut_node
+ self._logger = getLogger(self.__class__.__name__)
+ self._test_cases_to_run = test_cases
+ self._test_cases_to_run.extend(SETTINGS.test_cases)
+ self._func = func
+ self._errors = errors
+
+ def set_up_suite(self) -> None:
+ """
+ Set up test fixtures common to all test cases; this is done before
+ any test case is run.
+ """
+
+ def tear_down_suite(self) -> None:
+ """
+ Tear down the previously created test fixtures common to all test cases.
+ """
+
+ def set_up_test_case(self) -> None:
+ """
+ Set up test fixtures before each test case.
+ """
+
+ def tear_down_test_case(self) -> None:
+ """
+ Tear down the previously created test fixtures after each test case.
+ """
+
+ def verify(self, condition: bool, failure_description: str) -> None:
+ if not condition:
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on SUT:"
+ )
+ for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ raise TestCaseVerifyError(failure_description)
+
+ def run(self) -> None:
+ """
+ Setup, execute and teardown the whole suite.
+ Suite execution consists of running all test cases scheduled to be executed.
+ A test case run consists of the setup, execution and teardown of said test case.
+ """
+ test_suite_name = self.__class__.__name__
+
+ try:
+ self._logger.info(f"Starting test suite setup: {test_suite_name}")
+ self.set_up_suite()
+ self._logger.info(f"Test suite setup successful: {test_suite_name}")
+ except Exception as e:
+ self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
+ self._errors.append(e)
+
+ else:
+ self._execute_test_suite()
+
+ finally:
+ try:
+ self.tear_down_suite()
+ self.sut_node.kill_cleanup_dpdk_apps()
+ except Exception as e:
+ self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
+ self._logger.warning(
+ f"Test suite '{test_suite_name}' teardown failed, "
+ f"the next test suite may be affected."
+ )
+ self._errors.append(e)
+
+ def _execute_test_suite(self) -> None:
+ """
+ Execute all test cases scheduled to be executed in this suite.
+ """
+ if self._func:
+ for test_case_method in self._get_functional_test_cases():
+ all_attempts = SETTINGS.re_run + 1
+ attempt_nr = 1
+ while (
+ not self._run_test_case(test_case_method)
+ and attempt_nr < all_attempts
+ ):
+ attempt_nr += 1
+ self._logger.info(
+ f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Attempt number {attempt_nr} out of {all_attempts}."
+ )
+
+ def _get_functional_test_cases(self) -> list[MethodType]:
+ """
+ Get all functional test cases.
+ """
+ return self._get_test_cases(r"test_(?!perf_)")
+
+ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
+ """
+ Return a list of test cases matching test_case_regex.
+ """
+ self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
+ filtered_test_cases = []
+ for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod):
+ if self._should_be_executed(test_case_name, test_case_regex):
+ filtered_test_cases.append(test_case)
+ cases_str = ", ".join((x.__name__ for x in filtered_test_cases))
+ self._logger.debug(
+ f"Found test cases '{cases_str}' in {self.__class__.__name__}."
+ )
+ return filtered_test_cases
+
+ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
+ """
+ Check whether the test case should be executed.
+ """
+ match = bool(re.match(test_case_regex, test_case_name))
+ if self._test_cases_to_run:
+ return match and test_case_name in self._test_cases_to_run
+
+ return match
+
+ def _run_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Setup, execute and teardown a test case in this suite.
+ Exceptions are caught and recorded in logs.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+
+ try:
+ # run set_up function for each case
+ self.set_up_test_case()
+ except SSHTimeoutError as e:
+ self._logger.exception(f"Test case setup FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case setup ERROR: {test_case_name}")
+ self._errors.append(e)
+
+ else:
+ # run test case if setup was successful
+ result = self._execute_test_case(test_case_method)
+
+ finally:
+ try:
+ self.tear_down_test_case()
+ except Exception as e:
+ self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
+ self._logger.warning(
+ f"Test case '{test_case_name}' teardown failed, "
+ f"the next test case may be affected."
+ )
+ self._errors.append(e)
+ result = False
+
+ return result
+
+ def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Execute one test case and handle failures.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+ try:
+ self._logger.info(f"Starting test case execution: {test_case_name}")
+ test_case_method()
+ result = True
+ self._logger.info(f"Test case execution PASSED: {test_case_name}")
+
+ except TestCaseVerifyError as e:
+ self._logger.exception(f"Test case execution FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case execution ERROR: {test_case_name}")
+ self._errors.append(e)
+ except KeyboardInterrupt:
+ self._logger.error(
+ f"Test case execution INTERRUPTED by user: {test_case_name}"
+ )
+ raise KeyboardInterrupt("Stop DTS")
+
+ return result
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 07/10] dts: add hello world testplan
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (5 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 06/10] dts: add test suite module Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 08/10] dts: add hello world testsuite Juraj Linkeš
` (4 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The test plan describes the capabilities of the tested application along
with descriptions of the test cases used to test it.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/test_plans/hello_world_test_plan.rst | 68 ++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 dts/test_plans/hello_world_test_plan.rst
diff --git a/dts/test_plans/hello_world_test_plan.rst b/dts/test_plans/hello_world_test_plan.rst
new file mode 100644
index 0000000000..566a9bb10c
--- /dev/null
+++ b/dts/test_plans/hello_world_test_plan.rst
@@ -0,0 +1,68 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+=============================================
+Sample Application Tests: Hello World Example
+=============================================
+
+This example is one of the simplest RTE applications that can be
+written. The program just prints a "helloworld" message on every
+enabled lcore.
+
+Command Usage::
+
+ ./dpdk-helloworld -c COREMASK [-m MB] [-r NUM] [-n NUM]
+
+ EAL option list:
+ -c COREMASK: hexadecimal bitmask of cores we are running on
+ -m MB : memory to allocate (default = size of hugemem)
+ -n NUM : force number of memory channels (don't detect)
+ -r NUM : force number of memory ranks (don't detect)
+ --huge-file: base filename for hugetlbfs entries
+ debug options:
+ --no-huge : use malloc instead of hugetlbfs
+ --no-pci : disable pci
+ --no-hpet : disable hpet
+ --no-shconf: no shared config (mmap'd files)
+
+
+Prerequisites
+=============
+
+The igb_uio and vfio drivers are supported. If vfio is used, the kernel must be 3.6+ and VT-d must be enabled in the BIOS.
+When using vfio, load the driver with "modprobe vfio" and "modprobe vfio-pci", then use
+"./tools/dpdk_nic_bind.py --bind=vfio-pci device_bus_id" to bind the device under test to the vfio driver.
+
+To find out the mapping of lcores (processor) to core id and socket (physical
+id), the command below can be used::
+
+ $ grep "processor\|physical id\|core id\|^$" /proc/cpuinfo
+
+The total number of logical cores will be used as the ``helloworld`` input parameter.
+
+
+Test Case: run hello world on single lcores
+===========================================
+
+To run the example on a single lcore::
+
+ $ ./dpdk-helloworld -c 1
+ hello from core 0
+
+Check that the output comes from lcore 0 only.
+
+
+Test Case: run hello world on every lcores
+==========================================
+
+To run the example on all enabled lcores::
+
+ $ ./dpdk-helloworld -cffffff
+ hello from core 1
+ hello from core 2
+ hello from core 3
+ ...
+ ...
+ hello from core 0
+
+Verify that the output contains a message from every core in the core mask.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 08/10] dts: add hello world testsuite
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (6 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 07/10] dts: add hello world testplan Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 09/10] dts: add test suite config and runner Juraj Linkeš
` (3 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The test suite implements test cases defined in the corresponding test
plan.
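Roughly, a test case in the suite drives DPDK through the SUT node as in
the sketch below (app_path and sut_node are assumed to already exist;
this is illustrative, not code from the patch):

    from framework.testbed_model import LogicalCoreAmount

    # build EAL arguments restricted to a single lcore and run the app
    eal = sut_node.create_eal_parameters(
        lcore_filter_specifier=LogicalCoreAmount(1, 1, 1)
    )
    result = sut_node.run_dpdk_app(app_path, eal, timeout=30)
    # result.stdout now holds the app output for verification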
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 2 +-
dts/framework/remote_session/os_session.py | 16 ++++-
.../remote_session/remote/__init__.py | 2 +-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/sut_node.py | 12 +++-
dts/tests/TestSuite_hello_world.py | 59 +++++++++++++++++++
6 files changed, 88 insertions(+), 4 deletions(-)
create mode 100644 dts/tests/TestSuite_hello_world.py
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 747316c78a..ee221503df 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,7 @@
from .linux_session import LinuxSession
from .os_session import OSSession
-from .remote import RemoteSession, SSHSession
+from .remote import CommandResult, RemoteSession, SSHSession
def create_session(
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 966b7f76d5..a869c7acde 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,7 +12,7 @@
from framework.testbed_model import LogicalCore, MesonArgs
from framework.utils import EnvVarsDict
-from .remote import RemoteSession, create_remote_session
+from .remote import CommandResult, RemoteSession, create_remote_session
class OSSession(ABC):
@@ -50,6 +50,20 @@ def is_alive(self) -> bool:
"""
return self.remote_session.is_alive()
+ def send_command(
+ self,
+ command: str,
+ timeout: float,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
+ ) -> CommandResult:
+ """
+ An all-purpose API in case the command to be executed is already
+ OS-agnostic, such as when the path to the executed command has been
+ constructed beforehand.
+ """
+ return self.remote_session.send_command(command, timeout, verify, env)
+
@abstractmethod
def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
"""
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index f3092f8bbe..8a1512210a 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -6,7 +6,7 @@
from framework.config import NodeConfiguration
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
from .ssh_session import SSHSession
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 2be5169dc8..efb463f2e2 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -13,6 +13,7 @@
from .hw import (
LogicalCore,
LogicalCoreAmount,
+ LogicalCoreAmountFilter,
LogicalCoreList,
LogicalCoreListFilter,
VirtualDevice,
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index ea0b96d6bf..6a0472a733 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -8,7 +8,7 @@
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
-from framework.remote_session import OSSession
+from framework.remote_session import CommandResult, OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, skip_setup
@@ -249,3 +249,13 @@ def create_eal_parameters(
vdevs=vdevs,
other_eal_param=other_eal_param,
)
+
+ def run_dpdk_app(
+ self, app_path: PurePath, eal_args: EalParameters, timeout: float = 30
+ ) -> CommandResult:
+ """
+ Run a DPDK application on the remote node.
+ """
+ return self.main_session.send_command(
+ f"{app_path} {eal_args}", timeout, verify=True
+ )
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..5ade941d31
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+
+"""
+DPDK Test suite.
+Test HelloWorld example.
+"""
+
+from framework.test_suite import TestSuite
+from framework.testbed_model import (
+ LogicalCoreAmount,
+ LogicalCoreAmountFilter,
+ LogicalCoreList,
+)
+
+
+class TestHelloWorld(TestSuite):
+ def set_up_suite(self) -> None:
+ """
+ Run at the start of the test suite.
+ hello_world prerequisite:
+ the helloworld app builds successfully
+ """
+ self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
+
+ def test_hello_world_single_core(self) -> None:
+ """
+ Run hello world on a single lcore.
+ Verify that the hello message is received from that lcore only.
+ """
+
+ # select a single lcore (the first one) to run on
+ lcore_amount = LogicalCoreAmount(1, 1, 1)
+ lcores = LogicalCoreAmountFilter(self.sut_node.lcores, lcore_amount).filter()
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=lcore_amount
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
+ self.verify(
+ f"hello from core {int(lcores[0])}" in result.stdout,
+ f"EAL not started on lcore{lcores[0]}",
+ )
+
+ def test_hello_world_all_cores(self) -> None:
+ """
+ Run hello world on all lcores.
+ Verify that the hello message is received from every lcore.
+ """
+
+ # use all available lcores
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para, 50)
+ for lcore in self.sut_node.lcores:
+ self.verify(
+ f"hello from core {int(lcore)}" in result.stdout,
+ f"EAL not started on lcore{lcore}",
+ )
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 09/10] dts: add test suite config and runner
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (7 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 08/10] dts: add hello world testsuite Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 10/10] dts: add test results module Juraj Linkeš
` (2 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The config allows users to specify which test suites and test cases
within test suites to run.
Also add test suite running capabilities to the DTS runner.
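For illustration, a test_suites entry may be given either as a plain
suite name or as a mapping with a subset of cases; a short sketch of how
the new TestSuiteConfig handles both forms (the entries are illustrative):

    from framework.config import TestSuiteConfig

    # a bare string selects the whole suite
    whole_suite = TestSuiteConfig.from_dict("hello_world")
    assert whole_suite.test_cases == []

    # a mapping restricts the run to the listed test cases
    subset = TestSuiteConfig.from_dict(
        {"suite": "hello_world", "cases": ["test_hello_world_single_core"]}
    )
    assert subset.test_cases == ["test_hello_world_single_core"]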
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 ++
dts/framework/config/__init__.py | 29 +++++++++++++++-
dts/framework/config/conf_yaml_schema.json | 40 ++++++++++++++++++++++
dts/framework/dts.py | 19 ++++++++++
dts/framework/test_suite.py | 24 ++++++++++++-
5 files changed, 112 insertions(+), 2 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 2111d537cf..2c6ec84282 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -10,6 +10,8 @@ executions:
compiler_wrapper: ccache
perf: false
func: true
+ test_suites:
+ - hello_world
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ce3f20f6a9..058fbf58db 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any
+from typing import Any, TypedDict
import warlock # type: ignore
import yaml
@@ -116,11 +116,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
)
+class TestSuiteConfigDict(TypedDict):
+ suite: str
+ cases: list[str]
+
+
+@dataclass(slots=True, frozen=True)
+class TestSuiteConfig:
+ test_suite: str
+ test_cases: list[str]
+
+ @staticmethod
+ def from_dict(
+ entry: str | TestSuiteConfigDict,
+ ) -> "TestSuiteConfig":
+ if isinstance(entry, str):
+ return TestSuiteConfig(test_suite=entry, test_cases=[])
+ elif isinstance(entry, dict):
+ return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
+ else:
+ raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
perf: bool
func: bool
+ test_suites: list[TestSuiteConfig]
system_under_test: NodeConfiguration
@staticmethod
@@ -128,6 +151,9 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
+ test_suites: list[TestSuiteConfig] = list(
+ map(TestSuiteConfig.from_dict, d["test_suites"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
@@ -135,6 +161,7 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets=build_targets,
perf=d["perf"],
func=d["func"],
+ test_suites=test_suites,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index abf15ebea8..c4a9e75251 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -75,6 +75,32 @@
"cpu",
"compiler"
]
+ },
+ "test_suite": {
+ "type": "string",
+ "enum": [
+ "hello_world"
+ ]
+ },
+ "test_target": {
+ "type": "object",
+ "properties": {
+ "suite": {
+ "$ref": "#/definitions/test_suite"
+ },
+ "cases": {
+ "type": "array",
+ "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.",
+ "items": {
+ "type": "string"
+ },
+ "minimum": 1
+ }
+ },
+ "required": [
+ "suite"
+ ],
+ "additionalProperties": false
}
},
"type": "object",
@@ -150,6 +176,19 @@
"type": "boolean",
"description": "Enable functional testing."
},
+ "test_suites": {
+ "type": "array",
+ "items": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/test_suite"
+ },
+ {
+ "$ref": "#/definitions/test_target"
+ }
+ ]
+ }
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -159,6 +198,7 @@
"build_targets",
"perf",
"func",
+ "test_suites",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 6ea7c6e736..f98000450f 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -8,6 +8,7 @@
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
@@ -132,6 +133,24 @@ def _run_suites(
with possibly only a subset of test cases.
If no subset is specified, run all test cases.
"""
+ for test_suite_config in execution.test_suites:
+ try:
+ full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
+ test_suite_classes = get_test_suites(full_suite_path)
+ suites_str = ", ".join((x.__name__ for x in test_suite_classes))
+ dts_logger.debug(
+ f"Found test suites '{suites_str}' in '{full_suite_path}'."
+ )
+ except Exception as e:
+ dts_logger.exception("An error occurred when searching for test suites.")
+ errors.append(e)
+
+ else:
+ for test_suite_class in test_suite_classes:
+ test_suite = test_suite_class(
+ sut_node, test_suite_config.test_cases, execution.func, errors
+ )
+ test_suite.run()
def _exit_dts() -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 0972a70c14..0cbedee478 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -6,12 +6,13 @@
Base class for creating DTS test cases.
"""
+import importlib
import inspect
import re
from collections.abc import MutableSequence
from types import MethodType
-from .exception import SSHTimeoutError, TestCaseVerifyError
+from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .testbed_model import SutNode
@@ -226,3 +227,24 @@ def _execute_test_case(self, test_case_method: MethodType) -> bool:
raise KeyboardInterrupt("Stop DTS")
return result
+
+
+def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+ def is_test_suite(object) -> bool:
+ try:
+ if issubclass(object, TestSuite) and object is not TestSuite:
+ return True
+ except TypeError:
+ return False
+ return False
+
+ try:
+ testcase_module = importlib.import_module(testsuite_module_path)
+ except ModuleNotFoundError as e:
+ raise ConfigurationError(
+ f"Testsuite '{testsuite_module_path}' not found."
+ ) from e
+ return [
+ test_suite_class
+ for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite)
+ ]
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v3 10/10] dts: add test results module
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (8 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 09/10] dts: add test suite config and runner Juraj Linkeš
@ 2023-01-17 15:49 ` Juraj Linkeš
2023-01-19 16:16 ` [PATCH v3 00/10] dts: add hello world testcase Owen Hilyard
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-01-17 15:49 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu, bruce.richardson
Cc: dev, Juraj Linkeš
The module stores the results and errors from all executions, build
targets, test suites and test cases.
Each result consists of the results of the setup and the teardown of the
corresponding testing stage (listed above) and the results of the inner
stages. The innermost stage is the test case, which also contains the
result of the test case itself.
The module also produces a brief overview of the results and the
number of executed tests.
It also finds the proper return code to exit with from among the stored
errors.
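As a rough sketch of how the nesting fits together (hypothetical driver
code; logger, sut_config and bt_config are assumed to exist, and the
real wiring lives in dts.py below):

    from framework.test_result import DTSResult, Result

    result = DTSResult(logger)                    # one per DTS run
    execution = result.add_execution(sut_config)  # a NodeConfiguration
    build_target = execution.add_build_target(bt_config)
    suite = build_target.add_test_suite("TestHelloWorld")
    case = suite.add_test_case("test_hello_world_single_core")
    case.update_setup(Result.PASS)
    case.update(Result.PASS)      # the result of the test case itself
    case.update_teardown(Result.PASS)
    result.process()              # gather errors, write statistics.txt
    exit_code = result.get_return_code()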
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 64 +++----
dts/framework/test_result.py | 316 +++++++++++++++++++++++++++++++++++
dts/framework/test_suite.py | 60 +++----
3 files changed, 382 insertions(+), 58 deletions(-)
create mode 100644 dts/framework/test_result.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f98000450f..117b7cae83 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,14 +6,14 @@
import sys
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
-from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("dts_runner")
-errors = []
+result: DTSResult = DTSResult(dts_logger)
def run_all() -> None:
@@ -22,7 +22,7 @@ def run_all() -> None:
config file.
"""
global dts_logger
- global errors
+ global result
# check the python version of the server that run dts
check_dts_python_version()
@@ -39,29 +39,31 @@ def run_all() -> None:
# the SUT has not been initialized yet
try:
sut_node = SutNode(execution.system_under_test)
+ result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception(
f"Connection to node {execution.system_under_test} failed."
)
- errors.append(e)
+ result.update_setup(Result.FAIL, e)
else:
nodes[sut_node.name] = sut_node
if sut_node:
- _run_execution(sut_node, execution)
+ _run_execution(sut_node, execution, result)
except Exception as e:
dts_logger.exception("An unexpected error has occurred.")
- errors.append(e)
+ result.add_error(e)
raise
finally:
try:
for node in nodes.values():
node.close()
+ result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Final cleanup of nodes failed.")
- errors.append(e)
+ result.update_teardown(Result.ERROR, e)
# we need to put the sys.exit call outside the finally clause to make sure
# that unexpected exceptions will propagate
@@ -72,61 +74,72 @@ def run_all() -> None:
_exit_dts()
-def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+def _run_execution(
+ sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+) -> None:
"""
Run the given execution. This involves running the execution setup as well as
running all build targets in the given execution.
"""
dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ execution_result = result.add_execution(sut_node.config)
try:
sut_node.set_up_execution(execution)
+ execution_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Execution setup failed.")
- errors.append(e)
+ execution_result.update_setup(Result.FAIL, e)
else:
for build_target in execution.build_targets:
- _run_build_target(sut_node, build_target, execution)
+ _run_build_target(sut_node, build_target, execution, execution_result)
finally:
try:
sut_node.tear_down_execution()
+ execution_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Execution teardown failed.")
- errors.append(e)
+ execution_result.update_teardown(Result.FAIL, e)
def _run_build_target(
sut_node: SutNode,
build_target: BuildTargetConfiguration,
execution: ExecutionConfiguration,
+ execution_result: ExecutionResult,
) -> None:
"""
Run the given build target.
"""
dts_logger.info(f"Running build target '{build_target.name}'.")
+ build_target_result = execution_result.add_build_target(build_target)
try:
sut_node.set_up_build_target(build_target)
+ result.dpdk_version = sut_node.dpdk_version
+ build_target_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Build target setup failed.")
- errors.append(e)
+ build_target_result.update_setup(Result.FAIL, e)
else:
- _run_suites(sut_node, execution)
+ _run_suites(sut_node, execution, build_target_result)
finally:
try:
sut_node.tear_down_build_target()
+ build_target_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Build target teardown failed.")
- errors.append(e)
+ build_target_result.update_teardown(Result.FAIL, e)
def _run_suites(
sut_node: SutNode,
execution: ExecutionConfiguration,
+ build_target_result: BuildTargetResult,
) -> None:
"""
Use the given build_target to run execution's test suites
@@ -143,12 +156,15 @@ def _run_suites(
)
except Exception as e:
dts_logger.exception("An error occurred when searching for test suites.")
- errors.append(e)
+ result.update_setup(Result.ERROR, e)
else:
for test_suite_class in test_suite_classes:
test_suite = test_suite_class(
- sut_node, test_suite_config.test_cases, execution.func, errors
+ sut_node,
+ test_suite_config.test_cases,
+ execution.func,
+ build_target_result,
)
test_suite.run()
@@ -157,20 +173,8 @@ def _exit_dts() -> None:
"""
Process all errors and exit with the proper exit code.
"""
- if errors and dts_logger:
- dts_logger.debug("Summary of errors:")
- for error in errors:
- dts_logger.debug(repr(error))
-
- return_code = ErrorSeverity.NO_ERR
- for error in errors:
- error_return_code = ErrorSeverity.GENERIC_ERR
- if isinstance(error, DTSError):
- error_return_code = error.severity
-
- if error_return_code > return_code:
- return_code = error_return_code
+ result.process()
if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(return_code)
+ sys.exit(result.get_return_code())
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
new file mode 100644
index 0000000000..743919820c
--- /dev/null
+++ b/dts/framework/test_result.py
@@ -0,0 +1,316 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Generic result container and reporters
+"""
+
+import os.path
+from collections.abc import MutableSequence
+from enum import Enum, auto
+
+from .config import (
+ OS,
+ Architecture,
+ BuildTargetConfiguration,
+ Compiler,
+ CPUType,
+ NodeConfiguration,
+)
+from .exception import DTSError, ErrorSeverity
+from .logger import DTSLOG
+from .settings import SETTINGS
+
+
+class Result(Enum):
+ """
+ An Enum defining the possible states that
+ a setup, a teardown or a test case may end up in.
+ """
+
+ PASS = auto()
+ FAIL = auto()
+ ERROR = auto()
+ SKIP = auto()
+
+ def __bool__(self) -> bool:
+ return self is self.PASS
+
+
+class FixtureResult(object):
+ """
+ A record that stores the result of a setup or a teardown.
+ The default is FAIL because immediately after creating the object
+ the setup of the corresponding stage will be executed, which also guarantees
+ the execution of teardown.
+ """
+
+ result: Result
+ error: Exception | None = None
+
+ def __init__(
+ self,
+ result: Result = Result.FAIL,
+ error: Exception | None = None,
+ ):
+ self.result = result
+ self.error = error
+
+ def __bool__(self) -> bool:
+ return bool(self.result)
+
+
+class Statistics(dict):
+ """
+ A helper class used to store the number of test cases by result,
+ along with a few other basic pieces of information.
+ Using a dict provides a convenient way to format the data.
+ """
+
+ def __init__(self, dpdk_version):
+ super(Statistics, self).__init__()
+ for result in Result:
+ self[result.name] = 0
+ self["PASS RATE"] = 0.0
+ self["DPDK VERSION"] = dpdk_version
+
+ def __iadd__(self, other: Result) -> "Statistics":
+ """
+ Add a Result to the final count.
+ """
+ self[other.name] += 1
+ self["PASS RATE"] = (
+ float(self[Result.PASS.name])
+ * 100
+ / sum(self[result.name] for result in Result)
+ )
+ return self
+
+ def __str__(self) -> str:
+ """
+ Provide a string representation of the data.
+ """
+ stats_str = ""
+ for key, value in self.items():
+ stats_str += f"{key:<12} = {value}\n"
+ # according to docs, we should use \n when writing to text files
+ # on all platforms
+ return stats_str
+
+
+class BaseResult(object):
+ """
+ The Base class for all results. Stores the results of
+ the setup and teardown portions of the corresponding stage
+ and a list of results from each inner stage in _inner_results.
+ """
+
+ setup_result: FixtureResult
+ teardown_result: FixtureResult
+ _inner_results: MutableSequence["BaseResult"]
+
+ def __init__(self):
+ self.setup_result = FixtureResult()
+ self.teardown_result = FixtureResult()
+ self._inner_results = []
+
+ def update_setup(self, result: Result, error: Exception | None = None) -> None:
+ self.setup_result.result = result
+ self.setup_result.error = error
+
+ def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+ self.teardown_result.result = result
+ self.teardown_result.error = error
+
+ def _get_setup_teardown_errors(self) -> list[Exception]:
+ errors = []
+ if self.setup_result.error:
+ errors.append(self.setup_result.error)
+ if self.teardown_result.error:
+ errors.append(self.teardown_result.error)
+ return errors
+
+ def _get_inner_errors(self) -> list[Exception]:
+ return [
+ error
+ for inner_result in self._inner_results
+ for error in inner_result.get_errors()
+ ]
+
+ def get_errors(self) -> list[Exception]:
+ return self._get_setup_teardown_errors() + self._get_inner_errors()
+
+ def add_stats(self, statistics: Statistics) -> None:
+ for inner_result in self._inner_results:
+ inner_result.add_stats(statistics)
+
+
+class TestCaseResult(BaseResult, FixtureResult):
+ """
+ The test case specific result.
+ Stores the result of the actual test case.
+ Also stores the test case name.
+ """
+
+ test_case_name: str
+
+ def __init__(self, test_case_name: str):
+ super(TestCaseResult, self).__init__()
+ self.test_case_name = test_case_name
+
+ def update(self, result: Result, error: Exception | None = None) -> None:
+ self.result = result
+ self.error = error
+
+ def _get_inner_errors(self) -> list[Exception]:
+ if self.error:
+ return [self.error]
+ return []
+
+ def add_stats(self, statistics: Statistics) -> None:
+ statistics += self.result
+
+ def __bool__(self) -> bool:
+ return (
+ bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
+ )
+
+
+class TestSuiteResult(BaseResult):
+ """
+ The test suite specific result.
+ The _inner_results list stores results of test cases in a given test suite.
+ Also stores the test suite name.
+ """
+
+ suite_name: str
+
+ def __init__(self, suite_name: str):
+ super(TestSuiteResult, self).__init__()
+ self.suite_name = suite_name
+
+ def add_test_case(self, test_case_name: str) -> TestCaseResult:
+ test_case_result = TestCaseResult(test_case_name)
+ self._inner_results.append(test_case_result)
+ return test_case_result
+
+
+class BuildTargetResult(BaseResult):
+ """
+ The build target specific result.
+ The _inner_results list stores results of test suites in a given build target.
+ Also stores build target specifics, such as compiler used to build DPDK.
+ """
+
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+
+ def __init__(self, build_target: BuildTargetConfiguration):
+ super(BuildTargetResult, self).__init__()
+ self.arch = build_target.arch
+ self.os = build_target.os
+ self.cpu = build_target.cpu
+ self.compiler = build_target.compiler
+
+ def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+ test_suite_result = TestSuiteResult(test_suite_name)
+ self._inner_results.append(test_suite_result)
+ return test_suite_result
+
+
+class ExecutionResult(BaseResult):
+ """
+ The execution specific result.
+ The _inner_results list stores results of build targets in a given execution.
+ Also stores the SUT node configuration.
+ """
+
+ sut_node: NodeConfiguration
+
+ def __init__(self, sut_node: NodeConfiguration):
+ super(ExecutionResult, self).__init__()
+ self.sut_node = sut_node
+
+ def add_build_target(
+ self, build_target: BuildTargetConfiguration
+ ) -> BuildTargetResult:
+ build_target_result = BuildTargetResult(build_target)
+ self._inner_results.append(build_target_result)
+ return build_target_result
+
+
+class DTSResult(BaseResult):
+ """
+ Stores environment information and test results from a DTS run, which are:
+ * Execution level information, such as SUT and TG hardware.
+ * Build target level information, such as compiler, target OS and cpu.
+ * Test suite results.
+ * All errors that are caught and recorded during DTS execution.
+
+ The information is stored in nested objects.
+
+ The class is capable of computing the return code used to exit DTS with
+ from the stored errors.
+
+ It also provides a brief statistical summary of passed/failed test cases.
+ """
+
+ dpdk_version: str | None
+ _logger: DTSLOG
+ _errors: list[Exception]
+ _return_code: ErrorSeverity
+ _stats_result: Statistics | None
+ _stats_filename: str
+
+ def __init__(self, logger: DTSLOG):
+ super(DTSResult, self).__init__()
+ self.dpdk_version = None
+ self._logger = logger
+ self._errors = []
+ self._return_code = ErrorSeverity.NO_ERR
+ self._stats_result = None
+ self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
+
+ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+ execution_result = ExecutionResult(sut_node)
+ self._inner_results.append(execution_result)
+ return execution_result
+
+ def add_error(self, error) -> None:
+ self._errors.append(error)
+
+ def process(self) -> None:
+ """
+ Process the data after a DTS run.
+ The data is added to nested objects during runtime and this parent object
+ is not updated at that time. This requires us to process the nested data
+ after it's all been gathered.
+
+ The processing gathers all errors and the result statistics of test cases.
+ """
+ self._errors += self.get_errors()
+ if self._errors and self._logger:
+ self._logger.debug("Summary of errors:")
+ for error in self._errors:
+ self._logger.debug(repr(error))
+
+ self._stats_result = Statistics(self.dpdk_version)
+ self.add_stats(self._stats_result)
+ with open(self._stats_filename, "w+") as stats_file:
+ stats_file.write(str(self._stats_result))
+
+ def get_return_code(self) -> int:
+ """
+ Go through all stored Exceptions and return the highest error code found.
+ """
+ for error in self._errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > self._return_code:
+ self._return_code = error_return_code
+
+ return int(self._return_code)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 0cbedee478..6cd142ef2f 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -9,12 +9,12 @@
import importlib
import inspect
import re
-from collections.abc import MutableSequence
from types import MethodType
from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
+from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
from .testbed_model import SutNode
@@ -40,21 +40,21 @@ class TestSuite(object):
_logger: DTSLOG
_test_cases_to_run: list[str]
_func: bool
- _errors: MutableSequence[Exception]
+ _result: TestSuiteResult
def __init__(
self,
sut_node: SutNode,
test_cases: list[str],
func: bool,
- errors: MutableSequence[Exception],
+ build_target_result: BuildTargetResult,
):
self.sut_node = sut_node
self._logger = getLogger(self.__class__.__name__)
self._test_cases_to_run = test_cases
self._test_cases_to_run.extend(SETTINGS.test_cases)
self._func = func
- self._errors = errors
+ self._result = build_target_result.add_test_suite(self.__class__.__name__)
def set_up_suite(self) -> None:
"""
@@ -97,10 +97,11 @@ def run(self) -> None:
try:
self._logger.info(f"Starting test suite setup: {test_suite_name}")
self.set_up_suite()
+ self._result.update_setup(Result.PASS)
self._logger.info(f"Test suite setup successful: {test_suite_name}")
except Exception as e:
self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- self._errors.append(e)
+ self._result.update_setup(Result.ERROR, e)
else:
self._execute_test_suite()
@@ -109,13 +110,14 @@ def run(self) -> None:
try:
self.tear_down_suite()
self.sut_node.kill_cleanup_dpdk_apps()
+ self._result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
self._logger.warning(
f"Test suite '{test_suite_name}' teardown failed, "
f"the next test suite may be affected."
)
- self._errors.append(e)
+ self._result.update_teardown(Result.ERROR, e)
def _execute_test_suite(self) -> None:
"""
@@ -123,17 +125,18 @@ def _execute_test_suite(self) -> None:
"""
if self._func:
for test_case_method in self._get_functional_test_cases():
+ test_case_name = test_case_method.__name__
+ test_case_result = self._result.add_test_case(test_case_name)
all_attempts = SETTINGS.re_run + 1
attempt_nr = 1
- while (
- not self._run_test_case(test_case_method)
- and attempt_nr < all_attempts
- ):
+ self._run_test_case(test_case_method, test_case_result)
+ while not test_case_result and attempt_nr < all_attempts:
attempt_nr += 1
self._logger.info(
- f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Re-running FAILED test case '{test_case_name}'. "
f"Attempt number {attempt_nr} out of {all_attempts}."
)
+ self._run_test_case(test_case_method, test_case_result)
def _get_functional_test_cases(self) -> list[MethodType]:
"""
@@ -166,68 +169,69 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
return match
- def _run_test_case(self, test_case_method: MethodType) -> bool:
+ def _run_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Setup, execute and teardown a test case in this suite.
- Exceptions are caught and recorded in logs.
+ Exceptions are caught and recorded in logs and results.
"""
test_case_name = test_case_method.__name__
- result = False
try:
# run set_up function for each case
self.set_up_test_case()
+ test_case_result.update_setup(Result.PASS)
except SSHTimeoutError as e:
self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.ERROR, e)
else:
# run test case if setup was successful
- result = self._execute_test_case(test_case_method)
+ self._execute_test_case(test_case_method, test_case_result)
finally:
try:
self.tear_down_test_case()
+ test_case_result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
self._logger.warning(
f"Test case '{test_case_name}' teardown failed, "
f"the next test case may be affected."
)
- self._errors.append(e)
- result = False
+ test_case_result.update_teardown(Result.ERROR, e)
+ test_case_result.update(Result.ERROR)
- return result
-
- def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ def _execute_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Execute one test case and handle failures.
"""
test_case_name = test_case_method.__name__
- result = False
try:
self._logger.info(f"Starting test case execution: {test_case_name}")
test_case_method()
- result = True
+ test_case_result.update(Result.PASS)
self._logger.info(f"Test case execution PASSED: {test_case_name}")
except TestCaseVerifyError as e:
self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.ERROR, e)
except KeyboardInterrupt:
self._logger.error(
f"Test case execution INTERRUPTED by user: {test_case_name}"
)
+ test_case_result.update(Result.SKIP)
raise KeyboardInterrupt("Stop DTS")
- return result
-
def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
def is_test_suite(object) -> bool:
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v3 00/10] dts: add hello world testcase
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (9 preceding siblings ...)
2023-01-17 15:49 ` [PATCH v3 10/10] dts: add test results module Juraj Linkeš
@ 2023-01-19 16:16 ` Owen Hilyard
2023-02-09 16:47 ` Patrick Robb
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
11 siblings, 1 reply; 97+ messages in thread
From: Owen Hilyard @ 2023-01-19 16:16 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
Everything looks good to me with the exception of some issues I ran into
with terminal codes. Setting TERM=dumb before running fixed it, but we
might want to set that inside of DTS since I can't think of a reason why we
would need colors or any of the other "fancy" features of the vt220, and
setting everything to teletype mode should make our lives easier when
parsing output.
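For example, something like this right after the remote shell is
established (just a sketch; session stands in for whichever remote
session object ends up owning the shell):

    # neutralize terminal escape sequences before sending any
    # commands whose output will be parsed
    session.send_command("export TERM=dumb", timeout=5)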
I think that later on it might make sense to have "CPU" be a device class
like NIC or Cryptodev, but that can be revisited once we get closer to
interacting with hardware.
On Tue, Jan 17, 2023 at 10:49 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Add code needed to run the HelloWorld testcase which just runs the hello
> world dpdk application.
>
> The patchset currently heavily refactors this original DTS code needed
> to run the testcase:
> * The whole architecture has been redone into more sensible class
> hierarchy
> * DPDK build on the System under Test
> * DPDK eal args construction, app running and shutting down
> * SUT hugepage memory configuration
> * Test runner
> * Test results
> * TestSuite class
> * Test runner parts interfacing with TestSuite
> * The HelloWorld testsuite itself
>
> The code is divided into sub-packages, some of which are divided
> further.
>
> The patch may need to be divided into smaller chunks. If so, proposals
> on where exactly to split it would be very helpful.
>
> v3:
> Finished refactoring everything in this patch, with test suite and test
> results being the last parts.
> Also changed the directory structure. It's now simplified and the
> imports look much better.
> I've also made many minor changes such as renaming variables here and
> there.
>
> Juraj Linkeš (10):
> dts: add node and os abstractions
> dts: add ssh command verification
> dts: add dpdk build on sut
> dts: add dpdk execution handling
> dts: add node memory setup
> dts: add test suite module
> dts: add hello world testplan
> dts: add hello world testsuite
> dts: add test suite config and runner
> dts: add test results module
>
> dts/conf.yaml | 19 +-
> dts/framework/config/__init__.py | 132 +++++++-
> dts/framework/config/arch.py | 57 ++++
> dts/framework/config/conf_yaml_schema.json | 150 ++++++++-
> dts/framework/dts.py | 185 ++++++++--
> dts/framework/exception.py | 100 +++++-
> dts/framework/logger.py | 24 +-
> dts/framework/remote_session/__init__.py | 30 +-
> dts/framework/remote_session/linux_session.py | 114 +++++++
> dts/framework/remote_session/os_session.py | 177 ++++++++++
> dts/framework/remote_session/posix_session.py | 221 ++++++++++++
> .../remote_session/remote/__init__.py | 16 +
> .../remote_session/remote/remote_session.py | 155 +++++++++
> .../{ => remote}/ssh_session.py | 91 ++++-
> .../remote_session/remote_session.py | 95 ------
> dts/framework/settings.py | 79 ++++-
> dts/framework/test_result.py | 316 ++++++++++++++++++
> dts/framework/test_suite.py | 254 ++++++++++++++
> dts/framework/testbed_model/__init__.py | 20 +-
> dts/framework/testbed_model/dpdk.py | 78 +++++
> dts/framework/testbed_model/hw/__init__.py | 27 ++
> dts/framework/testbed_model/hw/cpu.py | 253 ++++++++++++++
> .../testbed_model/hw/virtual_device.py | 16 +
> dts/framework/testbed_model/node.py | 165 +++++++--
> dts/framework/testbed_model/sut_node.py | 261 +++++++++++++++
> dts/framework/utils.py | 39 ++-
> dts/test_plans/hello_world_test_plan.rst | 68 ++++
> dts/tests/TestSuite_hello_world.py | 59 ++++
> 28 files changed, 2998 insertions(+), 203 deletions(-)
> create mode 100644 dts/framework/config/arch.py
> create mode 100644 dts/framework/remote_session/linux_session.py
> create mode 100644 dts/framework/remote_session/os_session.py
> create mode 100644 dts/framework/remote_session/posix_session.py
> create mode 100644 dts/framework/remote_session/remote/__init__.py
> create mode 100644 dts/framework/remote_session/remote/remote_session.py
> rename dts/framework/remote_session/{ => remote}/ssh_session.py (65%)
> delete mode 100644 dts/framework/remote_session/remote_session.py
> create mode 100644 dts/framework/test_result.py
> create mode 100644 dts/framework/test_suite.py
> create mode 100644 dts/framework/testbed_model/dpdk.py
> create mode 100644 dts/framework/testbed_model/hw/__init__.py
> create mode 100644 dts/framework/testbed_model/hw/cpu.py
> create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
> create mode 100644 dts/framework/testbed_model/sut_node.py
> create mode 100644 dts/test_plans/hello_world_test_plan.rst
> create mode 100644 dts/tests/TestSuite_hello_world.py
>
> --
> 2.30.2
>
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v3 00/10] dts: add hello world testcase
2023-01-19 16:16 ` [PATCH v3 00/10] dts: add hello world testcase Owen Hilyard
@ 2023-02-09 16:47 ` Patrick Robb
0 siblings, 0 replies; 97+ messages in thread
From: Patrick Robb @ 2023-02-09 16:47 UTC (permalink / raw)
To: Owen Hilyard
Cc: Juraj Linkeš,
thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
On Thu, Jan 19, 2023 at 11:16 AM Owen Hilyard <ohilyard@iol.unh.edu> wrote:
> Everything looks good to me with the exception of some issues I ran into
> with terminal codes. Setting TERM=dumb before running fixed it, but we
> might want to set that inside of DTS since I can't think of a reason why we
> would need colors or any of the other "fancy" features of the vt220, and
> setting everything to teletype mode should make our lives easier when
> parsing output.
>
> I think that later on it might make sense to have "CPU" be a device class
> like NIC or Cryptodev, but that can be revisited once we get closer to
> interacting with hardware.
>
> On Tue, Jan 17, 2023 at 10:49 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
> wrote:
>
>> Add code needed to run the HelloWorld testcase which just runs the hello
>> world dpdk application.
>>
>> The patchset currently heavily refactors this original DTS code needed
>> to run the testcase:
>> * The whole architecture has been redone into more sensible class
>> hierarchy
>> * DPDK build on the System under Test
>> * DPDK eal args construction, app running and shutting down
>> * SUT hugepage memory configuration
>> * Test runner
>> * Test results
>> * TestSuite class
>> * Test runner parts interfacing with TestSuite
>> * The HelloWorld testsuite itself
>>
>> The code is divided into sub-packages, some of which are divided
>> further.
>>
>> The patch may need to be divided into smaller chunks. If so, proposals
>> on where exactly to split it would be very helpful.
>>
>> v3:
>> Finished refactoring everything in this patch, with test suite and test
>> results being the last parts.
>> Also changed the directory structure. It's now simplified and the
>> imports look much better.
>> I've also made many minor changes such as renaming variables here and
>> there.
>>
>> Juraj Linkeš (10):
>> dts: add node and os abstractions
>> dts: add ssh command verification
>> dts: add dpdk build on sut
>> dts: add dpdk execution handling
>> dts: add node memory setup
>> dts: add test suite module
>> dts: add hello world testplan
>> dts: add hello world testsuite
>> dts: add test suite config and runner
>> dts: add test results module
>>
>> dts/conf.yaml | 19 +-
>> dts/framework/config/__init__.py | 132 +++++++-
>> dts/framework/config/arch.py | 57 ++++
>> dts/framework/config/conf_yaml_schema.json | 150 ++++++++-
>> dts/framework/dts.py | 185 ++++++++--
>> dts/framework/exception.py | 100 +++++-
>> dts/framework/logger.py | 24 +-
>> dts/framework/remote_session/__init__.py | 30 +-
>> dts/framework/remote_session/linux_session.py | 114 +++++++
>> dts/framework/remote_session/os_session.py | 177 ++++++++++
>> dts/framework/remote_session/posix_session.py | 221 ++++++++++++
>> .../remote_session/remote/__init__.py | 16 +
>> .../remote_session/remote/remote_session.py | 155 +++++++++
>> .../{ => remote}/ssh_session.py | 91 ++++-
>> .../remote_session/remote_session.py | 95 ------
>> dts/framework/settings.py | 79 ++++-
>> dts/framework/test_result.py | 316 ++++++++++++++++++
>> dts/framework/test_suite.py | 254 ++++++++++++++
>> dts/framework/testbed_model/__init__.py | 20 +-
>> dts/framework/testbed_model/dpdk.py | 78 +++++
>> dts/framework/testbed_model/hw/__init__.py | 27 ++
>> dts/framework/testbed_model/hw/cpu.py | 253 ++++++++++++++
>> .../testbed_model/hw/virtual_device.py | 16 +
>> dts/framework/testbed_model/node.py | 165 +++++++--
>> dts/framework/testbed_model/sut_node.py | 261 +++++++++++++++
>> dts/framework/utils.py | 39 ++-
>> dts/test_plans/hello_world_test_plan.rst | 68 ++++
>> dts/tests/TestSuite_hello_world.py | 59 ++++
>> 28 files changed, 2998 insertions(+), 203 deletions(-)
>> create mode 100644 dts/framework/config/arch.py
>> create mode 100644 dts/framework/remote_session/linux_session.py
>> create mode 100644 dts/framework/remote_session/os_session.py
>> create mode 100644 dts/framework/remote_session/posix_session.py
>> create mode 100644 dts/framework/remote_session/remote/__init__.py
>> create mode 100644 dts/framework/remote_session/remote/remote_session.py
>> rename dts/framework/remote_session/{ => remote}/ssh_session.py (65%)
>> delete mode 100644 dts/framework/remote_session/remote_session.py
>> create mode 100644 dts/framework/test_result.py
>> create mode 100644 dts/framework/test_suite.py
>> create mode 100644 dts/framework/testbed_model/dpdk.py
>> create mode 100644 dts/framework/testbed_model/hw/__init__.py
>> create mode 100644 dts/framework/testbed_model/hw/cpu.py
>> create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
>> create mode 100644 dts/framework/testbed_model/sut_node.py
>> create mode 100644 dts/test_plans/hello_world_test_plan.rst
>> create mode 100644 dts/tests/TestSuite_hello_world.py
>>
>> --
>> 2.30.2
>>
>>
Tested-by: Patrick Robb <probb@iol.unh.edu>
--
Patrick Robb
Technical Service Manager
UNH InterOperability Laboratory
21 Madbury Rd, Suite 100, Durham, NH 03824
www.iol.unh.edu
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 00/10] dts: add hello world testcase
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
` (10 preceding siblings ...)
2023-01-19 16:16 ` [PATCH v3 00/10] dts: add hello world testcase Owen Hilyard
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 01/10] dts: add node and os abstractions Juraj Linkeš
` (11 more replies)
11 siblings, 12 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
Add code needed to run the HelloWorld testcase which just runs the hello
world dpdk application.
The patchset currently heavily refactors this original DTS code needed
to run the testcase:
* The whole architecture has been redone into more sensible class
hierarchy
* DPDK build on the System under Test
* DPDK eal args construction, app running and shutting down
* Optional SUT hugepage memory configuration
* Test runner
* Test results
* TestSuite class
* Test runner parts interfacing with TestSuite
* The HelloWorld testsuite itself
The code is divided into sub-packages, some of which are divided
further.
The patch may need to be divided into smaller chunks. If so, proposals
on where exactly to split it would be very helpful.
v3:
Finished refactoring everything in this patch, with test suite and test
results being the last parts.
Also changed the directory structure. It's now simplified and the
imports look much better.
I've also made many minor changes such as renaming variables here and
there.
v4:
Made hugepage config optional; users may now specify it in the main
config file (a parsing sketch follows below).
Removed HelloWorld test plan and incorporated parts of it into the test
suite python file.
Updated documentation.
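For illustration only (the exact keys aren't shown in this part of the
series, so "amount" and "force_first_numa" are assumptions), the optional
hugepage section could be parsed along these lines:

    from dataclasses import dataclass

    @dataclass(slots=True, frozen=True)
    class HugepageConfiguration:
        # Assumed fields; the real config may differ.
        amount: int
        force_first_numa: bool

        @staticmethod
        def from_dict(d: dict) -> "HugepageConfiguration":
            return HugepageConfiguration(
                amount=d["amount"],
                force_first_numa=d.get("force_first_numa", False),
            )

    def parse_hugepages(node_dict: dict) -> HugepageConfiguration | None:
        # A missing "hugepages" key means DTS won't touch hugepages on the SUT.
        hugepage_dict = node_dict.get("hugepages")
        return HugepageConfiguration.from_dict(hugepage_dict) if hugepage_dict else None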
Juraj Linkeš (10):
dts: add node and os abstractions
dts: add ssh command verification
dts: add dpdk build on sut
dts: add dpdk execution handling
dts: add node memory setup
dts: add test suite module
dts: add hello world testsuite
dts: add test suite config and runner
dts: add test results module
doc: update DTS setup and test suite cookbook
doc/guides/tools/dts.rst | 145 +++++++-
dts/conf.yaml | 22 +-
dts/framework/config/__init__.py | 130 ++++++-
dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
dts/framework/dts.py | 185 ++++++++--
dts/framework/exception.py | 100 +++++-
dts/framework/logger.py | 24 +-
dts/framework/remote_session/__init__.py | 30 +-
dts/framework/remote_session/linux_session.py | 107 ++++++
dts/framework/remote_session/os_session.py | 175 ++++++++++
dts/framework/remote_session/posix_session.py | 221 ++++++++++++
.../remote_session/remote/__init__.py | 16 +
.../remote_session/remote/remote_session.py | 155 +++++++++
.../{ => remote}/ssh_session.py | 91 ++++-
.../remote_session/remote_session.py | 95 ------
dts/framework/settings.py | 81 ++++-
dts/framework/test_result.py | 316 ++++++++++++++++++
dts/framework/test_suite.py | 254 ++++++++++++++
dts/framework/testbed_model/__init__.py | 20 +-
dts/framework/testbed_model/dpdk.py | 78 +++++
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 253 ++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 +
dts/framework/testbed_model/node.py | 162 +++++++--
dts/framework/testbed_model/sut_node.py | 261 +++++++++++++++
dts/framework/utils.py | 39 ++-
dts/tests/TestSuite_hello_world.py | 64 ++++
27 files changed, 3030 insertions(+), 209 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
create mode 100644 dts/framework/remote_session/remote/remote_session.py
rename dts/framework/remote_session/{ => remote}/ssh_session.py (65%)
delete mode 100644 dts/framework/remote_session/remote_session.py
create mode 100644 dts/framework/test_result.py
create mode 100644 dts/framework/test_suite.py
create mode 100644 dts/framework/testbed_model/dpdk.py
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
create mode 100644 dts/framework/testbed_model/sut_node.py
create mode 100644 dts/tests/TestSuite_hello_world.py
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 01/10] dts: add node and os abstractions
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-17 17:44 ` Bruce Richardson
2023-02-13 15:28 ` [PATCH v4 02/10] dts: add ssh command verification Juraj Linkeš
` (10 subsequent siblings)
11 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
The abstraction model in DTS is as follows:
Node, defining and implementing methods common to, and serving as the base
of, SUT (system under test) Node and TG (traffic generator) Node.
Remote Session, defining and implementing methods common to any remote
session implementation, such as SSH Session.
OSSession, defining and implementing methods common to any operating
system/distribution, such as Linux.
OSSession uses a derived Remote Session, and Node in turn uses a derived
OSSession. This split delegates OS-specific and connection-specific code
to specialized classes designed to handle the differences.
The base classes implement the methods or parts of methods that are
common to all implementations and define abstract methods that must be
implemented by derived classes.
Part of the abstractions is the DTS test execution skeleton:
execution setup, build setup and then test execution.
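Roughly, the ownership chain added in this patch looks like this (a sketch
assuming the framework package is importable and conf.yaml is filled in):

    from framework.config import CONFIGURATION
    from framework.testbed_model import SutNode

    # SutNode.__init__   -> create_session(...)         -> LinuxSession (an OSSession)
    # OSSession.__init__ -> create_remote_session(...)  -> SSHSession (a RemoteSession)
    sut_node = SutNode(CONFIGURATION.executions[0].system_under_test)
    sut_node.main_session.remote_session.send_command("uname -a")
    sut_node.close()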
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 11 +-
dts/framework/config/__init__.py | 73 +++++++-
dts/framework/config/conf_yaml_schema.json | 76 +++++++-
dts/framework/dts.py | 162 ++++++++++++++----
dts/framework/exception.py | 46 ++++-
dts/framework/logger.py | 24 +--
dts/framework/remote_session/__init__.py | 30 +++-
dts/framework/remote_session/linux_session.py | 11 ++
dts/framework/remote_session/os_session.py | 46 +++++
dts/framework/remote_session/posix_session.py | 12 ++
.../remote_session/remote/__init__.py | 16 ++
.../{ => remote}/remote_session.py | 41 +++--
.../{ => remote}/ssh_session.py | 20 +--
dts/framework/settings.py | 15 +-
dts/framework/testbed_model/__init__.py | 10 +-
dts/framework/testbed_model/node.py | 104 ++++++++---
dts/framework/testbed_model/sut_node.py | 13 ++
17 files changed, 591 insertions(+), 119 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
create mode 100644 dts/framework/testbed_model/sut_node.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1aaa593612..03696d2bab 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -1,9 +1,16 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2022 The DPDK contributors
+# Copyright 2022-2023 The DPDK contributors
executions:
- - system_under_test: "SUT 1"
+ - build_targets:
+ - arch: x86_64
+ os: linux
+ cpu: native
+ compiler: gcc
+ compiler_wrapper: ccache
+ system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ os: linux
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 214be8e7f4..e3e2d74eac 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -1,15 +1,17 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-Generic port and topology nodes configuration file load function
+Yaml config parsing methods
"""
import json
import os.path
import pathlib
from dataclasses import dataclass
+from enum import Enum, auto, unique
from typing import Any
import warlock # type: ignore
@@ -18,6 +20,47 @@
from framework.settings import SETTINGS
+class StrEnum(Enum):
+ @staticmethod
+ def _generate_next_value_(
+ name: str, start: int, count: int, last_values: object
+ ) -> str:
+ return name
+
+
+@unique
+class Architecture(StrEnum):
+ i686 = auto()
+ x86_64 = auto()
+ x86_32 = auto()
+ arm64 = auto()
+ ppc64le = auto()
+
+
+@unique
+class OS(StrEnum):
+ linux = auto()
+ freebsd = auto()
+ windows = auto()
+
+
+@unique
+class CPUType(StrEnum):
+ native = auto()
+ armv8a = auto()
+ dpaa2 = auto()
+ thunderx = auto()
+ xgene1 = auto()
+
+
+@unique
+class Compiler(StrEnum):
+ gcc = auto()
+ clang = auto()
+ icc = auto()
+ msvc = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -29,6 +72,7 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ os: OS
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -37,19 +81,44 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ os=OS(d["os"]),
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class BuildTargetConfiguration:
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+ name: str
+
+ @staticmethod
+ def from_dict(d: dict) -> "BuildTargetConfiguration":
+ return BuildTargetConfiguration(
+ arch=Architecture(d["arch"]),
+ os=OS(d["os"]),
+ cpu=CPUType(d["cpu"]),
+ compiler=Compiler(d["compiler"]),
+ name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
+ build_targets: list[BuildTargetConfiguration]
system_under_test: NodeConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ build_targets: list[BuildTargetConfiguration] = list(
+ map(BuildTargetConfiguration.from_dict, d["build_targets"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
+ build_targets=build_targets,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 6b8d6ccd05..9170307fbe 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -5,6 +5,68 @@
"node_name": {
"type": "string",
"description": "A unique identifier for a node"
+ },
+ "OS": {
+ "type": "string",
+ "enum": [
+ "linux"
+ ]
+ },
+ "cpu": {
+ "type": "string",
+ "description": "Native should be the default on x86",
+ "enum": [
+ "native",
+ "armv8a",
+ "dpaa2",
+ "thunderx",
+ "xgene1"
+ ]
+ },
+ "compiler": {
+ "type": "string",
+ "enum": [
+ "gcc",
+ "clang",
+ "icc",
+ "msvc"
+ ]
+ },
+ "build_target": {
+ "type": "object",
+ "description": "Targets supported by DTS",
+ "properties": {
+ "arch": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "x86_64",
+ "arm64",
+ "ppc64le",
+ "other"
+ ]
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "cpu": {
+ "$ref": "#/definitions/cpu"
+ },
+ "compiler": {
+ "$ref": "#/definitions/compiler"
+ },
+ "compiler_wrapper": {
+ "type": "string",
+ "description": "This will be added before compiler to the CC variable when building DPDK. Optional."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "arch",
+ "os",
+ "cpu",
+ "compiler"
+ ]
}
},
"type": "object",
@@ -29,13 +91,17 @@
"password": {
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
}
},
"additionalProperties": false,
"required": [
"name",
"hostname",
- "user"
+ "user",
+ "os"
]
},
"minimum": 1
@@ -45,12 +111,20 @@
"items": {
"type": "object",
"properties": {
+ "build_targets": {
+ "type": "array",
+ "items": {
+ "$ref": "#/definitions/build_target"
+ },
+ "minimum": 1
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
"required": [
+ "build_targets",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d23cfc4526..6ea7c6e736 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -1,67 +1,157 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2019 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
-import traceback
-from collections.abc import Iterable
-from framework.testbed_model.node import Node
-
-from .config import CONFIGURATION
+from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
+from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .testbed_model import SutNode
from .utils import check_dts_python_version
-dts_logger: DTSLOG | None = None
+dts_logger: DTSLOG = getLogger("dts_runner")
+errors = []
def run_all() -> None:
"""
- Main process of DTS, it will run all test suites in the config file.
+ The main process of DTS. Runs all build targets in all executions from the main
+ config file.
"""
-
global dts_logger
+ global errors
# check the python version of the server that runs dts
check_dts_python_version()
- dts_logger = getLogger("dts")
-
- nodes = {}
- # This try/finally block means "Run the try block, if there is an exception,
- # run the finally block before passing it upward. If there is not an exception,
- # run the finally block after the try block is finished." This helps avoid the
- # problem of python's interpreter exit context, which essentially prevents you
- # from making certain system calls. This makes cleaning up resources difficult,
- # since most of the resources in DTS are network-based, which is restricted.
+ nodes: dict[str, SutNode] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_config = execution.system_under_test
- if sut_config.name not in nodes:
- node = Node(sut_config)
- nodes[sut_config.name] = node
- node.send_command("echo Hello World")
+ sut_node = None
+ if execution.system_under_test.name in nodes:
+ # a Node with the same name already exists
+ sut_node = nodes[execution.system_under_test.name]
+ else:
+ # the SUT has not been initialized yet
+ try:
+ sut_node = SutNode(execution.system_under_test)
+ except Exception as e:
+ dts_logger.exception(
+ f"Connection to node {execution.system_under_test} failed."
+ )
+ errors.append(e)
+ else:
+ nodes[sut_node.name] = sut_node
+
+ if sut_node:
+ _run_execution(sut_node, execution)
+
+ except Exception as e:
+ dts_logger.exception("An unexpected error has occurred.")
+ errors.append(e)
+ raise
+
+ finally:
+ try:
+ for node in nodes.values():
+ node.close()
+ except Exception as e:
+ dts_logger.exception("Final cleanup of nodes failed.")
+ errors.append(e)
+ # we need to put the sys.exit call outside the finally clause to make sure
+ # that unexpected exceptions will propagate
+ # in that case, the error that should be reported is the uncaught exception as
+ # that is a severe error originating from the framework
+ # at that point, we'll only have partial results which could be impacted by the
+ # error causing the uncaught exception, making them uninterpretable
+ _exit_dts()
+
+
+def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+ """
+ Run the given execution. This involves running the execution setup as well as
+ running all build targets in the given execution.
+ """
+ dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+
+ try:
+ sut_node.set_up_execution(execution)
except Exception as e:
- # sys.exit() doesn't produce a stack trace, need to print it explicitly
- traceback.print_exc()
- raise e
+ dts_logger.exception("Execution setup failed.")
+ errors.append(e)
+
+ else:
+ for build_target in execution.build_targets:
+ _run_build_target(sut_node, build_target, execution)
finally:
- quit_execution(nodes.values())
+ try:
+ sut_node.tear_down_execution()
+ except Exception as e:
+ dts_logger.exception("Execution teardown failed.")
+ errors.append(e)
-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def _run_build_target(
+ sut_node: SutNode,
+ build_target: BuildTargetConfiguration,
+ execution: ExecutionConfiguration,
+) -> None:
"""
- Close session to SUT and TG before quit.
- Return exit status when failure occurred.
+ Run the given build target.
"""
- for sut_node in sut_nodes:
- # close all session
- sut_node.node_exit()
+ dts_logger.info(f"Running build target '{build_target.name}'.")
+
+ try:
+ sut_node.set_up_build_target(build_target)
+ except Exception as e:
+ dts_logger.exception("Build target setup failed.")
+ errors.append(e)
+
+ else:
+ _run_suites(sut_node, execution)
+
+ finally:
+ try:
+ sut_node.tear_down_build_target()
+ except Exception as e:
+ dts_logger.exception("Build target teardown failed.")
+ errors.append(e)
+
+
+def _run_suites(
+ sut_node: SutNode,
+ execution: ExecutionConfiguration,
+) -> None:
+ """
+ Use the given build_target to run execution's test suites
+ with possibly only a subset of test cases.
+ If no subset is specified, run all test cases.
+ """
+
+
+def _exit_dts() -> None:
+ """
+ Process all errors and exit with the proper exit code.
+ """
+ if errors and dts_logger:
+ dts_logger.debug("Summary of errors:")
+ for error in errors:
+ dts_logger.debug(repr(error))
+
+ return_code = ErrorSeverity.NO_ERR
+ for error in errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > return_code:
+ return_code = error_return_code
- if dts_logger is not None:
+ if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(0)
+ sys.exit(return_code)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8b2f08a8f0..121a0f7296 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -1,20 +1,46 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
User-defined exceptions used across the framework.
"""
+from enum import IntEnum, unique
+from typing import ClassVar
-class SSHTimeoutError(Exception):
+
+@unique
+class ErrorSeverity(IntEnum):
+ """
+ The severity of errors that occur during DTS execution.
+ All exceptions are caught and the most severe error is used as return code.
+ """
+
+ NO_ERR = 0
+ GENERIC_ERR = 1
+ CONFIG_ERR = 2
+ SSH_ERR = 3
+
+
+class DTSError(Exception):
+ """
+ The base exception from which all DTS exceptions are derived.
+ Stores error severity.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
+
+
+class SSHTimeoutError(DTSError):
"""
Command execution timeout.
"""
command: str
output: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, command: str, output: str):
self.command = command
@@ -27,12 +53,13 @@ def get_output(self) -> str:
return self.output
-class SSHConnectionError(Exception):
+class SSHConnectionError(DTSError):
"""
SSH connection error.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
@@ -41,16 +68,25 @@ def __str__(self) -> str:
return f"Error trying to connect with {self.host}"
-class SSHSessionDeadError(Exception):
+class SSHSessionDeadError(DTSError):
"""
SSH session is not alive.
It can no longer be used.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
def __str__(self) -> str:
return f"SSH session with {self.host} has died"
+
+
+class ConfigurationError(DTSError):
+ """
+ Raised when an invalid configuration is encountered.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index a31fcc8242..bb2991e994 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
DTS logger module with several log level. DTS framework and TestSuite logs
@@ -33,17 +33,17 @@ class DTSLOG(logging.LoggerAdapter):
DTS log class for framework and testsuite.
"""
- logger: logging.Logger
+ _logger: logging.Logger
node: str
sh: logging.StreamHandler
fh: logging.FileHandler
verbose_fh: logging.FileHandler
def __init__(self, logger: logging.Logger, node: str = "suite"):
- self.logger = logger
+ self._logger = logger
# 1 means log everything, this will be used by file handlers if their level
# is not set
- self.logger.setLevel(1)
+ self._logger.setLevel(1)
self.node = node
@@ -55,9 +55,13 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
if SETTINGS.verbose is True:
sh.setLevel(logging.DEBUG)
- self.logger.addHandler(sh)
+ self._logger.addHandler(sh)
self.sh = sh
+ # prepare the output folder
+ if not os.path.exists(SETTINGS.output_dir):
+ os.mkdir(SETTINGS.output_dir)
+
logging_path_prefix = os.path.join(SETTINGS.output_dir, node)
fh = logging.FileHandler(f"{logging_path_prefix}.log")
@@ -68,7 +72,7 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(fh)
+ self._logger.addHandler(fh)
self.fh = fh
# This outputs EVERYTHING, intended for post-mortem debugging
@@ -82,10 +86,10 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(verbose_fh)
+ self._logger.addHandler(verbose_fh)
self.verbose_fh = verbose_fh
- super(DTSLOG, self).__init__(self.logger, dict(node=self.node))
+ super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
def logger_exit(self) -> None:
"""
@@ -93,7 +97,7 @@ def logger_exit(self) -> None:
"""
for handler in (self.sh, self.fh, self.verbose_fh):
handler.flush()
- self.logger.removeHandler(handler)
+ self._logger.removeHandler(handler)
def getLogger(name: str, node: str = "suite") -> DTSLOG:
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index a227d8db22..747316c78a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -1,14 +1,30 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
-from framework.config import NodeConfiguration
+"""
+The package provides modules for managing remote connections to a remote host (node),
+differentiated by OS.
+The package provides a factory function, create_session, that returns the appropriate
+remote connection based on the passed configuration. The differences are in the
+underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""
+
+# pylama:ignore=W0611
+
+from framework.config import OS, NodeConfiguration
+from framework.exception import ConfigurationError
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
-from .ssh_session import SSHSession
+from .linux_session import LinuxSession
+from .os_session import OSSession
+from .remote import RemoteSession, SSHSession
-def create_remote_session(
+def create_session(
node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
- return SSHSession(node_config, name, logger)
+) -> OSSession:
+ match node_config.os:
+ case OS.linux:
+ return LinuxSession(node_config, name, logger)
+ case _:
+ raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
new file mode 100644
index 0000000000..9d14166077
--- /dev/null
+++ b/dts/framework/remote_session/linux_session.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .posix_session import PosixSession
+
+
+class LinuxSession(PosixSession):
+ """
+ The implementation of non-Posix compliant parts of Linux remote sessions.
+ """
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
new file mode 100644
index 0000000000..7a4cc5e669
--- /dev/null
+++ b/dts/framework/remote_session/os_session.py
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from abc import ABC
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote import RemoteSession, create_remote_session
+
+
+class OSSession(ABC):
+ """
+ The OS classes create a DTS node remote session and implement OS specific
+ behavior. There are a few control methods implemented by the base class; the rest need
+ to be implemented by derived classes.
+ """
+
+ _config: NodeConfiguration
+ name: str
+ _logger: DTSLOG
+ remote_session: RemoteSession
+
+ def __init__(
+ self,
+ node_config: NodeConfiguration,
+ name: str,
+ logger: DTSLOG,
+ ):
+ self._config = node_config
+ self.name = name
+ self._logger = logger
+ self.remote_session = create_remote_session(node_config, name, logger)
+
+ def close(self, force: bool = False) -> None:
+ """
+ Close the remote session.
+ """
+ self.remote_session.close(force)
+
+ def is_alive(self) -> bool:
+ """
+ Check whether the remote session is still responding.
+ """
+ return self.remote_session.is_alive()
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
new file mode 100644
index 0000000000..110b6a4804
--- /dev/null
+++ b/dts/framework/remote_session/posix_session.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .os_session import OSSession
+
+
+class PosixSession(OSSession):
+ """
+ An intermediary class implementing the Posix compliant parts of
+ Linux and other OS remote sessions.
+ """
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
new file mode 100644
index 0000000000..f3092f8bbe
--- /dev/null
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote_session import RemoteSession
+from .ssh_session import SSHSession
+
+
+def create_remote_session(
+ node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+ return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
similarity index 61%
rename from dts/framework/remote_session/remote_session.py
rename to dts/framework/remote_session/remote/remote_session.py
index 33047d9d0a..7c7b30225f 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import dataclasses
from abc import ABC, abstractmethod
@@ -19,14 +19,23 @@ class HistoryRecord:
class RemoteSession(ABC):
+ """
+ The base class for defining which methods must be implemented in order to connect
+ to a remote host (node) and maintain a remote session. The derived classes are
+ supposed to implement/use some underlying transport protocol (e.g. SSH) to
+ implement the methods. On top of that, it provides some basic services common to
+ all derived classes, such as keeping history and logging what's being executed
+ on the remote node.
+ """
+
name: str
hostname: str
ip: str
port: int | None
username: str
password: str
- logger: DTSLOG
history: list[HistoryRecord]
+ _logger: DTSLOG
_node_config: NodeConfiguration
def __init__(
@@ -46,31 +55,34 @@ def __init__(
self.port = int(port)
self.username = node_config.user
self.password = node_config.password or ""
- self.logger = logger
self.history = []
- self.logger.info(f"Connecting to {self.username}@{self.hostname}.")
+ self._logger = logger
+ self._logger.info(f"Connecting to {self.username}@{self.hostname}.")
self._connect()
- self.logger.info(f"Connection to {self.username}@{self.hostname} successful.")
+ self._logger.info(f"Connection to {self.username}@{self.hostname} successful.")
@abstractmethod
def _connect(self) -> None:
"""
Create connection to assigned node.
"""
- pass
def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
- self.logger.info(f"Sending: {command}")
+ """
+ Send a command and return the output.
+ """
+ self._logger.info(f"Sending: {command}")
out = self._send_command(command, timeout)
- self.logger.debug(f"Received from {command}: {out}")
+ self._logger.debug(f"Received from {command}: {out}")
self._history_add(command=command, output=out)
return out
@abstractmethod
def _send_command(self, command: str, timeout: float) -> str:
"""
- Send a command and return the output.
+ Use the underlying protocol to execute the command and return the output
+ of the command.
"""
def _history_add(self, command: str, output: str) -> None:
@@ -79,17 +91,20 @@ def _history_add(self, command: str, output: str) -> None:
)
def close(self, force: bool = False) -> None:
- self.logger.logger_exit()
+ """
+ Close the remote session and free all used resources.
+ """
+ self._logger.logger_exit()
self._close(force)
@abstractmethod
def _close(self, force: bool = False) -> None:
"""
- Close the remote session, freeing all used resources.
+ Execute protocol specific steps needed to close the session properly.
"""
@abstractmethod
def is_alive(self) -> bool:
"""
- Check whether the session is still responding.
+ Check whether the remote session is still responding.
"""
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/ssh_session.py
rename to dts/framework/remote_session/remote/ssh_session.py
index 7ec327054d..96175f5284 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import time
@@ -17,7 +17,7 @@
class SSHSession(RemoteSession):
"""
- Module for creating Pexpect SSH sessions to a node.
+ Module for creating Pexpect SSH remote sessions.
"""
session: pxssh.pxssh
@@ -56,9 +56,9 @@ def _connect(self) -> None:
)
break
except Exception as e:
- self.logger.warning(e)
+ self._logger.warning(e)
time.sleep(2)
- self.logger.info(
+ self._logger.info(
f"Retrying connection: retry number {retry_attempt + 1}."
)
else:
@@ -67,13 +67,13 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
except Exception as e:
- self.logger.error(RED(str(e)))
+ self._logger.error(RED(str(e)))
if getattr(self, "port", None):
suggestion = (
f"\nSuggestion: Check if the firewall on {self.hostname} is "
f"stopped.\n"
)
- self.logger.info(GREEN(suggestion))
+ self._logger.info(GREEN(suggestion))
raise SSHConnectionError(self.hostname)
@@ -87,8 +87,8 @@ def send_expect(
try:
retval = int(ret_status)
if retval:
- self.logger.error(f"Command: {command} failure!")
- self.logger.error(ret)
+ self._logger.error(f"Command: {command} failure!")
+ self._logger.error(ret)
return retval
else:
return ret
@@ -97,7 +97,7 @@ def send_expect(
else:
return ret
except Exception as e:
- self.logger.error(
+ self._logger.error(
f"Exception happened in [{command}] and output is "
f"[{self._get_output()}]"
)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 800f2c7b7f..6422b23499 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
import argparse
@@ -23,7 +23,7 @@ def __init__(
default: str = None,
type: Callable[[str], _T | argparse.FileType | None] = None,
choices: Iterable[_T] | None = None,
- required: bool = True,
+ required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None:
@@ -63,13 +63,17 @@ class _Settings:
def _get_parser() -> argparse.ArgumentParser:
- parser = argparse.ArgumentParser(description="DPDK test framework.")
+ parser = argparse.ArgumentParser(
+ description="Run DPDK test suites. All options may be specified with "
+ "the environment variables provided in brackets. "
+ "Command line arguments have higher priority.",
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+ )
parser.add_argument(
"--config-file",
action=_env_arg("DTS_CFG_FILE"),
default="conf.yaml",
- required=False,
help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
"and targets.",
)
@@ -79,7 +83,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--output",
action=_env_arg("DTS_OUTPUT_DIR"),
default="output",
- required=False,
help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
)
@@ -88,7 +91,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
- required=False,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
)
@@ -98,7 +100,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--verbose",
action=_env_arg("DTS_VERBOSE"),
default="N",
- required=False,
help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
"to the console.",
)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index c5512e5812..8ead9db482 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -1,7 +1,13 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-This module contains the classes used to model the physical traffic generator,
+This package contains the classes used to model the physical traffic generator,
system under test and any other components that need to be interacted with.
"""
+
+# pylama:ignore=W0611
+
+from .node import Node
+from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 8437975416..a37f7921e0 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -1,62 +1,118 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
A node is a generic host that DTS connects to and manages.
"""
-from framework.config import NodeConfiguration
+from framework.config import (
+ BuildTargetConfiguration,
+ ExecutionConfiguration,
+ NodeConfiguration,
+)
from framework.logger import DTSLOG, getLogger
-from framework.remote_session import RemoteSession, create_remote_session
-from framework.settings import SETTINGS
+from framework.remote_session import OSSession, create_session
class Node(object):
"""
- Basic module for node management. This module implements methods that
+ Basic class for node management. This class implements methods that
manage a node, such as information gathering (of CPU/PCI/NIC) and
environment setup.
"""
+ main_session: OSSession
+ config: NodeConfiguration
name: str
- main_session: RemoteSession
- logger: DTSLOG
- _config: NodeConfiguration
- _other_sessions: list[RemoteSession]
+ _logger: DTSLOG
+ _other_sessions: list[OSSession]
def __init__(self, node_config: NodeConfiguration):
- self._config = node_config
+ self.config = node_config
+ self.name = node_config.name
+ self._logger = getLogger(self.name)
+ self.main_session = create_session(self.config, self.name, self._logger)
+
self._other_sessions = []
- self.name = node_config.name
- self.logger = getLogger(self.name)
- self.logger.info(f"Created node: {self.name}")
- self.main_session = create_remote_session(self._config, self.name, self.logger)
+ self._logger.info(f"Created node: {self.name}")
- def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str:
+ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
- Send commands to node and return string before timeout.
+ Perform the execution setup that will be done for each execution
+ this node is part of.
"""
+ self._set_up_execution(execution_config)
- return self.main_session.send_command(cmds, timeout)
+ def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
- def create_session(self, name: str) -> RemoteSession:
- connection = create_remote_session(
- self._config,
+ def tear_down_execution(self) -> None:
+ """
+ Perform the execution teardown that will be done after each execution
+ this node is part of concludes.
+ """
+ self._tear_down_execution()
+
+ def _tear_down_execution(self) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Perform the build target setup that will be done for each build target
+ tested on this node.
+ """
+ self._set_up_build_target(build_target_config)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def tear_down_build_target(self) -> None:
+ """
+ Perform the build target teardown that will be done after each build target
+ tested on this node.
+ """
+ self._tear_down_build_target()
+
+ def _tear_down_build_target(self) -> None:
+ """
+ This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def create_session(self, name: str) -> OSSession:
+ """
+ Create and return a new OSSession tailored to the remote OS.
+ """
+ connection = create_session(
+ self.config,
name,
getLogger(name, node=self.name),
)
self._other_sessions.append(connection)
return connection
- def node_exit(self) -> None:
+ def close(self) -> None:
"""
- Recover all resource before node exit
+ Close all connections and free other resources.
"""
if self.main_session:
self.main_session.close()
for session in self._other_sessions:
session.close()
- self.logger.logger_exit()
+ self._logger.logger_exit()
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
new file mode 100644
index 0000000000..42acb6f9b2
--- /dev/null
+++ b/dts/framework/testbed_model/sut_node.py
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from .node import Node
+
+
+class SutNode(Node):
+ """
+ A class for managing connections to the System under Test, providing
+ methods that retrieve the necessary information about the node (such as
+ CPU, memory and NIC details) and configuration capabilities.
+ """
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 02/10] dts: add ssh command verification
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 01/10] dts: add node and os abstractions Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 03/10] dts: add dpdk build on sut Juraj Linkeš
` (9 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
This is a basic capability needed to check whether the command execution
was successful or not. If not, raise a RemoteCommandExecutionError. When
a failure is expected, the caller is supposed to catch the exception.
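Callers would then look something like this sketch (assuming session is a
connected RemoteSession from this series):

    from framework.exception import RemoteCommandExecutionError

    # Must succeed; a non-zero exit status raises and fails the test step.
    result = session.send_command("ls /tmp", verify=True)

    # Failure is the expected outcome, so catch the exception.
    try:
        session.send_command("ls /nonexistent", verify=True)
    except RemoteCommandExecutionError as e:
        print(f"expected failure, exit code {e.command_return_code}")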
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 23 +++++++-
.../remote_session/remote/remote_session.py | 55 +++++++++++++------
.../remote_session/remote/ssh_session.py | 11 +++-
3 files changed, 68 insertions(+), 21 deletions(-)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 121a0f7296..e776b42bd9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -21,7 +21,8 @@ class ErrorSeverity(IntEnum):
NO_ERR = 0
GENERIC_ERR = 1
CONFIG_ERR = 2
- SSH_ERR = 3
+ REMOTE_CMD_EXEC_ERR = 3
+ SSH_ERR = 4
class DTSError(Exception):
@@ -90,3 +91,23 @@ class ConfigurationError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
+
+
+class RemoteCommandExecutionError(DTSError):
+ """
+ Raised when a command executed on a Node returns a non-zero exit status.
+ """
+
+ command: str
+ command_return_code: int
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+ def __init__(self, command: str, command_return_code: int):
+ self.command = command
+ self.command_return_code = command_return_code
+
+ def __str__(self) -> str:
+ return (
+ f"Command {self.command} returned a non-zero exit code: "
+ f"{self.command_return_code}"
+ )
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 7c7b30225f..5ac395ec79 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -7,15 +7,29 @@
from abc import ABC, abstractmethod
from framework.config import NodeConfiguration
+from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
@dataclasses.dataclass(slots=True, frozen=True)
-class HistoryRecord:
+class CommandResult:
+ """
+ The result of remote execution of a command.
+ """
+
name: str
command: str
- output: str | int
+ stdout: str
+ stderr: str
+ return_code: int
+
+ def __str__(self) -> str:
+ return (
+ f"stdout: '{self.stdout}'\n"
+ f"stderr: '{self.stderr}'\n"
+ f"return_code: '{self.return_code}'"
+ )
class RemoteSession(ABC):
@@ -34,7 +48,7 @@ class RemoteSession(ABC):
port: int | None
username: str
password: str
- history: list[HistoryRecord]
+ history: list[CommandResult]
_logger: DTSLOG
_node_config: NodeConfiguration
@@ -68,28 +82,33 @@ def _connect(self) -> None:
Create connection to assigned node.
"""
- def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
+ def send_command(
+ self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ ) -> CommandResult:
"""
- Send a command and return the output.
+ Send a command to the connected node and return CommandResult.
+ If verify is True, check the return code of the executed command
+ and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: {command}")
- out = self._send_command(command, timeout)
- self._logger.debug(f"Received from {command}: {out}")
- self._history_add(command=command, output=out)
- return out
+ self._logger.info(f"Sending: '{command}'")
+ result = self._send_command(command, timeout)
+ if verify and result.return_code:
+ self._logger.debug(
+ f"Command '{command}' failed with return code '{result.return_code}'"
+ )
+ self._logger.debug(f"stdout: '{result.stdout}'")
+ self._logger.debug(f"stderr: '{result.stderr}'")
+ raise RemoteCommandExecutionError(command, result.return_code)
+ self._logger.debug(f"Received from '{command}':\n{result}")
+ self.history.append(result)
+ return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return the output
- of the command.
+ Use the underlying protocol to execute the command and return CommandResult.
"""
- def _history_add(self, command: str, output: str) -> None:
- self.history.append(
- HistoryRecord(name=self.name, command=command, output=output)
- )
-
def close(self, force: bool = False) -> None:
"""
Close the remote session and free all used resources.
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 96175f5284..6da5be9fff 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -12,7 +12,7 @@
from framework.logger import DTSLOG
from framework.utils import GREEN, RED
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
class SSHSession(RemoteSession):
@@ -163,7 +163,14 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
+ output = self._send_command_get_output(command, timeout)
+ return_code = int(self._send_command_get_output("echo $?", timeout))
+
+ # we're capturing only stdout
+ return CommandResult(self.name, command, output, "", return_code)
+
+ def _send_command_get_output(self, command: str, timeout: float) -> str:
try:
self._clean_session()
self._send_line(command)
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 03/10] dts: add dpdk build on sut
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 01/10] dts: add node and os abstractions Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 02/10] dts: add ssh command verification Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-22 16:44 ` Bruce Richardson
2023-02-13 15:28 ` [PATCH v4 04/10] dts: add dpdk execution handling Juraj Linkeš
` (8 subsequent siblings)
11 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
Add the ability to build DPDK and apps on the SUT, using a configured
target.
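A rough sketch of what a POSIX implementation of the build step could run
(assuming send_command() from this series; the meson/ninja invocation
details are illustrative, not the patch's exact commands):

    def build_dpdk_sketch(session, env_vars: dict, meson_args: str,
                          src_dir: str, build_dir: str, rebuild: bool = False) -> None:
        # Prefix the commands with the build environment variables.
        env = " ".join(f"{key}={value}" for key, value in env_vars.items())
        if rebuild:
            # Reuse the existing build directory.
            session.send_command(f"{env} meson configure {meson_args} {build_dir}", verify=True)
        else:
            session.send_command(f"{env} meson setup {meson_args} {src_dir} {build_dir}", verify=True)
        session.send_command(f"{env} ninja -C {build_dir}", verify=True)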
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/config/__init__.py | 2 +
dts/framework/exception.py | 17 ++
dts/framework/remote_session/os_session.py | 90 +++++++++-
dts/framework/remote_session/posix_session.py | 126 ++++++++++++++
.../remote_session/remote/remote_session.py | 38 ++++-
.../remote_session/remote/ssh_session.py | 68 +++++++-
dts/framework/settings.py | 44 ++++-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/dpdk.py | 33 ++++
dts/framework/testbed_model/sut_node.py | 158 ++++++++++++++++++
dts/framework/utils.py | 19 ++-
11 files changed, 580 insertions(+), 16 deletions(-)
create mode 100644 dts/framework/testbed_model/dpdk.py
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index e3e2d74eac..ca61cb10fe 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -91,6 +91,7 @@ class BuildTargetConfiguration:
os: OS
cpu: CPUType
compiler: Compiler
+ compiler_wrapper: str
name: str
@staticmethod
@@ -100,6 +101,7 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
os=OS(d["os"]),
cpu=CPUType(d["cpu"]),
compiler=Compiler(d["compiler"]),
+ compiler_wrapper=d.get("compiler_wrapper", ""),
name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index e776b42bd9..b4545a5a40 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -23,6 +23,7 @@ class ErrorSeverity(IntEnum):
CONFIG_ERR = 2
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
+ DPDK_BUILD_ERR = 10
class DTSError(Exception):
@@ -111,3 +112,19 @@ def __str__(self) -> str:
f"Command {self.command} returned a non-zero exit code: "
f"{self.command_return_code}"
)
+
+
+class RemoteDirectoryExistsError(DTSError):
+ """
+ Raised when a remote directory to be created already exists.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+
+class DPDKBuildError(DTSError):
+ """
+ Raised when DPDK build fails for any reason.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 7a4cc5e669..06d1ffefdd 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -2,10 +2,14 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
-from abc import ABC
+from abc import ABC, abstractmethod
+from pathlib import PurePath
-from framework.config import NodeConfiguration
+from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
+from framework.settings import SETTINGS
+from framework.testbed_model import MesonArgs
+from framework.utils import EnvVarsDict
from .remote import RemoteSession, create_remote_session
@@ -44,3 +48,85 @@ def is_alive(self) -> bool:
Check whether the remote session is still responding.
"""
return self.remote_session.is_alive()
+
+ @abstractmethod
+ def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+ """
+ Try to find DPDK remote dir in remote_dir.
+ """
+
+ @abstractmethod
+ def get_remote_tmp_dir(self) -> PurePath:
+ """
+ Get the path of the temporary directory of the remote OS.
+ """
+
+ @abstractmethod
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for the target architecture. Get
+ information from the node if needed.
+ """
+
+ @abstractmethod
+ def join_remote_path(self, *args: str | PurePath) -> PurePath:
+ """
+ Join path parts using the path separator that fits the remote OS.
+ """
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file
+ on the remote Node associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated remote Node to destination_file on local storage.
+ """
+
+ @abstractmethod
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ """
+ Remove remote directory, by default remove recursively and forcefully.
+ """
+
+ @abstractmethod
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ """
+ Extract remote tarball in place. If expected_dir is specified, check
+ whether the dir exists after extracting the archive.
+ """
+
+ @abstractmethod
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ """
+ Build DPDK in the input dir with specified environment variables and meson
+ arguments.
+ """
+
+ @abstractmethod
+ def get_dpdk_version(self, version_path: str | PurePath) -> str:
+ """
+ Inspect DPDK version on the remote node from version_path.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index 110b6a4804..d4da9f114e 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,14 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from pathlib import PurePath, PurePosixPath
+
+from framework.config import Architecture
+from framework.exception import DPDKBuildError, RemoteCommandExecutionError
+from framework.settings import SETTINGS
+from framework.testbed_model import MesonArgs
+from framework.utils import EnvVarsDict
+
from .os_session import OSSession
@@ -10,3 +18,121 @@ class PosixSession(OSSession):
An intermediary class implementing the Posix compliant parts of
Linux and other OS remote sessions.
"""
+
+ @staticmethod
+ def combine_short_options(**opts: bool) -> str:
+ ret_opts = ""
+ for opt, include in opts.items():
+ if include:
+ ret_opts = f"{ret_opts}{opt}"
+
+ if ret_opts:
+ ret_opts = f" -{ret_opts}"
+
+ return ret_opts
+
+ def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+ remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
+ result = self.remote_session.send_command(f"ls -d {remote_guess} | tail -1")
+ return PurePosixPath(result.stdout)
+
+ def get_remote_tmp_dir(self) -> PurePosixPath:
+ return PurePosixPath("/tmp")
+
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for i686 arch build. Get information
+ from the node if needed.
+ """
+ env_vars = {}
+ if arch == Architecture.i686:
+ # find the pkg-config path and store it in PKG_CONFIG_LIBDIR
+ out = self.remote_session.send_command("find /usr -type d -name pkgconfig")
+ pkg_path = ""
+ res_path = out.stdout.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert pkg_path != "", "i386 pkg-config path not found"
+
+ env_vars["CFLAGS"] = "-m32"
+ env_vars["PKG_CONFIG_LIBDIR"] = pkg_path
+
+ return env_vars
+
+ def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+ return PurePosixPath(*args)
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ self.remote_session.copy_file(source_file, destination_file, source_remote)
+
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ opts = PosixSession.combine_short_options(r=recursive, f=force)
+ self.remote_session.send_command(f"rm{opts} {remote_dir_path}")
+
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ self.remote_session.send_command(
+ f"tar xfm {remote_tarball_path} "
+ f"-C {PurePosixPath(remote_tarball_path).parent}",
+ 60,
+ )
+ if expected_dir:
+ self.remote_session.send_command(f"ls {expected_dir}", verify=True)
+
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ try:
+ if rebuild:
+ # reconfigure, then build
+ self._logger.info("Reconfiguring DPDK build.")
+ self.remote_session.send_command(
+ f"meson configure {meson_args} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+ else:
+ # fresh build - remove target dir first, then build from scratch
+ self._logger.info("Configuring DPDK build from scratch.")
+ self.remove_remote_dir(remote_dpdk_build_dir)
+ self.remote_session.send_command(
+ f"meson {meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+
+ self._logger.info("Building DPDK.")
+ self.remote_session.send_command(
+ f"ninja -C {remote_dpdk_build_dir}", timeout, verify=True, env=env_vars
+ )
+ except RemoteCommandExecutionError as e:
+ raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
+
+ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+ out = self.remote_session.send_command(
+ f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+ )
+ return out.stdout
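For reference, a short illustration (an editor's sketch, not part of the patch)
of the strings combine_short_options() composes; the leading space makes the
result directly appendable to a command name:

    opts = PosixSession.combine_short_options(r=True, f=True)
    # opts == " -rf"
    print(f"rm{opts} /tmp/some-dir")  # -> rm -rf /tmp/some-dir

    opts = PosixSession.combine_short_options(r=True, f=False)
    # opts == " -r"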
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 5ac395ec79..91dee3cb4f 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -5,11 +5,13 @@
import dataclasses
from abc import ABC, abstractmethod
+from pathlib import PurePath
from framework.config import NodeConfiguration
from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
@dataclasses.dataclass(slots=True, frozen=True)
@@ -83,15 +85,22 @@ def _connect(self) -> None:
"""
def send_command(
- self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ self,
+ command: str,
+ timeout: float = SETTINGS.timeout,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
) -> CommandResult:
"""
- Send a command to the connected node and return CommandResult.
+ Send a command to the connected node using optional env vars
+ and return CommandResult.
If verify is True, check the return code of the executed command
and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: '{command}'")
- result = self._send_command(command, timeout)
+ self._logger.info(
+ f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
+ )
+ result = self._send_command(command, timeout, env)
if verify and result.return_code:
self._logger.debug(
f"Command '{command}' failed with return code '{result.return_code}'"
@@ -104,9 +113,12 @@ def send_command(
return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> CommandResult:
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return CommandResult.
+ Use the underlying protocol to execute the command using optional env vars
+ and return CommandResult.
"""
def close(self, force: bool = False) -> None:
@@ -127,3 +139,17 @@ def is_alive(self) -> bool:
"""
Check whether the remote session is still responding.
"""
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file on the remote Node
+ associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated Node to destination_file on local filesystem.
+ """
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 6da5be9fff..d0863d8791 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -4,13 +4,15 @@
# Copyright(c) 2022-2023 University of New Hampshire
import time
+from pathlib import PurePath
+import pexpect # type: ignore
from pexpect import pxssh # type: ignore
from framework.config import NodeConfiguration
from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
from framework.logger import DTSLOG
-from framework.utils import GREEN, RED
+from framework.utils import GREEN, RED, EnvVarsDict
from .remote_session import CommandResult, RemoteSession
@@ -163,16 +165,22 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> CommandResult:
- output = self._send_command_get_output(command, timeout)
- return_code = int(self._send_command_get_output("echo $?", timeout))
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
+ output = self._send_command_get_output(command, timeout, env)
+ return_code = int(self._send_command_get_output("echo $?", timeout, None))
# we're capturing only stdout
return CommandResult(self.name, command, output, "", return_code)
- def _send_command_get_output(self, command: str, timeout: float) -> str:
+ def _send_command_get_output(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> str:
try:
self._clean_session()
+ if env:
+ command = f"{env} {command}"
self._send_line(command)
except Exception as e:
raise e
@@ -189,3 +197,53 @@ def _close(self, force: bool = False) -> None:
else:
if self.is_alive():
self.session.logout()
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy a file to or from the remote host over SCP.
+ """
+ if source_remote:
+ source_file = f"{self.username}@{self.ip}:{source_file}"
+ else:
+ destination_file = f"{self.username}@{self.ip}:{destination_file}"
+
+ port = ""
+ if self.port:
+ port = f" -P {self.port}"
+
+ # this is not OS agnostic, find a Pythonic (and thus OS agnostic) way
+ # TODO Fabric should handle this
+ command = (
+ f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes"
+ f" {source_file} {destination_file}"
+ )
+
+ self._spawn_scp(command)
+
+ def _spawn_scp(self, scp_cmd: str) -> None:
+ """
+ Transfer a file with SCP
+ """
+ self._logger.info(scp_cmd)
+ p: pexpect.spawn = pexpect.spawn(scp_cmd)
+ time.sleep(0.5)
+ ssh_newkey: str = "Are you sure you want to continue connecting"
+ i: int = p.expect(
+ [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+ )
+ if i == 0: # add once in trust list
+ p.sendline("yes")
+ i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+ if i == 1:
+ time.sleep(0.5)
+ p.sendline(self.password)
+ p.expect("Exit status 0", 60)
+ if i == 4:
+ self._logger.error("SCP TIMEOUT error %d" % i)
+ p.close()
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 6422b23499..f787187ade 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -7,8 +7,11 @@
import os
from collections.abc import Callable, Iterable, Sequence
from dataclasses import dataclass
+from pathlib import Path
from typing import Any, TypeVar
+from .exception import ConfigurationError
+
_T = TypeVar("_T")
@@ -60,6 +63,9 @@ class _Settings:
output_dir: str
timeout: float
verbose: bool
+ skip_setup: bool
+ dpdk_tarball_path: Path
+ compile_timeout: float
def _get_parser() -> argparse.ArgumentParser:
@@ -91,6 +97,7 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
+ type=float,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
)
@@ -104,16 +111,51 @@ def _get_parser() -> argparse.ArgumentParser:
"to the console.",
)
+ parser.add_argument(
+ "-s",
+ "--skip-setup",
+ action=_env_arg("DTS_SKIP_SETUP"),
+ default="N",
+ help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+ )
+
+ parser.add_argument(
+ "--tarball",
+ "--snapshot",
+ action=_env_arg("DTS_DPDK_TARBALL"),
+ default="dpdk.tar.xz",
+ type=Path,
+ help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball "
+ "which will be used in testing.",
+ )
+
+ parser.add_argument(
+ "--compile-timeout",
+ action=_env_arg("DTS_COMPILE_TIMEOUT"),
+ default=1200,
+ type=float,
+ help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
+ )
+
return parser
+def _check_tarball_path(parsed_args: argparse.Namespace) -> None:
+ if not os.path.exists(parsed_args.tarball):
+ raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.")
+
+
def _get_settings() -> _Settings:
parsed_args = _get_parser().parse_args()
+ _check_tarball_path(parsed_args)
return _Settings(
config_file_path=parsed_args.config_file,
output_dir=parsed_args.output_dir,
- timeout=float(parsed_args.timeout),
+ timeout=parsed_args.timeout,
verbose=(parsed_args.verbose == "Y"),
+ skip_setup=(parsed_args.skip_setup == "Y"),
+ dpdk_tarball_path=parsed_args.tarball,
+ compile_timeout=parsed_args.compile_timeout,
)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ead9db482..96e2ab7c3f 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,5 +9,6 @@
# pylama:ignore=W0611
+from .dpdk import MesonArgs
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/dpdk.py b/dts/framework/testbed_model/dpdk.py
new file mode 100644
index 0000000000..0526974f72
--- /dev/null
+++ b/dts/framework/testbed_model/dpdk.py
@@ -0,0 +1,33 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Various utilities used for configuring, building and running DPDK.
+"""
+
+
+class MesonArgs(object):
+ """
+ Aggregate the arguments needed to build DPDK:
+ default_library: Default library type, Meson allows "shared", "static" and "both".
+ Defaults to None, in which case the argument won't be used.
+ Keyword arguments: The arguments found in meson_option.txt in the root DPDK directory.
+ Do not use -D with them, for example: enable_kmods=True.
+ """
+
+ default_library: str
+
+ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+ self.default_library = (
+ f"--default-library={default_library}" if default_library else ""
+ )
+ self.dpdk_args = " ".join(
+ (
+ f"-D{dpdk_arg_name}={dpdk_arg_value}"
+ for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
+ )
+ )
+
+ def __str__(self) -> str:
+ return " ".join(f"{self.default_library} {self.dpdk_args}".split())
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 42acb6f9b2..442a41bdc8 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -2,6 +2,15 @@
# Copyright(c) 2010-2014 Intel Corporation
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+import os
+import tarfile
+from pathlib import PurePath
+
+from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, skip_setup
+
+from .dpdk import MesonArgs
from .node import Node
@@ -10,4 +19,153 @@ class SutNode(Node):
A class for managing connections to the System under Test, providing
methods that retrieve the necessary information about the node (such as
CPU, memory and NIC details) and configuration capabilities.
+ Another key capability is building DPDK according to given build target.
"""
+
+ _build_target_config: BuildTargetConfiguration | None
+ _env_vars: EnvVarsDict
+ _remote_tmp_dir: PurePath
+ __remote_dpdk_dir: PurePath | None
+ _dpdk_version: str | None
+ _app_compile_timeout: float
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(SutNode, self).__init__(node_config)
+ self._build_target_config = None
+ self._env_vars = EnvVarsDict()
+ self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
+ self.__remote_dpdk_dir = None
+ self._dpdk_version = None
+ self._app_compile_timeout = 90
+
+ @property
+ def _remote_dpdk_dir(self) -> PurePath:
+ if self.__remote_dpdk_dir is None:
+ self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
+ return self.__remote_dpdk_dir
+
+ @_remote_dpdk_dir.setter
+ def _remote_dpdk_dir(self, value: PurePath) -> None:
+ self.__remote_dpdk_dir = value
+
+ @property
+ def remote_dpdk_build_dir(self) -> PurePath:
+ if self._build_target_config:
+ return self.main_session.join_remote_path(
+ self._remote_dpdk_dir, self._build_target_config.name
+ )
+ else:
+ return self.main_session.join_remote_path(self._remote_dpdk_dir, "build")
+
+ @property
+ def dpdk_version(self) -> str:
+ if self._dpdk_version is None:
+ self._dpdk_version = self.main_session.get_dpdk_version(
+ self._remote_dpdk_dir
+ )
+ return self._dpdk_version
+
+ def _guess_dpdk_remote_dir(self) -> PurePath:
+ return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Setup DPDK on the SUT node.
+ """
+ self._configure_build_target(build_target_config)
+ self._copy_dpdk_tarball()
+ self._build_dpdk()
+
+ def _configure_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Populate common environment variables and set build target config.
+ """
+ self._env_vars = EnvVarsDict()
+ self._build_target_config = build_target_config
+ self._env_vars.update(
+ self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
+ )
+ self._env_vars["CC"] = build_target_config.compiler.name
+ if build_target_config.compiler_wrapper:
+ self._env_vars["CC"] = (
+ f"'{build_target_config.compiler_wrapper} "
+ f"{build_target_config.compiler.name}'"
+ )
+
+ @skip_setup
+ def _copy_dpdk_tarball(self) -> None:
+ """
+ Copy to and extract DPDK tarball on the SUT node.
+ """
+ self._logger.info("Copying DPDK tarball to SUT.")
+ self.main_session.copy_file(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
+
+ # construct remote tarball path
+ # the basename is the same on local host and on remote Node
+ remote_tarball_path = self.main_session.join_remote_path(
+ self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_tarball_path)
+ )
+
+ # construct remote path after extracting
+ with tarfile.open(SETTINGS.dpdk_tarball_path) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+ self._remote_dpdk_dir = self.main_session.join_remote_path(
+ self._remote_tmp_dir, dpdk_top_dir
+ )
+
+ self._logger.info(
+ f"Extracting DPDK tarball on SUT: "
+ f"'{remote_tarball_path}' into '{self._remote_dpdk_dir}'."
+ )
+ # clean remote path where we're extracting
+ self.main_session.remove_remote_dir(self._remote_dpdk_dir)
+
+ # then extract to remote path
+ self.main_session.extract_remote_tarball(
+ remote_tarball_path, self._remote_dpdk_dir
+ )
+
+ @skip_setup
+ def _build_dpdk(self) -> None:
+ """
+ Build DPDK. Uses the already configured target. Assumes that the tarball has
+ already been copied to and extracted on the SUT node.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ )
+
+ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
+ """
+ Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
+ When app_name is 'all', build all example apps.
+ When app_name is any other string, tries to build that example app.
+ Return the directory path of the built app. If building all apps, return
+ the path to the examples directory (where all apps reside).
+ The meson_dpdk_args are keyword arguments
+ found in meson_option.txt in the root DPDK directory. Do not use -D with them,
+ for example: enable_kmods=True.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(examples=app_name, **meson_dpdk_args),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ rebuild=True,
+ timeout=self._app_compile_timeout,
+ )
+
+ if app_name == "all":
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples"
+ )
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
+ )
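A hypothetical usage sketch (not part of the patch; the SutNode instance, the
tarball version and the build target name below are assumed):

    # assuming an initialized SutNode with an x86_64-linux-cpu-gcc build target
    app_path = sut_node.build_dpdk_app("helloworld")
    # e.g. PurePosixPath('/tmp/dpdk-22.11/x86_64-linux-cpu-gcc/examples/dpdk-helloworld')

    examples_dir = sut_node.build_dpdk_app("all")  # builds every example app
    # e.g. PurePosixPath('/tmp/dpdk-22.11/x86_64-linux-cpu-gcc/examples')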
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index c28c8f1082..611071604b 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -1,9 +1,12 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
+from typing import Callable
+
+from .settings import SETTINGS
def check_dts_python_version() -> None:
@@ -22,9 +25,21 @@ def check_dts_python_version() -> None:
print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+def skip_setup(func) -> Callable[..., None]:
+ if SETTINGS.skip_setup:
+ return lambda *args: None
+ else:
+ return func
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+
+class EnvVarsDict(dict):
+ def __str__(self) -> str:
+ return " ".join(["=".join(item) for item in self.items()])
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 04/10] dts: add dpdk execution handling
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (2 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 05/10] dts: add node memory setup Juraj Linkeš
` (7 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
Add methods for setting up and shutting down DPDK apps and for
constructing EAL parameters.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 4 +
dts/framework/config/__init__.py | 8 +
dts/framework/config/conf_yaml_schema.json | 25 ++
dts/framework/remote_session/linux_session.py | 18 ++
dts/framework/remote_session/os_session.py | 23 +-
dts/framework/remote_session/posix_session.py | 83 ++++++
dts/framework/testbed_model/__init__.py | 8 +
dts/framework/testbed_model/dpdk.py | 45 ++++
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 253 ++++++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 ++
dts/framework/testbed_model/node.py | 46 ++++
dts/framework/testbed_model/sut_node.py | 82 +++++-
dts/framework/utils.py | 20 ++
14 files changed, 656 insertions(+), 2 deletions(-)
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 03696d2bab..1648e5c3c5 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,4 +13,8 @@ nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ arch: x86_64
os: linux
+ lcores: ""
+ use_first_core: false
+ memory_channels: 4
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ca61cb10fe..17b917f3b3 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -72,7 +72,11 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ arch: Architecture
os: OS
+ lcores: str
+ use_first_core: bool
+ memory_channels: int
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -81,7 +85,11 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ arch=Architecture(d["arch"]),
os=OS(d["os"]),
+ lcores=d.get("lcores", "1"),
+ use_first_core=d.get("use_first_core", False),
+ memory_channels=d.get("memory_channels", 1),
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 9170307fbe..334b4bd8ab 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,14 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64",
+ "arm64",
+ "ppc64le"
+ ]
+ },
"OS": {
"type": "string",
"enum": [
@@ -92,8 +100,24 @@
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
},
+ "arch": {
+ "$ref": "#/definitions/ARCH"
+ },
"os": {
"$ref": "#/definitions/OS"
+ },
+ "lcores": {
+ "type": "string",
+ "pattern": "^(([0-9]+|([0-9]+-[0-9]+))(,([0-9]+|([0-9]+-[0-9]+)))*)?$",
+ "description": "Optional comma-separated list of logical cores to use, e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all lcores."
+ },
+ "use_first_core": {
+ "type": "boolean",
+ "description": "Indicate whether DPDK should use the first physical core. It won't be used by default."
+ },
+ "memory_channels": {
+ "type": "integer",
+ "description": "How many memory channels to use. Optional, defaults to 1."
}
},
"additionalProperties": false,
@@ -101,6 +125,7 @@
"name",
"hostname",
"user",
+ "arch",
"os"
]
},
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 9d14166077..6809102038 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.testbed_model import LogicalCore
+
from .posix_session import PosixSession
@@ -9,3 +11,19 @@ class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
"""
+
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ cpu_info = self.remote_session.send_command(
+ "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#"
+ ).stdout
+ lcores = []
+ for cpu_line in cpu_info.splitlines():
+ lcore, core, socket, node = cpu_line.split(",")
+ if not use_first_core and core == "0" and socket == "0":
+ self._logger.info("Not using the first physical core.")
+ continue
+ lcores.append(LogicalCore(int(lcore), int(core), int(socket), int(node)))
+ return lcores
+
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ return dpdk_prefix
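A standalone sketch (not part of the patch) of the parsing get_remote_cpus()
performs, fed with assumed 'lscpu -p=CPU,CORE,SOCKET,NODE' output:

    cpu_info = "0,0,0,0\n1,1,0,0\n2,0,0,0\n3,1,0,0"  # assumed sample output
    lcores = []
    for cpu_line in cpu_info.splitlines():
        lcore, core, socket, node = cpu_line.split(",")
        if core == "0" and socket == "0":  # the use_first_core=False branch
            continue  # skip lcores belonging to the first physical core
        lcores.append((int(lcore), int(core), int(socket), int(node)))
    # lcores == [(1, 1, 0, 0), (3, 1, 0, 0)]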
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 06d1ffefdd..c30753e0b8 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -3,12 +3,13 @@
# Copyright(c) 2023 University of New Hampshire
from abc import ABC, abstractmethod
+from collections.abc import Iterable
from pathlib import PurePath
from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.settings import SETTINGS
-from framework.testbed_model import MesonArgs
+from framework.testbed_model import LogicalCore, MesonArgs
from framework.utils import EnvVarsDict
from .remote import RemoteSession, create_remote_session
@@ -130,3 +131,23 @@ def get_dpdk_version(self, version_path: str | PurePath) -> str:
"""
Inspect DPDK version on the remote node from version_path.
"""
+
+ @abstractmethod
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ """
+ Compose a list of LogicalCores present on the remote node.
+ If use_first_core is False, the first physical core won't be used.
+ """
+
+ @abstractmethod
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ """
+ Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
+ dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+ """
+
+ @abstractmethod
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ """
+ Get the DPDK file prefix that will be used when running DPDK apps.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index d4da9f114e..4c8474b804 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import re
+from collections.abc import Iterable
from pathlib import PurePath, PurePosixPath
from framework.config import Architecture
@@ -136,3 +138,84 @@ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
)
return out.stdout
+
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ self._logger.info("Cleaning up DPDK apps.")
+ dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
+ if dpdk_runtime_dirs:
+ # kill and cleanup only if DPDK is running
+ dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs)
+ for dpdk_pid in dpdk_pids:
+ self.remote_session.send_command(f"kill -9 {dpdk_pid}", 20)
+ self._check_dpdk_hugepages(dpdk_runtime_dirs)
+ self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
+
+ def _get_dpdk_runtime_dirs(
+ self, dpdk_prefix_list: Iterable[str]
+ ) -> list[PurePosixPath]:
+ prefix = PurePosixPath("/var", "run", "dpdk")
+ if not dpdk_prefix_list:
+ remote_prefixes = self._list_remote_dirs(prefix)
+ if not remote_prefixes:
+ dpdk_prefix_list = []
+ else:
+ dpdk_prefix_list = remote_prefixes
+
+ return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
+
+ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
+ """
+ Return a list of directories in remote_path.
+ If remote_path doesn't exist, return None.
+ """
+ out = self.remote_session.send_command(
+ f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+ ).stdout
+ if "No such file or directory" in out:
+ return None
+ else:
+ return out.splitlines()
+
+ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+ pids = []
+ pid_regex = r"p(\d+)"
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config")
+ if self._remote_files_exists(dpdk_config_file):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {dpdk_config_file}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ for out_line in out.splitlines():
+ match = re.match(pid_regex, out_line)
+ if match:
+ pids.append(int(match.group(1)))
+ return pids
+
+ def _remote_files_exists(self, remote_path: PurePath) -> bool:
+ result = self.remote_session.send_command(f"test -e {remote_path}")
+ return not result.return_code
+
+ def _check_dpdk_hugepages(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
+ if self._remote_files_exists(hugepage_info):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {hugepage_info}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ self._logger.warning("Some DPDK processes did not free hugepages.")
+ self._logger.warning("*******************************************")
+ self._logger.warning(out)
+ self._logger.warning("*******************************************")
+
+ def _remove_dpdk_runtime_dirs(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ self.remove_remote_dir(dpdk_runtime_dir)
+
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ return ""
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 96e2ab7c3f..2be5169dc8 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -10,5 +10,13 @@
# pylama:ignore=W0611
from .dpdk import MesonArgs
+from .hw import (
+ LogicalCore,
+ LogicalCoreAmount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ VirtualDevice,
+ lcore_filter,
+)
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/dpdk.py b/dts/framework/testbed_model/dpdk.py
index 0526974f72..9b3a9e7381 100644
--- a/dts/framework/testbed_model/dpdk.py
+++ b/dts/framework/testbed_model/dpdk.py
@@ -6,6 +6,8 @@
Various utilities used for configuring, building and running DPDK.
"""
+from .hw import LogicalCoreList, VirtualDevice
+
class MesonArgs(object):
"""
@@ -31,3 +33,46 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
def __str__(self) -> str:
return " ".join(f"{self.default_library} {self.dpdk_args}".split())
+
+
+class EalParameters(object):
+ def __init__(
+ self,
+ lcore_list: LogicalCoreList,
+ memory_channels: int,
+ prefix: str,
+ no_pci: bool,
+ vdevs: list[VirtualDevice],
+ other_eal_param: str,
+ ):
+ """
+ Generate an EAL parameter string.
+ :param lcore_list: the list of logical cores to use.
+ :param memory_channels: the number of memory channels to use.
+ :param prefix: set the file prefix string, e.g.:
+ prefix='vf'
+ :param no_pci: switch to disable the PCI bus, e.g.:
+ no_pci=True
+ :param vdevs: virtual device list, e.g.:
+ vdevs=['net_ring0', 'net_ring1']
+ :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+ other_eal_param='--single-file-segments'
+ """
+ self._lcore_list = f"-l {lcore_list}"
+ self._memory_channels = f"-n {memory_channels}"
+ self._prefix = prefix
+ if prefix:
+ self._prefix = f"--file-prefix={prefix}"
+ self._no_pci = "--no-pci" if no_pci else ""
+ self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs)
+ self._other_eal_param = other_eal_param
+
+ def __str__(self) -> str:
+ return (
+ f"{self._lcore_list} "
+ f"{self._memory_channels} "
+ f"{self._prefix} "
+ f"{self._no_pci} "
+ f"{self._vdevs} "
+ f"{self._other_eal_param}"
+ )
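An illustration (not part of the patch) of the string EalParameters renders;
empty optional parts leave extra spaces, which the shell ignores:

    params = EalParameters(
        lcore_list=LogicalCoreList([1, 2, 3]),
        memory_channels=4,
        prefix="dpdk_1112_20190809143420",
        no_pci=True,
        vdevs=[VirtualDevice("net_ring0")],
        other_eal_param="",
    )
    str(params)
    # '-l 1-3 -n 4 --file-prefix=dpdk_1112_20190809143420 --no-pci --vdev net_ring0 '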
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
new file mode 100644
index 0000000000..fb4cdac8e3
--- /dev/null
+++ b/dts/framework/testbed_model/hw/__init__.py
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from .cpu import (
+ LogicalCore,
+ LogicalCoreAmount,
+ LogicalCoreAmountFilter,
+ LogicalCoreFilter,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+)
+from .virtual_device import VirtualDevice
+
+
+def lcore_filter(
+ core_list: list[LogicalCore],
+ filter_specifier: LogicalCoreAmount | LogicalCoreList,
+ ascending: bool,
+) -> LogicalCoreFilter:
+ if isinstance(filter_specifier, LogicalCoreList):
+ return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+ elif isinstance(filter_specifier, LogicalCoreAmount):
+ return LogicalCoreAmountFilter(core_list, filter_specifier, ascending)
+ else:
+ raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py
new file mode 100644
index 0000000000..96c46ee8c5
--- /dev/null
+++ b/dts/framework/testbed_model/hw/cpu.py
@@ -0,0 +1,253 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+import dataclasses
+from abc import ABC, abstractmethod
+from collections.abc import Iterable
+from dataclasses import dataclass
+
+from framework.utils import expand_range
+
+
+@dataclass(slots=True, frozen=True)
+class LogicalCore(object):
+ """
+ Representation of a CPU core. A physical core is represented in OS
+ by multiple logical cores (lcores) if CPU multithreading is enabled.
+ """
+
+ lcore: int
+ core: int
+ socket: int
+ node: int
+
+ def __int__(self) -> int:
+ return self.lcore
+
+
+class LogicalCoreList(object):
+ """
+ Convert these options into a list of logical core ids.
+ lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
+ lcore_list=[0,1,2,3] - a list of int indices
+ lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
+ lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
+
+ The class creates a unified format used across the framework and allows
+ the user to use either a str representation (using str(instance) or directly
+ in f-strings) or a list representation (by accessing instance.lcore_list).
+ Empty lcore_list is allowed.
+ """
+
+ _lcore_list: list[int]
+ _lcore_str: str
+
+ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+ self._lcore_list = []
+ if isinstance(lcore_list, str):
+ lcore_list = lcore_list.split(",")
+ for lcore in lcore_list:
+ if isinstance(lcore, str):
+ self._lcore_list.extend(expand_range(lcore))
+ else:
+ self._lcore_list.append(int(lcore))
+
+ # the input lcores may not be sorted
+ self._lcore_list.sort()
+ self._lcore_str = (
+ f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+ )
+
+ @property
+ def lcore_list(self) -> list[int]:
+ return self._lcore_list
+
+ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
+ formatted_core_list = []
+ segment = lcore_ids_list[:1]
+ for lcore_id in lcore_ids_list[1:]:
+ if lcore_id - segment[-1] == 1:
+ segment.append(lcore_id)
+ else:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}"
+ if len(segment) > 1
+ else f"{segment[0]}"
+ )
+ current_core_index = lcore_ids_list.index(lcore_id)
+ formatted_core_list.extend(
+ self._get_consecutive_lcores_range(
+ lcore_ids_list[current_core_index:]
+ )
+ )
+ segment.clear()
+ break
+ if len(segment) > 0:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+ )
+ return formatted_core_list
+
+ def __str__(self) -> str:
+ return self._lcore_str
+
+
+@dataclasses.dataclass(slots=True, frozen=True)
+class LogicalCoreAmount(object):
+ """
+ Define the amount of logical cores to use.
+ If sockets is not None, socket_amount is ignored.
+ """
+
+ lcores_per_core: int = 1
+ cores_per_socket: int = 2
+ socket_amount: int = 1
+ sockets: list[int] | None = None
+
+
+class LogicalCoreFilter(ABC):
+ """
+ Filter according to the input filter specifier. Each filter needs to be
+ implemented in a derived class.
+ This class only implements operations common to all filters, such as sorting
+ the list to be filtered beforehand.
+ """
+
+ _filter_specifier: LogicalCoreAmount | LogicalCoreList
+ _lcores_to_filter: list[LogicalCore]
+
+ def __init__(
+ self,
+ lcore_list: list[LogicalCore],
+ filter_specifier: LogicalCoreAmount | LogicalCoreList,
+ ascending: bool = True,
+ ):
+ self._filter_specifier = filter_specifier
+
+ # sorting by core is needed in case hyperthreading is enabled
+ self._lcores_to_filter = sorted(
+ lcore_list, key=lambda x: x.core, reverse=not ascending
+ )
+ self.filter()
+
+ @abstractmethod
+ def filter(self) -> list[LogicalCore]:
+ """
+ Use self._filter_specifier to filter self._lcores_to_filter
+ and return the list of filtered LogicalCores.
+ self._lcores_to_filter is a sorted copy of the original list,
+ so it may be modified.
+ """
+
+
+class LogicalCoreAmountFilter(LogicalCoreFilter):
+ """
+ Filter the input list of LogicalCores according to specified rules:
+ Use cores from the specified amount of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_amount.
+ From each of those sockets, use only cores_per_socket of cores.
+ And for each core, use lcores_per_core of logical cores. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+
+ _filter_specifier: LogicalCoreAmount
+
+ def filter(self) -> list[LogicalCore]:
+ return self._filter_cores(self._filter_sockets(self._lcores_to_filter))
+
+ def _filter_sockets(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> list[LogicalCore]:
+ allowed_sockets: set[int] = set()
+ socket_amount = self._filter_specifier.socket_amount
+ if self._filter_specifier.sockets:
+ socket_amount = len(self._filter_specifier.sockets)
+ allowed_sockets = set(self._filter_specifier.sockets)
+
+ filtered_lcores = []
+ for lcore in lcores_to_filter:
+ if not self._filter_specifier.sockets:
+ if len(allowed_sockets) < socket_amount:
+ allowed_sockets.add(lcore.socket)
+ if lcore.socket in allowed_sockets:
+ filtered_lcores.append(lcore)
+
+ if len(allowed_sockets) < socket_amount:
+ raise ValueError(
+ f"The amount of sockets from which to use cores "
+ f"({socket_amount}) exceeds the actual amount present "
+ f"on the node ({len(allowed_sockets)})"
+ )
+
+ return filtered_lcores
+
+ def _filter_cores(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> list[LogicalCore]:
+ # no need to use an ordered dict; since Python 3.7, regular dicts
+ # preserve insertion order.
+ allowed_lcore_per_core_count_map: dict[int, int] = {}
+ filtered_lcores = []
+ for lcore in lcores_to_filter:
+ if lcore.core in allowed_lcore_per_core_count_map:
+ lcore_count = allowed_lcore_per_core_count_map[lcore.core]
+ if self._filter_specifier.lcores_per_core > lcore_count:
+ # only add lcores of the given core
+ allowed_lcore_per_core_count_map[lcore.core] += 1
+ filtered_lcores.append(lcore)
+ else:
+ raise ValueError(
+ f"The amount of logical cores per core to use "
+ f"({self._filter_specifier.lcores_per_core}) "
+ f"exceeds the actual amount present. Is hyperthreading enabled?"
+ )
+ elif self._filter_specifier.cores_per_socket > len(
+ allowed_lcore_per_core_count_map
+ ):
+ # only add lcores if we need more
+ allowed_lcore_per_core_count_map[lcore.core] = 1
+ filtered_lcores.append(lcore)
+ else:
+ # lcores are sorted by core, at this point we won't encounter new cores
+ break
+
+ cores_per_socket = len(allowed_lcore_per_core_count_map)
+ if cores_per_socket < self._filter_specifier.cores_per_socket:
+ raise ValueError(
+ f"The amount of cores per socket to use "
+ f"({self._filter_specifier.cores_per_socket}) "
+ f"exceeds the actual amount present ({cores_per_socket})"
+ )
+
+ return filtered_lcores
+
+
+class LogicalCoreListFilter(LogicalCoreFilter):
+ """
+ Filter the input list of Logical Cores according to the input list of
+ lcore indices.
+ An empty LogicalCoreList won't filter anything.
+ """
+
+ _filter_specifier: LogicalCoreList
+
+ def filter(self) -> list[LogicalCore]:
+ if not len(self._filter_specifier.lcore_list):
+ return self._lcores_to_filter
+
+ filtered_lcores = []
+ for core in self._lcores_to_filter:
+ if core.lcore in self._filter_specifier.lcore_list:
+ filtered_lcores.append(core)
+
+ if len(filtered_lcores) != len(self._filter_specifier.lcore_list):
+ raise ValueError(
+ f"Not all logical cores from {self._filter_specifier.lcore_list} "
+ f"were found among {self._lcores_to_filter}"
+ )
+
+ return filtered_lcores
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/hw/virtual_device.py
new file mode 100644
index 0000000000..eb664d9f17
--- /dev/null
+++ b/dts/framework/testbed_model/hw/virtual_device.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+
+class VirtualDevice(object):
+ """
+ Base class for virtual devices used by DPDK.
+ """
+
+ name: str
+
+ def __init__(self, name: str):
+ self.name = name
+
+ def __str__(self) -> str:
+ return self.name
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index a37f7921e0..cf2af2ca72 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -15,6 +15,14 @@
from framework.logger import DTSLOG, getLogger
from framework.remote_session import OSSession, create_session
+from .hw import (
+ LogicalCore,
+ LogicalCoreAmount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ lcore_filter,
+)
+
class Node(object):
"""
@@ -26,6 +34,7 @@ class Node(object):
main_session: OSSession
config: NodeConfiguration
name: str
+ lcores: list[LogicalCore]
_logger: DTSLOG
_other_sessions: list[OSSession]
@@ -35,6 +44,12 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._get_remote_cpus()
+ # filter the node lcores according to user config
+ self.lcores = LogicalCoreListFilter(
+ self.lcores, LogicalCoreList(self.config.lcores)
+ ).filter()
+
self._other_sessions = []
self._logger.info(f"Created node: {self.name}")
@@ -107,6 +122,37 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
+ def filter_lcores(
+ self,
+ filter_specifier: LogicalCoreAmount | LogicalCoreList,
+ ascending: bool = True,
+ ) -> list[LogicalCore]:
+ """
+ Filter the LogicalCores found on the Node according to specified rules:
+ Use cores from the specified amount of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_amount.
+ From each of those sockets, use only cores_per_socket of cores.
+ And for each core, use lcores_per_core of logical cores. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
+ return lcore_filter(
+ self.lcores,
+ filter_specifier,
+ ascending,
+ ).filter()
+
+ def _get_remote_cpus(self) -> None:
+ """
+ Scan CPUs in the remote OS and store a list of LogicalCores.
+ """
+ self._logger.info("Getting CPU information.")
+ self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+
def close(self) -> None:
"""
Close all connections and free other resources.
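Hypothetical usage of the filtering API (not part of the patch; the node
object and its core layout are assumed):

    # one lcore from each of two physical cores on one socket (the defaults)
    lcores = node.filter_lcores(LogicalCoreAmount())

    # exactly the lcores with ids 8 and 9; raises ValueError if any is missing
    lcores = node.filter_lcores(LogicalCoreList("8-9"))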
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 442a41bdc8..ff9d55f2d8 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -4,13 +4,16 @@
import os
import tarfile
+import time
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.remote_session import OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, skip_setup
-from .dpdk import MesonArgs
+from .dpdk import EalParameters, MesonArgs
+from .hw import LogicalCoreAmount, LogicalCoreList, VirtualDevice
from .node import Node
@@ -22,21 +25,29 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ _dpdk_prefix_list: list[str]
+ _dpdk_timestamp: str
_build_target_config: BuildTargetConfiguration | None
_env_vars: EnvVarsDict
_remote_tmp_dir: PurePath
__remote_dpdk_dir: PurePath | None
_dpdk_version: str | None
_app_compile_timeout: float
+ _dpdk_kill_session: OSSession | None
def __init__(self, node_config: NodeConfiguration):
super(SutNode, self).__init__(node_config)
+ self._dpdk_prefix_list = []
self._build_target_config = None
self._env_vars = EnvVarsDict()
self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
self.__remote_dpdk_dir = None
self._dpdk_version = None
self._app_compile_timeout = 90
+ self._dpdk_kill_session = None
+ self._dpdk_timestamp = (
+ f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
+ )
@property
def _remote_dpdk_dir(self) -> PurePath:
@@ -169,3 +180,72 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
return self.main_session.join_remote_path(
self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
)
+
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """
+ Kill all DPDK applications on the SUT. Clean up hugepages.
+ """
+ if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._dpdk_kill_session = self.create_session("dpdk_kill")
+ self._dpdk_prefix_list = []
+
+ def create_eal_parameters(
+ self,
+ lcore_filter_specifier: LogicalCoreAmount
+ | LogicalCoreList = LogicalCoreAmount(),
+ ascending_cores: bool = True,
+ prefix: str = "dpdk",
+ append_prefix_timestamp: bool = True,
+ no_pci: bool = False,
+ vdevs: list[VirtualDevice] | None = None,
+ other_eal_param: str = "",
+ ) -> EalParameters:
+ """
+ Generate an EAL parameter string.
+ :param lcore_filter_specifier: an amount of lcores/cores/sockets to use
+ or a list of lcore ids to use.
+ The default will select one lcore for each of two cores
+ on one socket, in ascending order of core ids.
+ :param ascending_cores: if True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the
+ highest id and continue in descending order. This ordering
+ affects which sockets to consider first as well.
+ :param prefix: set the file prefix string, e.g.:
+ prefix='vf'
+ :param append_prefix_timestamp: if True, append a timestamp to the
+ DPDK file prefix.
+ :param no_pci: switch to disable the PCI bus, e.g.:
+ no_pci=True
+ :param vdevs: virtual device list, e.g.:
+ vdevs=['net_ring0', 'net_ring1']
+ :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+ other_eal_param='--single-file-segments'
+ :return: the EAL parameter string, e.g.:
+ '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420'
+ """
+
+ lcore_list = LogicalCoreList(
+ self.filter_lcores(lcore_filter_specifier, ascending_cores)
+ )
+
+ if append_prefix_timestamp:
+ prefix = f"{prefix}_{self._dpdk_timestamp}"
+ prefix = self.main_session.get_dpdk_file_prefix(prefix)
+ if prefix:
+ self._dpdk_prefix_list.append(prefix)
+
+ if vdevs is None:
+ vdevs = []
+
+ return EalParameters(
+ lcore_list=lcore_list,
+ memory_channels=self.config.memory_channels,
+ prefix=prefix,
+ no_pci=no_pci,
+ vdevs=vdevs,
+ other_eal_param=other_eal_param,
+ )
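A hypothetical call (not part of the patch) tying the pieces together; the
returned object is interpolated into an app command line, with the pid and
timestamp elided here:

    eal = sut_node.create_eal_parameters(
        lcore_filter_specifier=LogicalCoreList("1-2"),
        prefix="hello",
        no_pci=True,
    )
    cmd = f"./dpdk-helloworld {eal}"
    # e.g. './dpdk-helloworld -l 1-2 -n 4 --file-prefix=hello_<pid>_<timestamp> --no-pci  '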
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 611071604b..eebe76f16c 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,6 +32,26 @@ def skip_setup(func) -> Callable[..., None]:
return func
+def expand_range(range_str: str) -> list[int]:
+ """
+ Process range string into a list of integers. There are two possible formats:
+ n - a single integer
+ n-m - a range of integers
+
+ The returned range includes both n and m. Empty string returns an empty list.
+ """
+ expanded_range: list[int] = []
+ if range_str:
+ range_boundaries = range_str.split("-")
+ # will throw an exception when items in range_boundaries can't be converted,
+ # serving as type check
+ expanded_range.extend(
+ range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+ )
+
+ return expanded_range
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
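The expand_range() helper added above behaves as follows (illustration, not
part of the patch):

    expand_range("1")    # [1]
    expand_range("2-4")  # [2, 3, 4]
    expand_range("")     # []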
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 05/10] dts: add node memory setup
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (3 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 04/10] dts: add dpdk execution handling Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 06/10] dts: add test suite module Juraj Linkeš
` (6 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
Set up hugepages on nodes. This is useful not only on SUT nodes, but
also on TG nodes running traffic generators that utilize hugepages.
The setup is opt-in, i.e. users need to supply hugepage configuration to
instruct DTS to configure them. If not configured, hugepage
configuration will be skipped. This is helpful if users don't want DTS
to tamper with hugepages on their system.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 3 +
dts/framework/config/__init__.py | 14 ++++
dts/framework/config/conf_yaml_schema.json | 21 +++++
dts/framework/remote_session/linux_session.py | 78 +++++++++++++++++++
dts/framework/remote_session/os_session.py | 8 ++
dts/framework/testbed_model/node.py | 12 +++
6 files changed, 136 insertions(+)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1648e5c3c5..32ec6e97a5 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -18,3 +18,6 @@ nodes:
lcores: ""
use_first_core: false
memory_channels: 4
+ hugepages:
+ amount: 256
+ force_first_numa: false
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 17b917f3b3..0e5f493c5d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -66,6 +66,12 @@ class Compiler(StrEnum):
#
# Frozen makes the object immutable. This enables further optimizations,
# and makes it thread safe should we every want to move in that direction.
+@dataclass(slots=True, frozen=True)
+class HugepageConfiguration:
+ amount: int
+ force_first_numa: bool
+
+
@dataclass(slots=True, frozen=True)
class NodeConfiguration:
name: str
@@ -77,9 +83,16 @@ class NodeConfiguration:
lcores: str
use_first_core: bool
memory_channels: int
+ hugepages: HugepageConfiguration | None
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
+ hugepage_config = d.get("hugepages")
+ if hugepage_config:
+ if "force_first_numa" not in hugepage_config:
+ hugepage_config["force_first_numa"] = False
+ hugepage_config = HugepageConfiguration(**hugepage_config)
+
return NodeConfiguration(
name=d["name"],
hostname=d["hostname"],
@@ -90,6 +103,7 @@ def from_dict(d: dict) -> "NodeConfiguration":
lcores=d.get("lcores", "1"),
use_first_core=d.get("use_first_core", False),
memory_channels=d.get("memory_channels", 1),
+ hugepages=hugepage_config,
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 334b4bd8ab..56f93def36 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -75,6 +75,24 @@
"cpu",
"compiler"
]
+ },
+ "hugepages": {
+ "type": "object",
+ "description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
+ "properties": {
+ "amount": {
+ "type": "integer",
+          "description": "The number of hugepages to configure. Hugepage size will be the system default."
+ },
+ "force_first_numa": {
+ "type": "boolean",
+ "description": "Set to True to force configuring hugepages on the first NUMA node. Defaults to False."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "amount"
+ ]
}
},
"type": "object",
@@ -118,6 +136,9 @@
"memory_channels": {
"type": "integer",
"description": "How many memory channels to use. Optional, defaults to 1."
+ },
+ "hugepages": {
+ "$ref": "#/definitions/hugepages"
}
},
"additionalProperties": false,
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 6809102038..5257d5e381 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,7 +2,9 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.utils import expand_range
from .posix_session import PosixSession
@@ -27,3 +29,79 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
return dpdk_prefix
+
+ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+ self._logger.info("Getting Hugepage information.")
+ hugepage_size = self._get_hugepage_size()
+ hugepages_total = self._get_hugepages_total()
+ self._numa_nodes = self._get_numa_nodes()
+
+ if force_first_numa or hugepages_total != hugepage_amount:
+ # when forcing numa, we need to clear existing hugepages regardless
+ # of size, so they can be moved to the first numa node
+ self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+ else:
+ self._logger.info("Hugepages already configured.")
+ self._mount_huge_pages()
+
+ def _get_hugepage_size(self) -> int:
+ hugepage_size = self.remote_session.send_command(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
+ ).stdout
+ return int(hugepage_size)
+
+ def _get_hugepages_total(self) -> int:
+ hugepages_total = self.remote_session.send_command(
+ "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ ).stdout
+ return int(hugepages_total)
+
+ def _get_numa_nodes(self) -> list[int]:
+ try:
+ numa_count = self.remote_session.send_command(
+ "cat /sys/devices/system/node/online", verify=True
+ ).stdout
+ numa_range = expand_range(numa_count)
+ except RemoteCommandExecutionError:
+ # the file doesn't exist, meaning the node doesn't support numa
+ numa_range = []
+ return numa_range
+
+ def _mount_huge_pages(self) -> None:
+ self._logger.info("Re-mounting Hugepages.")
+        hugepage_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts"
+        self.remote_session.send_command(f"umount $({hugepage_fs_cmd})")
+        result = self.remote_session.send_command(hugepage_fs_cmd)
+ if result.stdout == "":
+ remote_mount_path = "/mnt/huge"
+ self.remote_session.send_command(f"mkdir -p {remote_mount_path}")
+ self.remote_session.send_command(
+ f"mount -t hugetlbfs nodev {remote_mount_path}"
+ )
+
+ def _supports_numa(self) -> bool:
+        # consider the system numa-capable only if there is more than one numa
+        # node; a single-node system may technically support numa, but there's
+        # no reason to do any numa-specific configuration on it
+ return len(self._numa_nodes) > 1
+
+ def _configure_huge_pages(
+ self, amount: int, size: int, force_first_numa: bool
+ ) -> None:
+ self._logger.info("Configuring Hugepages.")
+ hugepage_config_path = (
+ f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+ )
+ if force_first_numa and self._supports_numa():
+ # clear non-numa hugepages
+ self.remote_session.send_command(
+ f"echo 0 | sudo tee {hugepage_config_path}"
+ )
+ hugepage_config_path = (
+ f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
+ f"/hugepages-{size}kB/nr_hugepages"
+ )
+
+ self.remote_session.send_command(
+ f"echo {amount} | sudo tee {hugepage_config_path}"
+ )
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index c30753e0b8..4753c70e91 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -151,3 +151,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
"""
Get the DPDK file prefix that will be used when running DPDK apps.
"""
+
+ @abstractmethod
+ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+ """
+        Get the node's hugepage size, configure the specified amount of hugepages
+        if needed and mount the hugepages if needed.
+        If force_first_numa is True, configure hugepages just on the first NUMA node.
+ """
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index cf2af2ca72..6c9e976947 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -59,6 +59,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
Perform the execution setup that will be done for each execution
this node is part of.
"""
+ self._setup_hugepages()
self._set_up_execution(execution_config)
def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
@@ -153,6 +154,17 @@ def _get_remote_cpus(self) -> None:
self._logger.info("Getting CPU information.")
self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+ def _setup_hugepages(self):
+ """
+        Set up hugepages on the node. Different architectures can supply different
+ amounts of memory for hugepages and numa-based hugepage allocation may need
+ to be considered.
+ """
+ if self.config.hugepages:
+ self.main_session.setup_hugepages(
+ self.config.hugepages.amount, self.config.hugepages.force_first_numa
+ )
+
def close(self) -> None:
"""
Close all connections and free other resources.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 06/10] dts: add test suite module
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (4 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 05/10] dts: add node memory setup Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 07/10] dts: add hello world testsuite Juraj Linkeš
` (5 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
The module implements the base class that all test suites inherit from,
along with methods common to all test suites.
Derived test suites implement the test cases and any setup specific to
the suite or its tests.
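As an illustration, a hypothetical derived suite might look like this
(TestExample and test_basic are made-up names; the hello world suite
added later in this series is the real reference):

    from framework.test_suite import TestSuite

    class TestExample(TestSuite):
        def set_up_suite(self) -> None:
            """Prepare fixtures shared by all test cases in the suite."""

        def test_basic(self) -> None:
            # collected automatically thanks to the test_ prefix;
            # verify() raises TestCaseVerifyError when the condition fails
            self.verify(1 + 1 == 2, "arithmetic is broken")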
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 +
dts/framework/config/__init__.py | 4 +
dts/framework/config/conf_yaml_schema.json | 10 +
dts/framework/exception.py | 16 ++
dts/framework/settings.py | 24 +++
dts/framework/test_suite.py | 228 +++++++++++++++++++++
6 files changed, 284 insertions(+)
create mode 100644 dts/framework/test_suite.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 32ec6e97a5..b606aa0df1 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -8,6 +8,8 @@ executions:
cpu: native
compiler: gcc
compiler_wrapper: ccache
+ perf: false
+ func: true
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 0e5f493c5d..544fceca6a 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -131,6 +131,8 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
+ perf: bool
+ func: bool
system_under_test: NodeConfiguration
@staticmethod
@@ -143,6 +145,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
return ExecutionConfiguration(
build_targets=build_targets,
+ perf=d["perf"],
+ func=d["func"],
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 56f93def36..878ca3aec2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -164,6 +164,14 @@
},
"minimum": 1
},
+ "perf": {
+ "type": "boolean",
+ "description": "Enable performance testing."
+ },
+ "func": {
+ "type": "boolean",
+ "description": "Enable functional testing."
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -171,6 +179,8 @@
"additionalProperties": false,
"required": [
"build_targets",
+ "perf",
+ "func",
"system_under_test"
]
},
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b4545a5a40..ca353d98fc 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -24,6 +24,7 @@ class ErrorSeverity(IntEnum):
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
DPDK_BUILD_ERR = 10
+ TESTCASE_VERIFY_ERR = 20
class DTSError(Exception):
@@ -128,3 +129,18 @@ class DPDKBuildError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
+
+
+class TestCaseVerifyError(DTSError):
+ """
+ Used in test cases to verify the expected behavior.
+ """
+
+ value: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
+
+ def __init__(self, value: str):
+ self.value = value
+
+ def __str__(self) -> str:
+ return repr(self.value)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index f787187ade..4ccc98537d 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -66,6 +66,8 @@ class _Settings:
skip_setup: bool
dpdk_tarball_path: Path
compile_timeout: float
+ test_cases: list
+ re_run: int
def _get_parser() -> argparse.ArgumentParser:
@@ -137,6 +139,26 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
)
+ parser.add_argument(
+ "--test-cases",
+ action=_env_arg("DTS_TESTCASES"),
+ default="",
+ required=False,
+ help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+ "Unknown test cases will be silently ignored.",
+ )
+
+ parser.add_argument(
+ "--re-run",
+ "--re_run",
+ action=_env_arg("DTS_RERUN"),
+ default=0,
+ type=int,
+ required=False,
+        help="[DTS_RERUN] Re-run each test case the specified number of times "
+ "if a test failure occurs",
+ )
+
return parser
@@ -156,6 +178,8 @@ def _get_settings() -> _Settings:
skip_setup=(parsed_args.skip_setup == "Y"),
dpdk_tarball_path=parsed_args.tarball,
compile_timeout=parsed_args.compile_timeout,
+ test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+ re_run=parsed_args.re_run,
)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
new file mode 100644
index 0000000000..0972a70c14
--- /dev/null
+++ b/dts/framework/test_suite.py
@@ -0,0 +1,228 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Base class for creating DTS test cases.
+"""
+
+import inspect
+import re
+from collections.abc import MutableSequence
+from types import MethodType
+
+from .exception import SSHTimeoutError, TestCaseVerifyError
+from .logger import DTSLOG, getLogger
+from .settings import SETTINGS
+from .testbed_model import SutNode
+
+
+class TestSuite(object):
+ """
+    The base TestSuite class provides methods for handling the basic flow of a test suite:
+ * test case filtering and collection
+ * test suite setup/cleanup
+ * test setup/cleanup
+ * test case execution
+ * error handling and results storage
+ Test cases are implemented by derived classes. Test cases are all methods
+ starting with test_, further divided into performance test cases
+ (starting with test_perf_) and functional test cases (all other test cases).
+    By default, all test cases will be executed. A list of test case names
+ may be specified in conf.yaml or on the command line
+ to filter which test cases to run.
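+    For example (illustrative names), a suite defining test_scatter and
+    test_perf_scatter, run with --test-cases test_scatter, executes only
+    the functional test case.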
+ The methods named [set_up|tear_down]_[suite|test_case] should be overridden
+ in derived classes if the appropriate suite/test case fixtures are needed.
+ """
+
+ sut_node: SutNode
+ _logger: DTSLOG
+ _test_cases_to_run: list[str]
+ _func: bool
+ _errors: MutableSequence[Exception]
+
+ def __init__(
+ self,
+ sut_node: SutNode,
+ test_cases: list[str],
+ func: bool,
+ errors: MutableSequence[Exception],
+ ):
+ self.sut_node = sut_node
+ self._logger = getLogger(self.__class__.__name__)
+ self._test_cases_to_run = test_cases
+ self._test_cases_to_run.extend(SETTINGS.test_cases)
+ self._func = func
+ self._errors = errors
+
+ def set_up_suite(self) -> None:
+ """
+ Set up test fixtures common to all test cases; this is done before
+ any test case is run.
+ """
+
+ def tear_down_suite(self) -> None:
+ """
+ Tear down the previously created test fixtures common to all test cases.
+ """
+
+ def set_up_test_case(self) -> None:
+ """
+ Set up test fixtures before each test case.
+ """
+
+ def tear_down_test_case(self) -> None:
+ """
+ Tear down the previously created test fixtures after each test case.
+ """
+
+ def verify(self, condition: bool, failure_description: str) -> None:
+ if not condition:
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on SUT:"
+ )
+ for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ raise TestCaseVerifyError(failure_description)
+
+ def run(self) -> None:
+ """
+        Set up, execute and tear down the whole suite.
+        Suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+ """
+ test_suite_name = self.__class__.__name__
+
+ try:
+ self._logger.info(f"Starting test suite setup: {test_suite_name}")
+ self.set_up_suite()
+ self._logger.info(f"Test suite setup successful: {test_suite_name}")
+ except Exception as e:
+ self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
+ self._errors.append(e)
+
+ else:
+ self._execute_test_suite()
+
+ finally:
+ try:
+ self.tear_down_suite()
+ self.sut_node.kill_cleanup_dpdk_apps()
+ except Exception as e:
+ self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
+ self._logger.warning(
+ f"Test suite '{test_suite_name}' teardown failed, "
+ f"the next test suite may be affected."
+ )
+ self._errors.append(e)
+
+ def _execute_test_suite(self) -> None:
+ """
+ Execute all test cases scheduled to be executed in this suite.
+ """
+ if self._func:
+ for test_case_method in self._get_functional_test_cases():
+ all_attempts = SETTINGS.re_run + 1
+ attempt_nr = 1
+ while (
+ not self._run_test_case(test_case_method)
+                    and attempt_nr < all_attempts
+ ):
+ attempt_nr += 1
+ self._logger.info(
+ f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Attempt number {attempt_nr} out of {all_attempts}."
+ )
+
+ def _get_functional_test_cases(self) -> list[MethodType]:
+ """
+ Get all functional test cases.
+ """
+ return self._get_test_cases(r"test_(?!perf_)")
+
+ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
+ """
+ Return a list of test cases matching test_case_regex.
+ """
+ self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
+ filtered_test_cases = []
+ for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod):
+ if self._should_be_executed(test_case_name, test_case_regex):
+ filtered_test_cases.append(test_case)
+ cases_str = ", ".join((x.__name__ for x in filtered_test_cases))
+ self._logger.debug(
+ f"Found test cases '{cases_str}' in {self.__class__.__name__}."
+ )
+ return filtered_test_cases
+
+ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
+ """
+ Check whether the test case should be executed.
+ """
+ match = bool(re.match(test_case_regex, test_case_name))
+ if self._test_cases_to_run:
+ return match and test_case_name in self._test_cases_to_run
+
+ return match
+
+ def _run_test_case(self, test_case_method: MethodType) -> bool:
+ """
+        Set up, execute and tear down a test case in this suite.
+ Exceptions are caught and recorded in logs.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+
+ try:
+ # run set_up function for each case
+ self.set_up_test_case()
+ except SSHTimeoutError as e:
+ self._logger.exception(f"Test case setup FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case setup ERROR: {test_case_name}")
+ self._errors.append(e)
+
+ else:
+ # run test case if setup was successful
+ result = self._execute_test_case(test_case_method)
+
+ finally:
+ try:
+ self.tear_down_test_case()
+ except Exception as e:
+ self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
+ self._logger.warning(
+ f"Test case '{test_case_name}' teardown failed, "
+ f"the next test case may be affected."
+ )
+ self._errors.append(e)
+ result = False
+
+ return result
+
+ def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Execute one test case and handle failures.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+ try:
+ self._logger.info(f"Starting test case execution: {test_case_name}")
+ test_case_method()
+ result = True
+ self._logger.info(f"Test case execution PASSED: {test_case_name}")
+
+ except TestCaseVerifyError as e:
+ self._logger.exception(f"Test case execution FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case execution ERROR: {test_case_name}")
+ self._errors.append(e)
+ except KeyboardInterrupt:
+ self._logger.error(
+ f"Test case execution INTERRUPTED by user: {test_case_name}"
+ )
+ raise KeyboardInterrupt("Stop DTS")
+
+ return result
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 07/10] dts: add hello world testsuite
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (5 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 06/10] dts: add test suite module Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 08/10] dts: add test suite config and runner Juraj Linkeš
` (4 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
The test suite implements test cases defined in the corresponding test
plan.
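The verification boils down to string matching on the app's output; a
standalone sketch of the pattern (the stdout contents are made up):

    stdout = "hello from core 1\nhello from core 0"
    for lcore in (0, 1):
        assert f"hello from core {lcore}" in stdout, f"lcore {lcore} missing"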
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 2 +-
dts/framework/remote_session/os_session.py | 16 ++++-
.../remote_session/remote/__init__.py | 2 +-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/sut_node.py | 12 +++-
dts/tests/TestSuite_hello_world.py | 64 +++++++++++++++++++
6 files changed, 93 insertions(+), 4 deletions(-)
create mode 100644 dts/tests/TestSuite_hello_world.py
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 747316c78a..ee221503df 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,7 @@
from .linux_session import LinuxSession
from .os_session import OSSession
-from .remote import RemoteSession, SSHSession
+from .remote import CommandResult, RemoteSession, SSHSession
def create_session(
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 4753c70e91..2731a7ca0a 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,7 +12,7 @@
from framework.testbed_model import LogicalCore, MesonArgs
from framework.utils import EnvVarsDict
-from .remote import RemoteSession, create_remote_session
+from .remote import CommandResult, RemoteSession, create_remote_session
class OSSession(ABC):
@@ -50,6 +50,20 @@ def is_alive(self) -> bool:
"""
return self.remote_session.is_alive()
+ def send_command(
+ self,
+ command: str,
+ timeout: float,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
+ ) -> CommandResult:
+ """
+ An all-purpose API in case the command to be executed is already
+ OS-agnostic, such as when the path to the executed command has been
+ constructed beforehand.
+ """
+ return self.remote_session.send_command(command, timeout, verify, env)
+
@abstractmethod
def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
"""
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index f3092f8bbe..8a1512210a 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -6,7 +6,7 @@
from framework.config import NodeConfiguration
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
from .ssh_session import SSHSession
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 2be5169dc8..efb463f2e2 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -13,6 +13,7 @@
from .hw import (
LogicalCore,
LogicalCoreAmount,
+ LogicalCoreAmountFilter,
LogicalCoreList,
LogicalCoreListFilter,
VirtualDevice,
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index ff9d55f2d8..1bf90a448f 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -8,7 +8,7 @@
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
-from framework.remote_session import OSSession
+from framework.remote_session import CommandResult, OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, skip_setup
@@ -249,3 +249,13 @@ def create_eal_parameters(
vdevs=vdevs,
other_eal_param=other_eal_param,
)
+
+ def run_dpdk_app(
+ self, app_path: PurePath, eal_args: EalParameters, timeout: float = 30
+ ) -> CommandResult:
+ """
+ Run DPDK application on the remote node.
+ """
+ return self.main_session.send_command(
+ f"{app_path} {eal_args}", timeout, verify=True
+ )
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..dcdf637a2a
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+
+"""
+Run the helloworld example app and verify it prints a message for each used core.
+No other EAL parameters apart from cores are used.
+"""
+
+from framework.test_suite import TestSuite
+from framework.testbed_model import (
+ LogicalCoreAmount,
+ LogicalCoreAmountFilter,
+ LogicalCoreList,
+)
+
+
+class TestHelloWorld(TestSuite):
+ def set_up_suite(self) -> None:
+ """
+ Setup:
+ Build the app we're about to test - helloworld.
+ """
+ self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
+
+ def test_hello_world_single_core(self) -> None:
+ """
+ Steps:
+ Run the helloworld app on the first usable logical core.
+ Verify:
+ The app prints a message from the used core:
+ "hello from core <core_id>"
+ """
+
+ # get the first usable core
+ lcore_amount = LogicalCoreAmount(1, 1, 1)
+ lcores = LogicalCoreAmountFilter(self.sut_node.lcores, lcore_amount).filter()
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=lcore_amount
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
+ self.verify(
+ f"hello from core {int(lcores[0])}" in result.stdout,
+ f"helloworld didn't start on lcore{lcores[0]}",
+ )
+
+ def test_hello_world_all_cores(self) -> None:
+ """
+ Steps:
+ Run the helloworld app on all usable logical cores.
+ Verify:
+ The app prints a message from all used cores:
+ "hello from core <core_id>"
+ """
+
+        # run the app on all available logical cores
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para, 50)
+ for lcore in self.sut_node.lcores:
+ self.verify(
+ f"hello from core {int(lcore)}" in result.stdout,
+ f"helloworld didn't start on lcore{lcore}",
+ )
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 08/10] dts: add test suite config and runner
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (6 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 07/10] dts: add hello world testsuite Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 09/10] dts: add test results module Juraj Linkeš
` (3 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
The config allows users to specify which test suites and test cases
within test suites to run.
Also add test suite running capabilities to the DTS runner.
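Both accepted forms of a test_suites entry map to the same dataclass; a
sketch using the TestSuiteConfig added below:

    from framework.config import TestSuiteConfig

    # plain string: run the whole suite
    TestSuiteConfig.from_dict("hello_world")
    # mapping: run only the listed test cases
    TestSuiteConfig.from_dict(
        {"suite": "hello_world", "cases": ["test_hello_world_single_core"]}
    )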
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 ++
dts/framework/config/__init__.py | 29 +++++++++++++++-
dts/framework/config/conf_yaml_schema.json | 40 ++++++++++++++++++++++
dts/framework/dts.py | 19 ++++++++++
dts/framework/test_suite.py | 24 ++++++++++++-
5 files changed, 112 insertions(+), 2 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index b606aa0df1..70989be528 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -10,6 +10,8 @@ executions:
compiler_wrapper: ccache
perf: false
func: true
+ test_suites:
+ - hello_world
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 544fceca6a..ebb0823ff5 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any
+from typing import Any, TypedDict
import warlock # type: ignore
import yaml
@@ -128,11 +128,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
)
+class TestSuiteConfigDict(TypedDict):
+ suite: str
+ cases: list[str]
+
+
+@dataclass(slots=True, frozen=True)
+class TestSuiteConfig:
+ test_suite: str
+ test_cases: list[str]
+
+ @staticmethod
+ def from_dict(
+ entry: str | TestSuiteConfigDict,
+ ) -> "TestSuiteConfig":
+ if isinstance(entry, str):
+ return TestSuiteConfig(test_suite=entry, test_cases=[])
+ elif isinstance(entry, dict):
+ return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
+ else:
+ raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
perf: bool
func: bool
+ test_suites: list[TestSuiteConfig]
system_under_test: NodeConfiguration
@staticmethod
@@ -140,6 +163,9 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
+ test_suites: list[TestSuiteConfig] = list(
+ map(TestSuiteConfig.from_dict, d["test_suites"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
@@ -147,6 +173,7 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets=build_targets,
perf=d["perf"],
func=d["func"],
+ test_suites=test_suites,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 878ca3aec2..ca2d4a1ef2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -93,6 +93,32 @@
"required": [
"amount"
]
+ },
+ "test_suite": {
+ "type": "string",
+ "enum": [
+ "hello_world"
+ ]
+ },
+ "test_target": {
+ "type": "object",
+ "properties": {
+ "suite": {
+ "$ref": "#/definitions/test_suite"
+ },
+ "cases": {
+ "type": "array",
+ "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.",
+ "items": {
+ "type": "string"
+ },
+          "minItems": 1
+ }
+ },
+ "required": [
+ "suite"
+ ],
+ "additionalProperties": false
}
},
"type": "object",
@@ -172,6 +198,19 @@
"type": "boolean",
"description": "Enable functional testing."
},
+ "test_suites": {
+ "type": "array",
+ "items": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/test_suite"
+ },
+ {
+ "$ref": "#/definitions/test_target"
+ }
+ ]
+ }
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -181,6 +220,7 @@
"build_targets",
"perf",
"func",
+ "test_suites",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 6ea7c6e736..f98000450f 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -8,6 +8,7 @@
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
@@ -132,6 +133,24 @@ def _run_suites(
with possibly only a subset of test cases.
If no subset is specified, run all test cases.
"""
+ for test_suite_config in execution.test_suites:
+ try:
+ full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
+ test_suite_classes = get_test_suites(full_suite_path)
+ suites_str = ", ".join((x.__name__ for x in test_suite_classes))
+ dts_logger.debug(
+ f"Found test suites '{suites_str}' in '{full_suite_path}'."
+ )
+ except Exception as e:
+ dts_logger.exception("An error occurred when searching for test suites.")
+ errors.append(e)
+
+ else:
+ for test_suite_class in test_suite_classes:
+ test_suite = test_suite_class(
+ sut_node, test_suite_config.test_cases, execution.func, errors
+ )
+ test_suite.run()
def _exit_dts() -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 0972a70c14..57118f7c14 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -6,12 +6,13 @@
Base class for creating DTS test cases.
"""
+import importlib
import inspect
import re
from collections.abc import MutableSequence
from types import MethodType
-from .exception import SSHTimeoutError, TestCaseVerifyError
+from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .testbed_model import SutNode
@@ -226,3 +227,24 @@ def _execute_test_case(self, test_case_method: MethodType) -> bool:
raise KeyboardInterrupt("Stop DTS")
return result
+
+
+def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+ def is_test_suite(object) -> bool:
+ try:
+ if issubclass(object, TestSuite) and object is not TestSuite:
+ return True
+ except TypeError:
+ return False
+ return False
+
+ try:
+ testcase_module = importlib.import_module(testsuite_module_path)
+ except ModuleNotFoundError as e:
+ raise ConfigurationError(
+ f"Test suite '{testsuite_module_path}' not found."
+ ) from e
+ return [
+ test_suite_class
+ for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite)
+ ]
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 09/10] dts: add test results module
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (7 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 08/10] dts: add test suite config and runner Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
` (2 subsequent siblings)
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
The module stores the results and errors from all executions, build
targets, test suites and test cases.
Each result consists of the result of the setup and the teardown of the
corresponding testing stage (listed above) and the results of the inner
stages. The innermost stage is the test case, which also contains the
result of the test case itself.
The module also produces a brief overview of the results and the
number of executed tests.
It also determines, from among the stored errors, the proper return
code for DTS to exit with.
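For example, the Statistics helper added below accumulates per-result
counts and a pass rate; a minimal sketch (the version string is made up):

    from framework.test_result import Result, Statistics

    stats = Statistics(dpdk_version="23.03-rc1")
    stats += Result.PASS
    stats += Result.FAIL
    print(stats)  # PASS = 1, FAIL = 1, ..., PASS RATE = 50.0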
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 64 +++----
dts/framework/settings.py | 2 -
dts/framework/test_result.py | 316 +++++++++++++++++++++++++++++++++++
dts/framework/test_suite.py | 60 +++----
4 files changed, 382 insertions(+), 60 deletions(-)
create mode 100644 dts/framework/test_result.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f98000450f..117b7cae83 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,14 +6,14 @@
import sys
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
-from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("dts_runner")
-errors = []
+result: DTSResult = DTSResult(dts_logger)
def run_all() -> None:
@@ -22,7 +22,7 @@ def run_all() -> None:
config file.
"""
global dts_logger
- global errors
+ global result
# check the python version of the server that run dts
check_dts_python_version()
@@ -39,29 +39,31 @@ def run_all() -> None:
# the SUT has not been initialized yet
try:
sut_node = SutNode(execution.system_under_test)
+ result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception(
f"Connection to node {execution.system_under_test} failed."
)
- errors.append(e)
+ result.update_setup(Result.FAIL, e)
else:
nodes[sut_node.name] = sut_node
if sut_node:
- _run_execution(sut_node, execution)
+ _run_execution(sut_node, execution, result)
except Exception as e:
dts_logger.exception("An unexpected error has occurred.")
- errors.append(e)
+ result.add_error(e)
raise
finally:
try:
for node in nodes.values():
node.close()
+ result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Final cleanup of nodes failed.")
- errors.append(e)
+ result.update_teardown(Result.ERROR, e)
# we need to put the sys.exit call outside the finally clause to make sure
# that unexpected exceptions will propagate
@@ -72,61 +74,72 @@ def run_all() -> None:
_exit_dts()
-def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+def _run_execution(
+ sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+) -> None:
"""
Run the given execution. This involves running the execution setup as well as
running all build targets in the given execution.
"""
dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ execution_result = result.add_execution(sut_node.config)
try:
sut_node.set_up_execution(execution)
+ execution_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Execution setup failed.")
- errors.append(e)
+ execution_result.update_setup(Result.FAIL, e)
else:
for build_target in execution.build_targets:
- _run_build_target(sut_node, build_target, execution)
+ _run_build_target(sut_node, build_target, execution, execution_result)
finally:
try:
sut_node.tear_down_execution()
+ execution_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Execution teardown failed.")
- errors.append(e)
+ execution_result.update_teardown(Result.FAIL, e)
def _run_build_target(
sut_node: SutNode,
build_target: BuildTargetConfiguration,
execution: ExecutionConfiguration,
+ execution_result: ExecutionResult,
) -> None:
"""
Run the given build target.
"""
dts_logger.info(f"Running build target '{build_target.name}'.")
+ build_target_result = execution_result.add_build_target(build_target)
try:
sut_node.set_up_build_target(build_target)
+ result.dpdk_version = sut_node.dpdk_version
+ build_target_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Build target setup failed.")
- errors.append(e)
+ build_target_result.update_setup(Result.FAIL, e)
else:
- _run_suites(sut_node, execution)
+ _run_suites(sut_node, execution, build_target_result)
finally:
try:
sut_node.tear_down_build_target()
+ build_target_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Build target teardown failed.")
- errors.append(e)
+ build_target_result.update_teardown(Result.FAIL, e)
def _run_suites(
sut_node: SutNode,
execution: ExecutionConfiguration,
+ build_target_result: BuildTargetResult,
) -> None:
"""
Use the given build_target to run execution's test suites
@@ -143,12 +156,15 @@ def _run_suites(
)
except Exception as e:
dts_logger.exception("An error occurred when searching for test suites.")
- errors.append(e)
+ result.update_setup(Result.ERROR, e)
else:
for test_suite_class in test_suite_classes:
test_suite = test_suite_class(
- sut_node, test_suite_config.test_cases, execution.func, errors
+ sut_node,
+ test_suite_config.test_cases,
+ execution.func,
+ build_target_result,
)
test_suite.run()
@@ -157,20 +173,8 @@ def _exit_dts() -> None:
"""
Process all errors and exit with the proper exit code.
"""
- if errors and dts_logger:
- dts_logger.debug("Summary of errors:")
- for error in errors:
- dts_logger.debug(repr(error))
-
- return_code = ErrorSeverity.NO_ERR
- for error in errors:
- error_return_code = ErrorSeverity.GENERIC_ERR
- if isinstance(error, DTSError):
- error_return_code = error.severity
-
- if error_return_code > return_code:
- return_code = error_return_code
+ result.process()
if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(return_code)
+ sys.exit(result.get_return_code())
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 4ccc98537d..71955f4581 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -143,7 +143,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--test-cases",
action=_env_arg("DTS_TESTCASES"),
default="",
- required=False,
help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
"Unknown test cases will be silently ignored.",
)
@@ -154,7 +153,6 @@ def _get_parser() -> argparse.ArgumentParser:
action=_env_arg("DTS_RERUN"),
default=0,
type=int,
- required=False,
         help="[DTS_RERUN] Re-run each test case the specified number of times "
"if a test failure occurs",
)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
new file mode 100644
index 0000000000..743919820c
--- /dev/null
+++ b/dts/framework/test_result.py
@@ -0,0 +1,316 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Generic result container and reporters
+"""
+
+import os.path
+from collections.abc import MutableSequence
+from enum import Enum, auto
+
+from .config import (
+ OS,
+ Architecture,
+ BuildTargetConfiguration,
+ Compiler,
+ CPUType,
+ NodeConfiguration,
+)
+from .exception import DTSError, ErrorSeverity
+from .logger import DTSLOG
+from .settings import SETTINGS
+
+
+class Result(Enum):
+ """
+ An Enum defining the possible states that
+ a setup, a teardown or a test case may end up in.
+ """
+
+ PASS = auto()
+ FAIL = auto()
+ ERROR = auto()
+ SKIP = auto()
+
+ def __bool__(self) -> bool:
+ return self is self.PASS
+
+
+class FixtureResult(object):
+ """
+    A record that stores the result of a setup or a teardown.
+ The default is FAIL because immediately after creating the object
+ the setup of the corresponding stage will be executed, which also guarantees
+ the execution of teardown.
+ """
+
+ result: Result
+ error: Exception | None = None
+
+ def __init__(
+ self,
+ result: Result = Result.FAIL,
+ error: Exception | None = None,
+ ):
+ self.result = result
+ self.error = error
+
+ def __bool__(self) -> bool:
+ return bool(self.result)
+
+
+class Statistics(dict):
+ """
+    A helper class used to store the number of test cases by their result
+    along with a few other basic pieces of information.
+ Using a dict provides a convenient way to format the data.
+ """
+
+ def __init__(self, dpdk_version):
+ super(Statistics, self).__init__()
+ for result in Result:
+ self[result.name] = 0
+ self["PASS RATE"] = 0.0
+ self["DPDK VERSION"] = dpdk_version
+
+ def __iadd__(self, other: Result) -> "Statistics":
+ """
+ Add a Result to the final count.
+ """
+ self[other.name] += 1
+ self["PASS RATE"] = (
+ float(self[Result.PASS.name])
+ * 100
+ / sum(self[result.name] for result in Result)
+ )
+ return self
+
+ def __str__(self) -> str:
+ """
+ Provide a string representation of the data.
+ """
+ stats_str = ""
+ for key, value in self.items():
+ stats_str += f"{key:<12} = {value}\n"
+ # according to docs, we should use \n when writing to text files
+ # on all platforms
+ return stats_str
+
+
+class BaseResult(object):
+ """
+ The Base class for all results. Stores the results of
+ the setup and teardown portions of the corresponding stage
+ and a list of results from each inner stage in _inner_results.
+ """
+
+ setup_result: FixtureResult
+ teardown_result: FixtureResult
+ _inner_results: MutableSequence["BaseResult"]
+
+ def __init__(self):
+ self.setup_result = FixtureResult()
+ self.teardown_result = FixtureResult()
+ self._inner_results = []
+
+ def update_setup(self, result: Result, error: Exception | None = None) -> None:
+ self.setup_result.result = result
+ self.setup_result.error = error
+
+ def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+ self.teardown_result.result = result
+ self.teardown_result.error = error
+
+ def _get_setup_teardown_errors(self) -> list[Exception]:
+ errors = []
+ if self.setup_result.error:
+ errors.append(self.setup_result.error)
+ if self.teardown_result.error:
+ errors.append(self.teardown_result.error)
+ return errors
+
+ def _get_inner_errors(self) -> list[Exception]:
+ return [
+ error
+ for inner_result in self._inner_results
+ for error in inner_result.get_errors()
+ ]
+
+ def get_errors(self) -> list[Exception]:
+ return self._get_setup_teardown_errors() + self._get_inner_errors()
+
+ def add_stats(self, statistics: Statistics) -> None:
+ for inner_result in self._inner_results:
+ inner_result.add_stats(statistics)
+
+
+class TestCaseResult(BaseResult, FixtureResult):
+ """
+ The test case specific result.
+ Stores the result of the actual test case.
+ Also stores the test case name.
+ """
+
+ test_case_name: str
+
+ def __init__(self, test_case_name: str):
+ super(TestCaseResult, self).__init__()
+ self.test_case_name = test_case_name
+
+ def update(self, result: Result, error: Exception | None = None) -> None:
+ self.result = result
+ self.error = error
+
+ def _get_inner_errors(self) -> list[Exception]:
+ if self.error:
+ return [self.error]
+ return []
+
+ def add_stats(self, statistics: Statistics) -> None:
+ statistics += self.result
+
+ def __bool__(self) -> bool:
+ return (
+ bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
+ )
+
+
+class TestSuiteResult(BaseResult):
+ """
+ The test suite specific result.
+ The _inner_results list stores results of test cases in a given test suite.
+ Also stores the test suite name.
+ """
+
+ suite_name: str
+
+ def __init__(self, suite_name: str):
+ super(TestSuiteResult, self).__init__()
+ self.suite_name = suite_name
+
+ def add_test_case(self, test_case_name: str) -> TestCaseResult:
+ test_case_result = TestCaseResult(test_case_name)
+ self._inner_results.append(test_case_result)
+ return test_case_result
+
+
+class BuildTargetResult(BaseResult):
+ """
+ The build target specific result.
+ The _inner_results list stores results of test suites in a given build target.
+ Also stores build target specifics, such as compiler used to build DPDK.
+ """
+
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+
+ def __init__(self, build_target: BuildTargetConfiguration):
+ super(BuildTargetResult, self).__init__()
+ self.arch = build_target.arch
+ self.os = build_target.os
+ self.cpu = build_target.cpu
+ self.compiler = build_target.compiler
+
+ def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+ test_suite_result = TestSuiteResult(test_suite_name)
+ self._inner_results.append(test_suite_result)
+ return test_suite_result
+
+
+class ExecutionResult(BaseResult):
+ """
+ The execution specific result.
+ The _inner_results list stores results of build targets in a given execution.
+ Also stores the SUT node configuration.
+ """
+
+ sut_node: NodeConfiguration
+
+ def __init__(self, sut_node: NodeConfiguration):
+ super(ExecutionResult, self).__init__()
+ self.sut_node = sut_node
+
+ def add_build_target(
+ self, build_target: BuildTargetConfiguration
+ ) -> BuildTargetResult:
+ build_target_result = BuildTargetResult(build_target)
+ self._inner_results.append(build_target_result)
+ return build_target_result
+
+
+class DTSResult(BaseResult):
+ """
+ Stores environment information and test results from a DTS run, which are:
+ * Execution level information, such as SUT and TG hardware.
+ * Build target level information, such as compiler, target OS and cpu.
+ * Test suite results.
+ * All errors that are caught and recorded during DTS execution.
+
+ The information is stored in nested objects.
+
+ The class is capable of computing the return code used to exit DTS with
+    from the stored errors.
+
+ It also provides a brief statistical summary of passed/failed test cases.
+ """
+
+ dpdk_version: str | None
+ _logger: DTSLOG
+ _errors: list[Exception]
+ _return_code: ErrorSeverity
+ _stats_result: Statistics | None
+ _stats_filename: str
+
+ def __init__(self, logger: DTSLOG):
+ super(DTSResult, self).__init__()
+ self.dpdk_version = None
+ self._logger = logger
+ self._errors = []
+ self._return_code = ErrorSeverity.NO_ERR
+ self._stats_result = None
+ self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
+
+ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+ execution_result = ExecutionResult(sut_node)
+ self._inner_results.append(execution_result)
+ return execution_result
+
+ def add_error(self, error) -> None:
+ self._errors.append(error)
+
+ def process(self) -> None:
+ """
+ Process the data after a DTS run.
+ The data is added to nested objects during runtime and this parent object
+ is not updated at that time. This requires us to process the nested data
+ after it's all been gathered.
+
+ The processing gathers all errors and the result statistics of test cases.
+ """
+ self._errors += self.get_errors()
+ if self._errors and self._logger:
+ self._logger.debug("Summary of errors:")
+ for error in self._errors:
+ self._logger.debug(repr(error))
+
+ self._stats_result = Statistics(self.dpdk_version)
+ self.add_stats(self._stats_result)
+ with open(self._stats_filename, "w+") as stats_file:
+ stats_file.write(str(self._stats_result))
+
+ def get_return_code(self) -> int:
+ """
+ Go through all stored Exceptions and return the highest error code found.
+ """
+ for error in self._errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > self._return_code:
+ self._return_code = error_return_code
+
+ return int(self._return_code)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 57118f7c14..1d57fbeb21 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -9,12 +9,12 @@
import importlib
import inspect
import re
-from collections.abc import MutableSequence
from types import MethodType
from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
+from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
from .testbed_model import SutNode
@@ -40,21 +40,21 @@ class TestSuite(object):
_logger: DTSLOG
_test_cases_to_run: list[str]
_func: bool
- _errors: MutableSequence[Exception]
+ _result: TestSuiteResult
def __init__(
self,
sut_node: SutNode,
test_cases: list[str],
func: bool,
- errors: MutableSequence[Exception],
+ build_target_result: BuildTargetResult,
):
self.sut_node = sut_node
self._logger = getLogger(self.__class__.__name__)
self._test_cases_to_run = test_cases
self._test_cases_to_run.extend(SETTINGS.test_cases)
self._func = func
- self._errors = errors
+ self._result = build_target_result.add_test_suite(self.__class__.__name__)
def set_up_suite(self) -> None:
"""
@@ -97,10 +97,11 @@ def run(self) -> None:
try:
self._logger.info(f"Starting test suite setup: {test_suite_name}")
self.set_up_suite()
+ self._result.update_setup(Result.PASS)
self._logger.info(f"Test suite setup successful: {test_suite_name}")
except Exception as e:
self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- self._errors.append(e)
+ self._result.update_setup(Result.ERROR, e)
else:
self._execute_test_suite()
@@ -109,13 +110,14 @@ def run(self) -> None:
try:
self.tear_down_suite()
self.sut_node.kill_cleanup_dpdk_apps()
+ self._result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
self._logger.warning(
f"Test suite '{test_suite_name}' teardown failed, "
f"the next test suite may be affected."
)
- self._errors.append(e)
+            self._result.update_teardown(Result.ERROR, e)
def _execute_test_suite(self) -> None:
"""
@@ -123,17 +125,18 @@ def _execute_test_suite(self) -> None:
"""
if self._func:
for test_case_method in self._get_functional_test_cases():
+ test_case_name = test_case_method.__name__
+ test_case_result = self._result.add_test_case(test_case_name)
all_attempts = SETTINGS.re_run + 1
attempt_nr = 1
- while (
- not self._run_test_case(test_case_method)
-                    and attempt_nr < all_attempts
- ):
+ self._run_test_case(test_case_method, test_case_result)
+                while not test_case_result and attempt_nr < all_attempts:
attempt_nr += 1
self._logger.info(
- f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Re-running FAILED test case '{test_case_name}'. "
f"Attempt number {attempt_nr} out of {all_attempts}."
)
+ self._run_test_case(test_case_method, test_case_result)
def _get_functional_test_cases(self) -> list[MethodType]:
"""
@@ -166,68 +169,69 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
return match
- def _run_test_case(self, test_case_method: MethodType) -> bool:
+ def _run_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
         Set up, execute and tear down a test case in this suite.
- Exceptions are caught and recorded in logs.
+ Exceptions are caught and recorded in logs and results.
"""
test_case_name = test_case_method.__name__
- result = False
try:
# run set_up function for each case
self.set_up_test_case()
+ test_case_result.update_setup(Result.PASS)
except SSHTimeoutError as e:
self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.ERROR, e)
else:
# run test case if setup was successful
- result = self._execute_test_case(test_case_method)
+ self._execute_test_case(test_case_method, test_case_result)
finally:
try:
self.tear_down_test_case()
+ test_case_result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
self._logger.warning(
f"Test case '{test_case_name}' teardown failed, "
f"the next test case may be affected."
)
- self._errors.append(e)
- result = False
+ test_case_result.update_teardown(Result.ERROR, e)
+ test_case_result.update(Result.ERROR)
- return result
-
- def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ def _execute_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Execute one test case and handle failures.
"""
test_case_name = test_case_method.__name__
- result = False
try:
self._logger.info(f"Starting test case execution: {test_case_name}")
test_case_method()
- result = True
+ test_case_result.update(Result.PASS)
self._logger.info(f"Test case execution PASSED: {test_case_name}")
except TestCaseVerifyError as e:
self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.ERROR, e)
except KeyboardInterrupt:
self._logger.error(
f"Test case execution INTERRUPTED by user: {test_case_name}"
)
+ test_case_result.update(Result.SKIP)
raise KeyboardInterrupt("Stop DTS")
- return result
-
def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
def is_test_suite(object) -> bool:
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v4 10/10] doc: update DTS setup and test suite cookbook
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (8 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 09/10] dts: add test results module Juraj Linkeš
@ 2023-02-13 15:28 ` Juraj Linkeš
2023-02-17 17:26 ` [PATCH v4 00/10] dts: add hello world testcase Bruce Richardson
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
11 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-13 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
bruce.richardson, wathsala.vithanage, probb
Cc: dev, Juraj Linkeš
Document how to configure and run DTS.
Also add documentation related to new features: SUT setup and a brief
test suite implementation cookbook.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
doc/guides/tools/dts.rst | 145 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 144 insertions(+), 1 deletion(-)
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index daf54359ed..5484451a49 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -56,7 +56,7 @@ DTS runtime environment or just plain DTS environment are used interchangeably.
Setting up DTS environment
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
#. **Python Version**
@@ -93,6 +93,149 @@ Setting up DTS environment
poetry install
poetry shell
+#. **SSH Connection**
+
+   DTS uses Python pexpect for SSH connections between the DTS environment and the other hosts.
+   The pexpect implementation is a wrapper around the ssh command in the DTS environment,
+   which means it will use the SSH agent and keys available to that ssh command.
+
+
+Setting up System Under Test
+----------------------------
+
+There are two areas that need to be set up on a System Under Test:
+
+#. **DPDK dependencies**
+
+ DPDK will be built and run on the SUT.
+ Consult the Getting Started guides for the list of dependencies for each distribution.
+
+#. **Hardware dependencies**
+
+ Any hardware that DPDK uses needs a proper driver
+ and most OS distributions provide those, but the version may not be satisfactory.
+ It's up to each user to install the driver they're interested in testing.
+   The hardware may also need firmware upgrades, which are likewise left to the user's discretion.
+
+
+Running DTS
+-----------
+
+DTS needs to know which nodes to connect to and what hardware to use on those nodes.
+Once that's configured, DTS needs a DPDK tarball and it's ready to run.
+
+Configuring DTS
+~~~~~~~~~~~~~~~
+
+DTS configuration is split into nodes and executions, with build targets within executions:
+
+ .. literalinclude:: ../../../dts/conf.yaml
+ :language: yaml
+ :start-at: executions:
+
+
+The various fields are mostly self-explanatory
+and documented in more detail in ``dts/framework/config/conf_yaml_schema.json``.
+
+DTS Execution
+~~~~~~~~~~~~~
+
+DTS is run with ``main.py`` located in the ``dts`` directory after entering Poetry shell::
+
+ usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT]
+ [-v VERBOSE] [-s SKIP_SETUP] [--tarball TARBALL]
+ [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES]
+ [--re-run RE_RUN]
+
+ Run DPDK test suites. All options may be specified with the environment variables provided in
+ brackets. Command line arguments have higher priority.
+
+ options:
+ -h, --help show this help message and exit
+ --config-file CONFIG_FILE
+ [DTS_CFG_FILE] configuration file that describes the test cases, SUTs
+ and targets. (default: conf.yaml)
+ --output-dir OUTPUT_DIR, --output OUTPUT_DIR
+ [DTS_OUTPUT_DIR] Output directory where dts logs and results are
+ saved. (default: output)
+ -t TIMEOUT, --timeout TIMEOUT
+ [DTS_TIMEOUT] The default timeout for all DTS operations except for
+ compiling DPDK. (default: 15)
+ -v VERBOSE, --verbose VERBOSE
+ [DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all
+ messages to the console. (default: N)
+ -s SKIP_SETUP, --skip-setup SKIP_SETUP
+ [DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG
+ nodes. (default: N)
+ --tarball TARBALL, --snapshot TARBALL
+ [DTS_DPDK_TARBALL] Path to DPDK source code tarball which will be
+ used in testing. (default: dpdk.tar.xz)
+ --compile-timeout COMPILE_TIMEOUT
+ [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. (default: 1200)
+ --test-cases TEST_CASES
+ [DTS_TESTCASES] Comma-separated list of test cases to execute.
+ Unknown test cases will be silently ignored. (default: )
+ --re-run RE_RUN, --re_run RE_RUN
+ [DTS_RERUN] Re-run each test case the specified amount of times if a
+ test failure occurs (default: 0)
+
+
+The brackets contain the names of environment variables that set the same thing.
+The minimum DTS needs is a config file and a DPDK tarball.
+You may pass those to DTS using the command line arguments or use the default paths.
+
+
+DTS Results
+~~~~~~~~~~~
+
+Results are stored in the output dir by default
+which can be changed with the ``--output-dir`` command line argument.
+The results contain basic statistics of passed/failed test cases and DPDK version.
+
+
+How To Write a Test Suite
+-------------------------
+
+All test suites inherit from ``TestSuite`` defined in ``dts/framework/test_suite.py``.
+There are four types of methods that comprise a test suite:
+
+#. **Test cases**
+
+ | Test cases are methods that start with a particular prefix.
+ | Functional test cases start with ``test_``, e.g. ``test_hello_world_single_core``.
+ | Performance test cases start with ``test_perf_``, e.g. ``test_perf_nic_single_core``.
+ | A test suite may have any number of functional and/or performance test cases.
+ However, these test cases must test the same feature,
+ following the rule of one feature = one test suite.
+ Test cases for one feature don't need to be grouped in just one test suite, though.
+ If the feature requires many testing scenarios to cover,
+ the test cases would be better off spread over multiple test suites
+ so that each test suite doesn't take too long to execute.
+
+#. **Setup and Teardown methods**
+
+ | There are setup and teardown methods for the whole test suite and each individual test case.
+ | Methods ``set_up_suite`` and ``tear_down_suite`` will be executed
+ before any and after all test cases have been executed, respectively.
+ | Methods ``set_up_test_case`` and ``tear_down_test_case`` will be executed
+ before and after each test case, respectively.
+ | These methods don't need to be implemented if there's no need for them in a test suite.
+ In that case, nothing will happen when they're executed.
+
+#. **Test case verification**
+
+ Test case verification should be done with the ``verify`` method, which records the result.
+ The method should be called at the end of each test case.
+
+#. **Other methods**
+
+ Of course, all test suite code should adhere to coding standards.
+ Only the above methods will be treated specially and any other methods may be defined
+ (which should be mostly private methods needed by each particular test suite).
+ Any specific features (such as NIC configuration) required by a test suite
+ should be implemented in the ``SutNode`` class (and the underlying classes that ``SutNode`` uses)
+ and used by the test suite via the ``sut_node`` field.
+
DTS Developer Tools
-------------------
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
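To make the cookbook above concrete, here is a minimal sketch of a test
suite following those conventions. The method names (``set_up_suite``,
``tear_down_test_case``, the ``test_`` prefix) and the ``verify`` call come
from the documentation in this patch; the exact ``verify`` signature, the
example feature and the private helper are assumptions for illustration:

    # dts/tests/TestSuite_example.py (hypothetical file)
    from framework.test_suite import TestSuite

    class TestExample(TestSuite):
        def set_up_suite(self) -> None:
            # Runs once before any test case, e.g. prepare the SUT.
            pass

        def tear_down_test_case(self) -> None:
            # Runs after each test case; undo any per-case changes.
            pass

        def test_example_feature(self) -> None:
            # Functional test cases must start with the test_ prefix.
            output = self._fake_app_output()
            self.verify(
                "expected" in output,
                "App did not produce the expected output.",
            )

        def _fake_app_output(self) -> str:
            # Private helper; methods without the special prefixes are
            # not treated as test cases.
            return "expected"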
* Re: [PATCH v4 00/10] dts: add hello world testcase
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (9 preceding siblings ...)
2023-02-13 15:28 ` [PATCH v4 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
@ 2023-02-17 17:26 ` Bruce Richardson
2023-02-20 10:13 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
11 siblings, 1 reply; 97+ messages in thread
From: Bruce Richardson @ 2023-02-17 17:26 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Mon, Feb 13, 2023 at 04:28:36PM +0100, Juraj Linkeš wrote:
> Add code needed to run the HelloWorld testcase which just runs the hello
> world dpdk application.
>
> The patchset currently heavily refactors this original DTS code needed
> to run the testcase:
> * The whole architecture has been redone into more sensible class
> hierarchy
> * DPDK build on the System under Test
> * DPDK eal args construction, app running and shutting down
> * Optional SUT hugepage memory configuration
> * Test runner
> * Test results
> * TestSuite class
> * Test runner parts interfacing with TestSuite
> * The HelloWorld testsuite itself
>
> The code is divided into sub-packages, some of which are divided
> further.
>
> There patch may need to be divided into smaller chunks. If so, proposals
> on where exactly to split it would be very helpful.
>
> v3:
> Finished refactoring everything in this patch, with test suite and test
> results being the last parts.
> Also changed the directory structure. It's now simplified and the
> imports look much better.
> I've also many many minor changes such as renaming variables here and
> there.
>
> v4:
> Made hugepage config optional, users may now specify that in the main
> config file.
> Removed HelloWorld test plan and incorporated parts of it into the test
> suite python file.
> Updated documentation.
>
Hi,
just trying this out by reading the docs and trying to follow along. Couple
of high-level comments thus far without getting into the patches:
* In the "configuring DTS" section, I think it would be good to:
- say that the config file should be named conf.yaml by default. It's in
the next section, but I think it should be called out earlier.
- say that there is a template conf.yaml file in the dts directory
already. On my first reading I actually thought that the sample config
file was dts/framework/config/conf_yaml_schema.json (and I was going
to comment on the name being weird! :-)). Only when I opened it did I
realise my mistake. Therefore, downplay the schema, and put more
emphasis on where to find the simple conf example to start with.
- if hugepage config is now optional, as you say above, remove that from
the sample and the docs.
* The code thus far seems to imply that you are always going to use root.
When I configured it to log on to bruce@localhost, it timed out waiting
for a prompt, I believe because it was looking for "#" which is the
default only for root prompts.
* When running as root, things progressed further but I hit an error when
DTS was trying to get the CPU config. No idea what is happening here,
because running the same commands manually over ssh seemed to work fine.
Below is the error. Any hints as to what the problem is would be appreciated.
Thanks,
/Bruce
$ ./main.py --tarball ~/Downloads/dpdk-22.11.1.tar.xz -v Y
2023/02/17 16:59:57 - SUT 1 - INFO - Connecting to root@localhost.
2023/02/17 16:59:58 - SUT 1 - INFO - Connection to root@localhost successful.
2023/02/17 16:59:58 - SUT 1 - INFO - Getting CPU information.
2023/02/17 16:59:58 - SUT 1 - INFO - Sending: 'lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \#'
2023/02/17 16:59:59 - dts_runner - ERROR - Connection to node NodeConfiguration(name='SUT 1', hostname='localhost', user='root', password=None, arch=<Architecture.x86_64: 'x86_64'>, os=<OS.linux: 'linux'>, lcores='3,4', use_first_core=False, memory_channels=8, hugepages=HugepageConfiguration(amount=256, force_first_numa=False)) failed.
Traceback (most recent call last):
File "/home/bruce/dpdk.org/dts/framework/dts.py", line 41, in run_all
sut_node = SutNode(execution.system_under_test)
File "/home/bruce/dpdk.org/dts/framework/testbed_model/sut_node.py", line 39, in __init__
super(SutNode, self).__init__(node_config)
File "/home/bruce/dpdk.org/dts/framework/testbed_model/node.py", line 47, in __init__
self._get_remote_cpus()
File "/home/bruce/dpdk.org/dts/framework/testbed_model/node.py", line 155, in _get_remote_cpus
self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
File "/home/bruce/dpdk.org/dts/framework/remote_session/linux_session.py", line 18, in get_remote_cpus
cpu_info = self.remote_session.send_command(
File "/home/bruce/dpdk.org/dts/framework/remote_session/remote/remote_session.py", line 103, in send_command
result = self._send_command(command, timeout, env)
File "/home/bruce/dpdk.org/dts/framework/remote_session/remote/ssh_session.py", line 172, in _send_command
return_code = int(self._send_command_get_output("echo $?", timeout, None))
ValueError: invalid literal for int() with base 10: '\x1b[?2004l\r\r\n0'
2023/02/17 16:59:59 - dts_runner - DEBUG - Summary of errors:
2023/02/17 16:59:59 - dts_runner - DEBUG - ValueError("invalid literal for int() with base 10: '\\x1b[?2004l\\r\\r\\n0'")
2023/02/17 16:59:59 - dts_runner - INFO - DTS execution has ended.
^ permalink raw reply [flat|nested] 97+ messages in thread
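The prompt timeout described above is characteristic of expect-style prompt
matching. A minimal standalone pexpect sketch (not the DTS code) showing
why a non-root login hangs when only the root prompt is expected; it
assumes key-based authentication, so there is no password prompt handling:

    import pexpect

    # Spawn ssh and wait for a shell prompt. If the expected pattern is
    # the root prompt ("#"), a non-root login (prompt usually ending in
    # "$") never matches and pexpect eventually raises pexpect.TIMEOUT.
    child = pexpect.spawn("ssh root@localhost")
    child.expect("# ", timeout=15)
    child.sendline("echo Hello World")
    child.expect("# ", timeout=15)
    print(child.before.decode())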
* Re: [PATCH v4 01/10] dts: add node and os abstractions
2023-02-13 15:28 ` [PATCH v4 01/10] dts: add node and os abstractions Juraj Linkeš
@ 2023-02-17 17:44 ` Bruce Richardson
2023-02-20 13:24 ` Juraj Linkeš
0 siblings, 1 reply; 97+ messages in thread
From: Bruce Richardson @ 2023-02-17 17:44 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Mon, Feb 13, 2023 at 04:28:37PM +0100, Juraj Linkeš wrote:
> The abstraction model in DTS is as follows:
> Node, defining and implementing methods common to and the base of SUT
> (system under test) Node and TG (traffic generator) Node.
> Remote Session, defining and implementing methods common to any remote
> session implementation, such as SSH Session.
> OSSession, defining and implementing methods common to any operating
> system/distribution, such as Linux.
>
> OSSession uses a derived Remote Session and Node in turn uses a derived
> OSSession. This split delegates OS-specific and connection-specific code
> to specialized classes designed to handle the differences.
>
> The base classes implement the methods or parts of methods that are
> common to all implementations and defines abstract methods that must be
> implemented by derived classes.
>
> Part of the abstractions is the DTS test execution skeleton:
> execution setup, build setup and then test execution.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/conf.yaml | 11 +-
> dts/framework/config/__init__.py | 73 +++++++-
> dts/framework/config/conf_yaml_schema.json | 76 +++++++-
> dts/framework/dts.py | 162 ++++++++++++++----
> dts/framework/exception.py | 46 ++++-
> dts/framework/logger.py | 24 +--
> dts/framework/remote_session/__init__.py | 30 +++-
> dts/framework/remote_session/linux_session.py | 11 ++
> dts/framework/remote_session/os_session.py | 46 +++++
> dts/framework/remote_session/posix_session.py | 12 ++
> .../remote_session/remote/__init__.py | 16 ++
> .../{ => remote}/remote_session.py | 41 +++--
> .../{ => remote}/ssh_session.py | 20 +--
> dts/framework/settings.py | 15 +-
> dts/framework/testbed_model/__init__.py | 10 +-
> dts/framework/testbed_model/node.py | 104 ++++++++---
> dts/framework/testbed_model/sut_node.py | 13 ++
> 17 files changed, 591 insertions(+), 119 deletions(-)
> create mode 100644 dts/framework/remote_session/linux_session.py
> create mode 100644 dts/framework/remote_session/os_session.py
> create mode 100644 dts/framework/remote_session/posix_session.py
> create mode 100644 dts/framework/remote_session/remote/__init__.py
> rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
> rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
> create mode 100644 dts/framework/testbed_model/sut_node.py
>
<snip>
> +
> +def _exit_dts() -> None:
> + """
> + Process all errors and exit with the proper exit code.
> + """
> + if errors and dts_logger:
> + dts_logger.debug("Summary of errors:")
> + for error in errors:
> + dts_logger.debug(repr(error))
This is nice to have at the end. However, what I think is a definite
nicer-to-have here is the actual commands which produced the errors. In my
case, the summary just reports:
2023/02/17 16:59:59 - dts_runner - DEBUG - ValueError("invalid literal for int() with base 10: '\\x1b[?2004l\\r\\r\\n0'")
What is really missing, and what I had to hunt for, was the command that produced the
invalid literal. Perhaps that's the responsibility of the error message to
include the details, but either way it would be great to get!
/Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
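One way to get the failing command into the summary is to wrap command
execution so that any raised error carries the command along. A minimal
sketch; ``send_command`` is the method name from the traceback above, while
the wrapper and the exception class are illustrative assumptions:

    class RemoteCommandError(Exception):
        """Carries the remote command together with the underlying error."""

        def __init__(self, command: str, cause: Exception) -> None:
            super().__init__(f"Command {command!r} failed: {cause!r}")
            self.command = command
            self.cause = cause

    def send_command_checked(session, command: str) -> str:
        # Wrap the session's send_command so any raised error carries
        # the offending command; the error summary can then print it.
        try:
            return session.send_command(command)
        except Exception as e:
            raise RemoteCommandError(command, e) from e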
* Re: [PATCH v4 00/10] dts: add hello world testcase
2023-02-17 17:26 ` [PATCH v4 00/10] dts: add hello world testcase Bruce Richardson
@ 2023-02-20 10:13 ` Juraj Linkeš
2023-02-20 11:56 ` Bruce Richardson
2023-02-22 16:39 ` Bruce Richardson
0 siblings, 2 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-20 10:13 UTC (permalink / raw)
To: Bruce Richardson
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
Thanks for the comments, Bruce.
On Fri, Feb 17, 2023 at 6:26 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Feb 13, 2023 at 04:28:36PM +0100, Juraj Linkeš wrote:
> > Add code needed to run the HelloWorld testcase which just runs the hello
> > world dpdk application.
> >
> > The patchset currently heavily refactors this original DTS code needed
> > to run the testcase:
> > * The whole architecture has been redone into more sensible class
> > hierarchy
> > * DPDK build on the System under Test
> > * DPDK eal args construction, app running and shutting down
> > * Optional SUT hugepage memory configuration
> > * Test runner
> > * Test results
> > * TestSuite class
> > * Test runner parts interfacing with TestSuite
> > * The HelloWorld testsuite itself
> >
> > The code is divided into sub-packages, some of which are divided
> > further.
> >
> > There patch may need to be divided into smaller chunks. If so, proposals
> > on where exactly to split it would be very helpful.
> >
> > v3:
> > Finished refactoring everything in this patch, with test suite and test
> > results being the last parts.
> > Also changed the directory structure. It's now simplified and the
> > imports look much better.
> > I've also many many minor changes such as renaming variables here and
> > there.
> >
> > v4:
> > Made hugepage config optional, users may now specify that in the main
> > config file.
> > Removed HelloWorld test plan and incorporated parts of it into the test
> > suite python file.
> > Updated documentation.
> >
> Hi,
>
> just trying this out by reading the docs and trying to follow along. Couple
> of high-level comments thus far without getting into the patches:
>
> * In the "configuring DTS" section, I think it would be good to:
> - say that the config file should be named conf.yaml by default. It's in
> the next section, but I think it should be called out earlier.
> - say that there is a template conf.yaml file in the dts directory
> already. On my first reading I actually thought that the sample config
> file was dts/framework/config/conf_yaml_schema.json (and I was going
> to comment on the name being weird! :-)). Only when I opened it did I
> realise my mistake. Therefore, downplan the schema, and put more
> emphasis on where to find the simple conf example to start with.
Good points, I'll rewrite that a bit.
> - if hugepage config is now optional, as you say above, remove that from
> the sample and the docs.
>
The optional part is just that users may choose whether DTS configures
the hugepages or not; hugepages still must be configured (if not by
DTS, then beforehand). I'll document this a bit
more, but I'd like to leave it in the sample config with a note saying
it's optional.
> * The code thus far seems to imply that you are always going to use root.
> When I configured it to log on to bruce@localhost, it timed out waiting
> for a prompt, I believe because it was looking for "#" which is the
> default only for root prompts.
>
True, I'll add this to docs. This will not be a requirement in the
future though - we want to do passwordless sudo.
> * When running as root, things progressed further but I hit an error when
> DTS was trying to get the CPU config. No idea what is happening here,
> because running the same commands manually over ssh seemed to work fine.
> Below is the error. Any hints as to what is the problem appreciated.
>
I remember running into the same issue as well. I think it's related
to the bracketed paste feature of some terminal emulators:
https://askubuntu.com/questions/662222/why-bracketed-paste-mode-is-enabled-sporadically-in-my-terminal-screen
Please try disabling it and see whether that helps.
I haven't gone to great lengths to harden this part of SSH
implementation as we'll be moving to Fabric (from pexpect) after this
patch (which uses a mature Python SSH implementation instead of
expect).
> Thanks,
> /Bruce
>
> $ ./main.py --tarball ~/Downloads/dpdk-22.11.1.tar.xz -v Y
> 2023/02/17 16:59:57 - SUT 1 - INFO - Connecting to root@localhost.
> 2023/02/17 16:59:58 - SUT 1 - INFO - Connection to root@localhost successful.
> 2023/02/17 16:59:58 - SUT 1 - INFO - Getting CPU information.
> 2023/02/17 16:59:58 - SUT 1 - INFO - Sending: 'lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \#'
> 2023/02/17 16:59:59 - dts_runner - ERROR - Connection to node NodeConfiguration(name='SUT 1', hostname='localhost', user='root', password=None, arch=<Architecture.x86_64: 'x86_64'>, os=<OS.linux: 'linux'>, lcores='3,4', use_first_core=False, memory_channels=8, hugepages=HugepageConfiguration(amount=256, force_first_numa=False)) failed.
> Traceback (most recent call last):
> File "/home/bruce/dpdk.org/dts/framework/dts.py", line 41, in run_all
> sut_node = SutNode(execution.system_under_test)
> File "/home/bruce/dpdk.org/dts/framework/testbed_model/sut_node.py", line 39, in __init__
> super(SutNode, self).__init__(node_config)
> File "/home/bruce/dpdk.org/dts/framework/testbed_model/node.py", line 47, in __init__
> self._get_remote_cpus()
> File "/home/bruce/dpdk.org/dts/framework/testbed_model/node.py", line 155, in _get_remote_cpus
> self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
> File "/home/bruce/dpdk.org/dts/framework/remote_session/linux_session.py", line 18, in get_remote_cpus
> cpu_info = self.remote_session.send_command(
> File "/home/bruce/dpdk.org/dts/framework/remote_session/remote/remote_session.py", line 103, in send_command
> result = self._send_command(command, timeout, env)
> File "/home/bruce/dpdk.org/dts/framework/remote_session/remote/ssh_session.py", line 172, in _send_command
> return_code = int(self._send_command_get_output("echo $?", timeout, None))
> ValueError: invalid literal for int() with base 10: '\x1b[?2004l\r\r\n0'
> 2023/02/17 16:59:59 - dts_runner - DEBUG - Summary of errors:
> 2023/02/17 16:59:59 - dts_runner - DEBUG - ValueError("invalid literal for int() with base 10: '\\x1b[?2004l\\r\\r\\n0'")
> 2023/02/17 16:59:59 - dts_runner - INFO - DTS execution has ended.
>
^ permalink raw reply [flat|nested] 97+ messages in thread
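For reference, the invalid literal in the traceback is the terminal's
bracketed paste toggle (\x1b[?2004l) prepended to the real output of
"echo $?". A minimal sketch of one way to harden the parsing by stripping
ANSI escape sequences first; this is illustrative only, since the actual
plan stated in the thread is to move to Fabric:

    import re

    ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;?]*[a-zA-Z]")

    def parse_return_code(raw_output: str) -> int:
        # Strip ANSI escape sequences (such as the bracketed paste
        # toggle \x1b[?2004l) and surrounding whitespace before parsing
        # the output of "echo $?".
        cleaned = ANSI_ESCAPE.sub("", raw_output).strip()
        return int(cleaned)

    assert parse_return_code("\x1b[?2004l\r\r\n0") == 0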
* Re: [PATCH v4 00/10] dts: add hello world testcase
2023-02-20 10:13 ` Juraj Linkeš
@ 2023-02-20 11:56 ` Bruce Richardson
2023-02-22 16:39 ` Bruce Richardson
1 sibling, 0 replies; 97+ messages in thread
From: Bruce Richardson @ 2023-02-20 11:56 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Mon, Feb 20, 2023 at 11:13:45AM +0100, Juraj Linkeš wrote:
> Thanks for the comments, Bruce.
>
> On Fri, Feb 17, 2023 at 6:26 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Mon, Feb 13, 2023 at 04:28:36PM +0100, Juraj Linkeš wrote:
> > > Add code needed to run the HelloWorld testcase which just runs the hello
> > > world dpdk application.
> > >
> > > The patchset currently heavily refactors this original DTS code needed
> > > to run the testcase:
> > > * The whole architecture has been redone into more sensible class
> > > hierarchy
> > > * DPDK build on the System under Test
> > > * DPDK eal args construction, app running and shutting down
> > > * Optional SUT hugepage memory configuration
> > > * Test runner
> > > * Test results
> > > * TestSuite class
> > > * Test runner parts interfacing with TestSuite
> > > * The HelloWorld testsuite itself
> > >
> > > The code is divided into sub-packages, some of which are divided
> > > further.
> > >
> > > There patch may need to be divided into smaller chunks. If so, proposals
> > > on where exactly to split it would be very helpful.
> > >
> > > v3:
> > > Finished refactoring everything in this patch, with test suite and test
> > > results being the last parts.
> > > Also changed the directory structure. It's now simplified and the
> > > imports look much better.
> > > I've also many many minor changes such as renaming variables here and
> > > there.
> > >
> > > v4:
> > > Made hugepage config optional, users may now specify that in the main
> > > config file.
> > > Removed HelloWorld test plan and incorporated parts of it into the test
> > > suite python file.
> > > Updated documentation.
> > >
> > Hi,
> >
> > just trying this out by reading the docs and trying to follow along. Couple
> > of high-level comments thus far without getting into the patches:
> >
> > * In the "configuring DTS" section, I think it would be good to:
> > - say that the config file should be named conf.yaml by default. It's in
> > the next section, but I think it should be called out earlier.
> > - say that there is a template conf.yaml file in the dts directory
> > already. On my first reading I actually thought that the sample config
> > file was dts/framework/config/conf_yaml_schema.json (and I was going
> > to comment on the name being weird! :-)). Only when I opened it did I
> > realise my mistake. Therefore, downplan the schema, and put more
> > emphasis on where to find the simple conf example to start with.
>
> Good points, I'll rewrite that a bit.
>
> > - if hugepage config is now optional, as you say above, remove that from
> > the sample and the docs.
> >
>
> The optional part is just that users may choose between DTS either
> configuring the hugepages or not, but hugepages still must be
> configured (if not by DTS, then beforehand). I'll document this a bit
> more, but I'd like to leave it in the sample config with a note saying
> it's optional.
>
> > * The code thus far seems to imply that you are always going to use root.
> > When I configured it to log on to bruce@localhost, it timed out waiting
> > for a prompt, I believe because it was looking for "#" which is the
> > default only for root prompts.
> >
>
> True, I'll add this to docs. This will not be a requirement in the
> future though - we want to do passwordless sudo.
>
> > * When running as root, things progressed further but I hit an error when
> > DTS was trying to get the CPU config. No idea what is happening here,
> > because running the same commands manually over ssh seemed to work fine.
> > Below is the error. Any hints as to what is the problem appreciated.
> >
>
> I remember running into the same issue as well. I think it's related
> to the bracketed paste feature of some terminal emulators:
> https://askubuntu.com/questions/662222/why-bracketed-paste-mode-is-enabled-sporadically-in-my-terminal-screen
> Please try disabling it and see whether that helps.
> I haven't gone to great lengths to harden this part of SSH
> implementation as we'll be moving to Fabric (from pexpect) after this
> patch (which uses a mature Python SSH implementation instead of
> expect).
>
Ok, thanks for the explanation, I'll try that out.
/Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v4 01/10] dts: add node and os abstractions
2023-02-17 17:44 ` Bruce Richardson
@ 2023-02-20 13:24 ` Juraj Linkeš
0 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-20 13:24 UTC (permalink / raw)
To: Bruce Richardson
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Fri, Feb 17, 2023 at 6:44 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Feb 13, 2023 at 04:28:37PM +0100, Juraj Linkeš wrote:
> > The abstraction model in DTS is as follows:
> > Node, defining and implementing methods common to and the base of SUT
> > (system under test) Node and TG (traffic generator) Node.
> > Remote Session, defining and implementing methods common to any remote
> > session implementation, such as SSH Session.
> > OSSession, defining and implementing methods common to any operating
> > system/distribution, such as Linux.
> >
> > OSSession uses a derived Remote Session and Node in turn uses a derived
> > OSSession. This split delegates OS-specific and connection-specific code
> > to specialized classes designed to handle the differences.
> >
> > The base classes implement the methods or parts of methods that are
> > common to all implementations and defines abstract methods that must be
> > implemented by derived classes.
> >
> > Part of the abstractions is the DTS test execution skeleton:
> > execution setup, build setup and then test execution.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> > dts/conf.yaml | 11 +-
> > dts/framework/config/__init__.py | 73 +++++++-
> > dts/framework/config/conf_yaml_schema.json | 76 +++++++-
> > dts/framework/dts.py | 162 ++++++++++++++----
> > dts/framework/exception.py | 46 ++++-
> > dts/framework/logger.py | 24 +--
> > dts/framework/remote_session/__init__.py | 30 +++-
> > dts/framework/remote_session/linux_session.py | 11 ++
> > dts/framework/remote_session/os_session.py | 46 +++++
> > dts/framework/remote_session/posix_session.py | 12 ++
> > .../remote_session/remote/__init__.py | 16 ++
> > .../{ => remote}/remote_session.py | 41 +++--
> > .../{ => remote}/ssh_session.py | 20 +--
> > dts/framework/settings.py | 15 +-
> > dts/framework/testbed_model/__init__.py | 10 +-
> > dts/framework/testbed_model/node.py | 104 ++++++++---
> > dts/framework/testbed_model/sut_node.py | 13 ++
> > 17 files changed, 591 insertions(+), 119 deletions(-)
> > create mode 100644 dts/framework/remote_session/linux_session.py
> > create mode 100644 dts/framework/remote_session/os_session.py
> > create mode 100644 dts/framework/remote_session/posix_session.py
> > create mode 100644 dts/framework/remote_session/remote/__init__.py
> > rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
> > rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
> > create mode 100644 dts/framework/testbed_model/sut_node.py
> >
> <snip>
>
> > +
> > +def _exit_dts() -> None:
> > + """
> > + Process all errors and exit with the proper exit code.
> > + """
> > + if errors and dts_logger:
> > + dts_logger.debug("Summary of errors:")
> > + for error in errors:
> > + dts_logger.debug(repr(error))
>
> This is nice to have at the end. However, what I think is a definite
> niceer-to-have here, is the actual commands which produced the errors. In my
> case, the summary just reports:
>
> 2023/02/17 16:59:59 - dts_runner - DEBUG - ValueError("invalid literal for int() with base 10: '\\x1b[?2004l\\r\\r\\n0'")
>
> What is really missing and I had to hunt for, was what command produced the
> invalid literal. Perhaps that's the responsibility of the error message to
> include the details, but either way it would be great to get!
>
Yes, it should be in the error message. It's not there because this is
a corner case in code that will be replaced shortly.
> /Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v4 00/10] dts: add hello world testcase
2023-02-20 10:13 ` Juraj Linkeš
2023-02-20 11:56 ` Bruce Richardson
@ 2023-02-22 16:39 ` Bruce Richardson
2023-02-23 8:27 ` Juraj Linkeš
1 sibling, 1 reply; 97+ messages in thread
From: Bruce Richardson @ 2023-02-22 16:39 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Mon, Feb 20, 2023 at 11:13:45AM +0100, Juraj Linkeš wrote:
> Thanks for the comments, Bruce.
>
> On Fri, Feb 17, 2023 at 6:26 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Mon, Feb 13, 2023 at 04:28:36PM +0100, Juraj Linkeš wrote:
> > > Add code needed to run the HelloWorld testcase which just runs the hello
> > > world dpdk application.
> > >
> > > The patchset currently heavily refactors this original DTS code needed
> > > to run the testcase:
> > > * The whole architecture has been redone into more sensible class
> > > hierarchy
> > > * DPDK build on the System under Test
> > > * DPDK eal args construction, app running and shutting down
> > > * Optional SUT hugepage memory configuration
> > > * Test runner
> > > * Test results
> > > * TestSuite class
> > > * Test runner parts interfacing with TestSuite
> > > * The HelloWorld testsuite itself
> > >
<snip>
>
> > * When running as root, things progressed further but I hit an error when
> > DTS was trying to get the CPU config. No idea what is happening here,
> > because running the same commands manually over ssh seemed to work fine.
> > Below is the error. Any hints as to what is the problem appreciated.
> >
>
> I remember running into the same issue as well. I think it's related
> to the bracketed paste feature of some terminal emulators:
> https://askubuntu.com/questions/662222/why-bracketed-paste-mode-is-enabled-sporadically-in-my-terminal-screen
> Please try disabling it and see whether that helps.
> I haven't gone to great lengths to harden this part of SSH
> implementation as we'll be moving to Fabric (from pexpect) after this
> patch (which uses a mature Python SSH implementation instead of
> expect).
>
Adding things to my environment, e.g. .bashrc, didn't seem to work for me,
but the following change fixed this particular error. Might be worth
including in the code to avoid others hitting the same issue?
index d0863d8791..936d5f4642 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -68,6 +68,7 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
+ self.send_expect("bind 'set enable-bracketed-paste off'", "#")
except Exception as e:
self._logger.error(RED(str(e)))
if getattr(self, "port", None):
Unfortunately, things still aren't running correctly for me. The code gets
copied over and builds, and then the first hello-world test case runs ok.
However, things don't work after that - something seems wrong with the
lcore detection or filtering logic on my system.
File "/home/bruce/dpdk.org/dts/framework/testbed_model/hw/cpu.py", line 206, in _filter_cores
raise ValueError(
ValueError: The amount of logical cores per core to use (1) exceeds the actual amount present. Is hyperthreading enabled?
To the suggestion on hyperthreading, I then checked, and yes, I have HT
enabled on the system. Any suggestions what is wrong?
BTW: suggest the following changes to the error message:
* s/amount/number/ - as cores are countable.
* "Is hyperthreading enabled?" -> "This test requires SMT/hyperthreading be
enabled". By asking if it's enabled, you don't make it clear whether it
should be enabled or not. Since I had it enabled, the question implied to
me that it should be disabled. It's only on reading the code I see the comment
that it is meant to be enabled.
/Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
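For context, the check that raises this ValueError compares how many
logical cores per physical core a filter requests against how many the SUT
actually reports. A minimal standalone illustration of such a check (not
the actual _filter_cores code):

    from collections import Counter

    def check_lcores_per_core(core_column: list[int], lcores_per_core_wanted: int) -> None:
        # core_column holds the CORE field from lscpu, one entry per
        # logical CPU; with SMT enabled each physical core id appears
        # more than once.
        available = max(Counter(core_column).values())
        if lcores_per_core_wanted > available:
            raise ValueError(
                f"The number of logical cores per core to use "
                f"({lcores_per_core_wanted}) exceeds the number present "
                f"({available}); SMT/hyperthreading must be enabled."
            )

    # With SMT on, each physical core id appears twice, so asking for
    # two lcores per core passes.
    check_lcores_per_core([0, 0, 1, 1, 2, 2, 3, 3], 2)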
* Re: [PATCH v4 03/10] dts: add dpdk build on sut
2023-02-13 15:28 ` [PATCH v4 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2023-02-22 16:44 ` Bruce Richardson
0 siblings, 0 replies; 97+ messages in thread
From: Bruce Richardson @ 2023-02-22 16:44 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Mon, Feb 13, 2023 at 04:28:39PM +0100, Juraj Linkeš wrote:
> Add the ability to build DPDK and apps on the SUT, using a configured
> target.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/framework/config/__init__.py | 2 +
> dts/framework/exception.py | 17 ++
> dts/framework/remote_session/os_session.py | 90 +++++++++-
> dts/framework/remote_session/posix_session.py | 126 ++++++++++++++
> .../remote_session/remote/remote_session.py | 38 ++++-
> .../remote_session/remote/ssh_session.py | 68 +++++++-
> dts/framework/settings.py | 44 ++++-
> dts/framework/testbed_model/__init__.py | 1 +
> dts/framework/testbed_model/dpdk.py | 33 ++++
> dts/framework/testbed_model/sut_node.py | 158 ++++++++++++++++++
> dts/framework/utils.py | 19 ++-
> 11 files changed, 580 insertions(+), 16 deletions(-)
> create mode 100644 dts/framework/testbed_model/dpdk.py
>
<snip>
> +
> + def build_dpdk(
> + self,
> + env_vars: EnvVarsDict,
> + meson_args: MesonArgs,
> + remote_dpdk_dir: str | PurePath,
> + remote_dpdk_build_dir: str | PurePath,
> + rebuild: bool = False,
> + timeout: float = SETTINGS.compile_timeout,
> + ) -> None:
> + try:
> + if rebuild:
> + # reconfigure, then build
> + self._logger.info("Reconfiguring DPDK build.")
> + self.remote_session.send_command(
> + f"meson configure {meson_args} {remote_dpdk_build_dir}",
> + timeout,
> + verify=True,
> + env=env_vars,
> + )
> + else:
> + # fresh build - remove target dir first, then build from scratch
> + self._logger.info("Configuring DPDK build from scratch.")
> + self.remove_remote_dir(remote_dpdk_build_dir)
> + self.remote_session.send_command(
> + f"meson {meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
Minor nit - this should be "meson setup" rather than just "meson". Latest
versions of meson complain about omitting the "setup" operation.
/Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
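The suggested fix amounts to inserting the explicit subcommand into the
fresh-configure branch shown in the diff. A minimal sketch of the
corrected command construction; the variable names mirror the diff above,
and the helper function itself is hypothetical:

    def configure_command(meson_args: str, remote_dpdk_dir: str,
                          remote_dpdk_build_dir: str, rebuild: bool) -> str:
        if rebuild:
            # Reconfigure an existing build directory.
            return f"meson configure {meson_args} {remote_dpdk_build_dir}"
        # Fresh configure: recent meson releases expect the explicit
        # "setup" subcommand here.
        return f"meson setup {meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}"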
* Re: [PATCH v4 00/10] dts: add hello world testcase
2023-02-22 16:39 ` Bruce Richardson
@ 2023-02-23 8:27 ` Juraj Linkeš
2023-02-23 9:17 ` Bruce Richardson
0 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 8:27 UTC (permalink / raw)
To: Bruce Richardson
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Wed, Feb 22, 2023 at 5:43 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Feb 20, 2023 at 11:13:45AM +0100, Juraj Linkeš wrote:
> > Thanks for the comments, Bruce.
> >
> > On Fri, Feb 17, 2023 at 6:26 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Mon, Feb 13, 2023 at 04:28:36PM +0100, Juraj Linkeš wrote:
> > > > Add code needed to run the HelloWorld testcase which just runs the hello
> > > > world dpdk application.
> > > >
> > > > The patchset currently heavily refactors this original DTS code needed
> > > > to run the testcase:
> > > > * The whole architecture has been redone into more sensible class
> > > > hierarchy
> > > > * DPDK build on the System under Test
> > > > * DPDK eal args construction, app running and shutting down
> > > > * Optional SUT hugepage memory configuration
> > > > * Test runner
> > > > * Test results
> > > > * TestSuite class
> > > > * Test runner parts interfacing with TestSuite
> > > > * The HelloWorld testsuite itself
> > > >
> <snip>
> >
> > > * When running as root, things progressed further but I hit an error when
> > > DTS was trying to get the CPU config. No idea what is happening here,
> > > because running the same commands manually over ssh seemed to work fine.
> > > Below is the error. Any hints as to what is the problem appreciated.
> > >
> >
> > I remember running into the same issue as well. I think it's related
> > to the bracketed paste feature of some terminal emulators:
> > https://askubuntu.com/questions/662222/why-bracketed-paste-mode-is-enabled-sporadically-in-my-terminal-screen
> > Please try disabling it and see whether that helps.
> > I haven't gone to great lengths to harden this part of SSH
> > implementation as we'll be moving to Fabric (from pexpect) after this
> > patch (which uses a mature Python SSH implementation instead of
> > expect).
> >
>
> Adding things to my environment, e.g. bashrc didn't seem to work for me,
> but the following change fixed this particular error. Might be worth
> including in the code to avoid others hitting an issue?
I didn't really want to modify the code that's about to be replaced,
but this is a small and benign change, so I don't mind.
>
> index d0863d8791..936d5f4642 100644
> --- a/dts/framework/remote_session/remote/ssh_session.py
> +++ b/dts/framework/remote_session/remote/ssh_session.py
> @@ -68,6 +68,7 @@ def _connect(self) -> None:
>
> self.send_expect("stty -echo", "#")
> self.send_expect("stty columns 1000", "#")
> + self.send_expect("bind 'set enable-bracketed-paste off'", "#")
> except Exception as e:
> self._logger.error(RED(str(e)))
> if getattr(self, "port", None):
>
> Unfortunately, things still aren't running correctly for me. The code gets
> copied over and builds, and then the first hello-world test case runs ok.
> However, things don't work after that - something seems wrong with the
> lcore detection or filtering logic on my system.
>
> File "/home/bruce/dpdk.org/dts/framework/testbed_model/hw/cpu.py", line 206, in _filter_cores
> raise ValueError(
> ValueError: The amount of logical cores per core to use (1) exceeds the actual amount present. Is hyperthreading enabled?
>
> To the suggestion on hyperthreading, I then checked, and yes, I have HT
> enabled on the system. Any suggestions what is wrong?
Interesting. The first test case runs hello world on all cores
specified in conf.yaml (or all system cores if lcores is empty).
The second one tries to run it on just one core and, interestingly,
that fails. It's definitely related to hyperthreading, which I've
tested a bit (or so I thought), but apparently missed something.
Looking at the code, there's something wrong when checking the number
of lcores per core (with hyperthreading, more than one lcore per
physical core may be present) requested by the filter (in this case,
the test case supplies the filter) against the lcores on the system.
I'll try to fix it and send v5 right away. If the fix doesn't work, we
could look at what "lscpu -p=CPU,CORE,SOCKET,NODE | grep -v #" returns
on your system. It's also captured in dts/output/suite.log. The lcore
config in conf.yaml could also be relevant, but I assume you didn't
change that. We could also check the test case output. It's also in
dts/output/suite.log
>
> BTW: suggest the following changes to the error message:
> * s/amount/number/ - as cores are countable.
Thanks. I've used it inappropriately in a number of places.
> * "Is hyperthreading enabled?" -> "This test requires SMT/hyperthreading be
> enabled". By asking if it's enabled, you don't make it clear whether it
> should be enabled or not. Since I had it enabled, the question implied to
> me that it should be disabled. It's only on reading the code I see the comment
> that it is meant to be enabled.
I see where the confusion is. The question is merely a suggestion
as to where the problem could be, but the logic in code is faulty,
leading to this unclear error message. I'll fix the logic and probably
modify the message so it makes more sense.
>
> /Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
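For reference, the lscpu invocation mentioned above emits one CSV line per
logical CPU. A minimal sketch of parsing that output into per-lcore
records, which is the input the filtering logic works on; the LogicalCore
fields mirror the lscpu columns, everything else is illustrative:

    from dataclasses import dataclass

    @dataclass
    class LogicalCore:
        lcore: int
        core: int
        socket: int
        node: int

    def parse_lscpu(output: str) -> list[LogicalCore]:
        # Parse lines of "lscpu -p=CPU,CORE,SOCKET,NODE | grep -v \#",
        # e.g. "0,0,0,0". Each line describes one logical CPU.
        cores = []
        for line in output.splitlines():
            if not line.strip():
                continue
            fields = line.split(",")
            cores.append(
                LogicalCore(
                    lcore=int(fields[0]),
                    core=int(fields[1]),
                    socket=int(fields[2]),
                    # The NODE column may be empty on some systems.
                    node=int(fields[3]) if len(fields) > 3 and fields[3] else 0,
                )
            )
        return cores

    assert parse_lscpu("0,0,0,0\n1,1,0,0")[1].core == 1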
* Re: [PATCH v4 00/10] dts: add hello world testcase
2023-02-23 8:27 ` Juraj Linkeš
@ 2023-02-23 9:17 ` Bruce Richardson
0 siblings, 0 replies; 97+ messages in thread
From: Bruce Richardson @ 2023-02-23 9:17 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, ohilyard, lijuan.tu,
wathsala.vithanage, probb, dev
On Thu, Feb 23, 2023 at 09:27:05AM +0100, Juraj Linkeš wrote:
> On Wed, Feb 22, 2023 at 5:43 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Mon, Feb 20, 2023 at 11:13:45AM +0100, Juraj Linkeš wrote:
> > > Thanks for the comments, Bruce.
> > >
> > > On Fri, Feb 17, 2023 at 6:26 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > On Mon, Feb 13, 2023 at 04:28:36PM +0100, Juraj Linkeš wrote:
> > > > > Add code needed to run the HelloWorld testcase which just runs the hello
> > > > > world dpdk application.
> > > > >
> > > > > The patchset currently heavily refactors this original DTS code needed
> > > > > to run the testcase:
> > > > > * The whole architecture has been redone into more sensible class
> > > > > hierarchy
> > > > > * DPDK build on the System under Test
> > > > > * DPDK eal args construction, app running and shutting down
> > > > > * Optional SUT hugepage memory configuration
> > > > > * Test runner
> > > > > * Test results
> > > > > * TestSuite class
> > > > > * Test runner parts interfacing with TestSuite
> > > > > * The HelloWorld testsuite itself
> > > > >
> > <snip>
> > >
> > > > * When running as root, things progressed further but I hit an error when
> > > > DTS was trying to get the CPU config. No idea what is happening here,
> > > > because running the same commands manually over ssh seemed to work fine.
> > > > Below is the error. Any hints as to what is the problem appreciated.
> > > >
> > >
> > > I remember running into the same issue as well. I think it's related
> > > to the bracketed paste feature of some terminal emulators:
> > > https://askubuntu.com/questions/662222/why-bracketed-paste-mode-is-enabled-sporadically-in-my-terminal-screen
> > > Please try disabling it and see whether that helps.
> > > I haven't gone to great lengths to harden this part of SSH
> > > implementation as we'll be moving to Fabric (from pexpect) after this
> > > patch (which uses a mature Python SSH implementation instead of
> > > expect).
> > >
> >
> > Adding things to my environment, e.g. bashrc didn't seem to work for me,
> > but the following change fixed this particular error. Might be worth
> > including in the code to avoid others hitting an issue?
>
> I didn't really want to modify the code that's about to be replaced,
> but this is a small and bening change, so I don't mind.
>
> >
> > index d0863d8791..936d5f4642 100644
> > --- a/dts/framework/remote_session/remote/ssh_session.py
> > +++ b/dts/framework/remote_session/remote/ssh_session.py
> > @@ -68,6 +68,7 @@ def _connect(self) -> None:
> >
> > self.send_expect("stty -echo", "#")
> > self.send_expect("stty columns 1000", "#")
> > + self.send_expect("bind 'set enable-bracketed-paste off'", "#")
> > except Exception as e:
> > self._logger.error(RED(str(e)))
> > if getattr(self, "port", None):
> >
> > Unfortunately, things still aren't running correctly for me. The code gets
> > copied over and builds, and then the first hello-world test case runs ok.
> > However, things don't work after that - something seems wrong with the
> > lcore detection or filtering logic on my system.
> >
> > File "/home/bruce/dpdk.org/dts/framework/testbed_model/hw/cpu.py", line 206, in _filter_cores
> > raise ValueError(
> > ValueError: The amount of logical cores per core to use (1) exceeds the actual amount present. Is hyperthreading enabled?
> >
> > To the suggestion on hyperthreading, I then checked, and yes, I have HT
> > enabled on the system. Any suggestions what is wrong?
>
> Interesting. The first test case runs hello world on all cores
> specified in conf.yaml (or all system cores if lcores is empty).
> The second one tries to run it on just one core and, interestingly,
> that fails. It's definitely related to hyperthreading, which I've
> tested a bit (or I thought so), but apparently missed something.
>
> Looking at the code, there's something wrong when checking the number
> of lcores per core (with hyperthreading, more than 1 core per core
> could be present) requested by filter (in this case, the test case
> supplies the filter) and the lcores on the system.
>
> I'll try to fix it and send v5 right away. If the fix doesn't work, we
> could look at what "lscpu -p=CPU,CORE,SOCKET,NODE | grep -v #" returns
> on your system. It's also captured in dts/output/suite.log. The lcore
> config in conf.yaml could also be relevant, but I assume you didn't
> change that. We could also check the test case output. It's also in
> dts/output/suite.log
>
> >
> > BTW: suggest the following changes to the error message:
> > * s/amount/number/ - as cores are countable.
>
> Thanks. I've used it inappropriately in a number of places.
>
> > * "Is hyperthreading enabled?" -> "This test requires SMT/hyperthreading be
> > enabled". By asking if it's enabled, you don't make it clear whether it
> > should be enabled or not. Since I had it enabled, the question implied to
> > me that it should be disabled. It's only on reading the code I see the comment
> > that it is meant to be enabled.
>
> I see where the confusion is. The question is just a mere suggestion
> as to where the problem could be, but the logic in code is faulty,
> leading to this unclear error message. I'll fix the logic and probably
> modify the message so it makes more sense.
>
Thanks, if you do a new version I'm happy enough to retest today.
/Bruce
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 00/10] dts: add hello world testcase
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
` (10 preceding siblings ...)
2023-02-17 17:26 ` [PATCH v4 00/10] dts: add hello world testcase Bruce Richardson
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 01/10] dts: add node and os abstractions Juraj Linkeš
` (12 more replies)
11 siblings, 13 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Add code needed to run the HelloWorld testcase which just runs the hello
world dpdk application.
The patchset currently heavily refactors this original DTS code needed
to run the testcase:
* The whole architecture has been redone into more sensible class
hierarchy
* DPDK build on the System under Test
* DPDK eal args construction, app running and shutting down
* Optional SUT hugepage memory configuration
The optional part is whether DTS configures them or not. They still
must be configured even if the user doesn't want DTS to do that.
* Test runner
* Test results
* TestSuite class
* Test runner parts interfacing with TestSuite
* The HelloWorld testsuite itself
The code is divided into sub-packages, some of which are divided
further.
The patch may need to be divided into smaller chunks. If so, proposals
on where exactly to split it would be very helpful.
v4:
Made hugepage config optional, users may now specify that in the main
config file.
Removed HelloWorld test plan and incorporated parts of it into the test
suite python file.
Updated documentation.
v5:
Documentation updates about running as root and hugepage configuration.
Fixed multiple problems with cpu filtering.
Other minor issues, such as typos and renaming variables.
Juraj Linkeš (10):
dts: add node and os abstractions
dts: add ssh command verification
dts: add dpdk build on sut
dts: add dpdk execution handling
dts: add node memory setup
dts: add test suite module
dts: add hello world testsuite
dts: add test suite config and runner
dts: add test results module
doc: update DTS setup and test suite cookbook
doc/guides/tools/dts.rst | 165 ++++++++-
dts/conf.yaml | 22 +-
dts/framework/config/__init__.py | 130 ++++++-
dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
dts/framework/dts.py | 185 ++++++++--
dts/framework/exception.py | 100 +++++-
dts/framework/logger.py | 24 +-
dts/framework/remote_session/__init__.py | 30 +-
dts/framework/remote_session/linux_session.py | 107 ++++++
dts/framework/remote_session/os_session.py | 175 ++++++++++
dts/framework/remote_session/posix_session.py | 222 ++++++++++++
.../remote_session/remote/__init__.py | 16 +
.../remote_session/remote/remote_session.py | 155 +++++++++
.../{ => remote}/ssh_session.py | 92 ++++-
.../remote_session/remote_session.py | 95 ------
dts/framework/settings.py | 81 ++++-
dts/framework/test_result.py | 316 ++++++++++++++++++
dts/framework/test_suite.py | 254 ++++++++++++++
dts/framework/testbed_model/__init__.py | 20 +-
dts/framework/testbed_model/dpdk.py | 78 +++++
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 274 +++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 +
dts/framework/testbed_model/node.py | 159 +++++++--
dts/framework/testbed_model/sut_node.py | 260 ++++++++++++++
dts/framework/utils.py | 39 ++-
dts/tests/TestSuite_hello_world.py | 64 ++++
27 files changed, 3068 insertions(+), 210 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
create mode 100644 dts/framework/remote_session/remote/remote_session.py
rename dts/framework/remote_session/{ => remote}/ssh_session.py (64%)
delete mode 100644 dts/framework/remote_session/remote_session.py
create mode 100644 dts/framework/test_result.py
create mode 100644 dts/framework/test_suite.py
create mode 100644 dts/framework/testbed_model/dpdk.py
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
create mode 100644 dts/framework/testbed_model/sut_node.py
create mode 100644 dts/tests/TestSuite_hello_world.py
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 01/10] dts: add node and os abstractions
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 02/10] dts: add ssh command verification Juraj Linkeš
` (11 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The abstraction model in DTS is as follows:
Node, defining and implementing methods common to and the base of SUT
(system under test) Node and TG (traffic generator) Node.
Remote Session, defining and implementing methods common to any remote
session implementation, such as SSH Session.
OSSession, defining and implementing methods common to any operating
system/distribution, such as Linux.
OSSession uses a derived Remote Session and Node in turn uses a derived
OSSession. This split delegates OS-specific and connection-specific code
to specialized classes designed to handle the differences.
The base classes implement the methods or parts of methods that are
common to all implementations and defines abstract methods that must be
implemented by derived classes.
Part of the abstractions is the DTS test execution skeleton:
execution setup, build setup and then test execution.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 11 +-
dts/framework/config/__init__.py | 73 +++++++-
dts/framework/config/conf_yaml_schema.json | 76 +++++++-
dts/framework/dts.py | 162 ++++++++++++++----
dts/framework/exception.py | 46 ++++-
dts/framework/logger.py | 24 +--
dts/framework/remote_session/__init__.py | 30 +++-
dts/framework/remote_session/linux_session.py | 11 ++
dts/framework/remote_session/os_session.py | 46 +++++
dts/framework/remote_session/posix_session.py | 12 ++
.../remote_session/remote/__init__.py | 16 ++
.../{ => remote}/remote_session.py | 41 +++--
.../{ => remote}/ssh_session.py | 20 +--
dts/framework/settings.py | 15 +-
dts/framework/testbed_model/__init__.py | 10 +-
dts/framework/testbed_model/node.py | 104 ++++++++---
dts/framework/testbed_model/sut_node.py | 13 ++
17 files changed, 591 insertions(+), 119 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
create mode 100644 dts/framework/testbed_model/sut_node.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1aaa593612..03696d2bab 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -1,9 +1,16 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2022 The DPDK contributors
+# Copyright 2022-2023 The DPDK contributors
executions:
- - system_under_test: "SUT 1"
+ - build_targets:
+ - arch: x86_64
+ os: linux
+ cpu: native
+ compiler: gcc
+ compiler_wrapper: ccache
+ system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ os: linux
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 214be8e7f4..e3e2d74eac 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -1,15 +1,17 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-Generic port and topology nodes configuration file load function
+Yaml config parsing methods
"""
import json
import os.path
import pathlib
from dataclasses import dataclass
+from enum import Enum, auto, unique
from typing import Any
import warlock # type: ignore
@@ -18,6 +20,47 @@
from framework.settings import SETTINGS
+class StrEnum(Enum):
+ @staticmethod
+ def _generate_next_value_(
+ name: str, start: int, count: int, last_values: object
+ ) -> str:
+ return name
+
+
+@unique
+class Architecture(StrEnum):
+ i686 = auto()
+ x86_64 = auto()
+ x86_32 = auto()
+ arm64 = auto()
+ ppc64le = auto()
+
+
+@unique
+class OS(StrEnum):
+ linux = auto()
+ freebsd = auto()
+ windows = auto()
+
+
+@unique
+class CPUType(StrEnum):
+ native = auto()
+ armv8a = auto()
+ dpaa2 = auto()
+ thunderx = auto()
+ xgene1 = auto()
+
+
+@unique
+class Compiler(StrEnum):
+ gcc = auto()
+ clang = auto()
+ icc = auto()
+ msvc = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -29,6 +72,7 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ os: OS
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -37,19 +81,44 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ os=OS(d["os"]),
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class BuildTargetConfiguration:
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+ name: str
+
+ @staticmethod
+ def from_dict(d: dict) -> "BuildTargetConfiguration":
+ return BuildTargetConfiguration(
+ arch=Architecture(d["arch"]),
+ os=OS(d["os"]),
+ cpu=CPUType(d["cpu"]),
+ compiler=Compiler(d["compiler"]),
+ name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
+ build_targets: list[BuildTargetConfiguration]
system_under_test: NodeConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ build_targets: list[BuildTargetConfiguration] = list(
+ map(BuildTargetConfiguration.from_dict, d["build_targets"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
+ build_targets=build_targets,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 6b8d6ccd05..9170307fbe 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -5,6 +5,68 @@
"node_name": {
"type": "string",
"description": "A unique identifier for a node"
+ },
+ "OS": {
+ "type": "string",
+ "enum": [
+ "linux"
+ ]
+ },
+ "cpu": {
+ "type": "string",
+ "description": "Native should be the default on x86",
+ "enum": [
+ "native",
+ "armv8a",
+ "dpaa2",
+ "thunderx",
+ "xgene1"
+ ]
+ },
+ "compiler": {
+ "type": "string",
+ "enum": [
+ "gcc",
+ "clang",
+ "icc",
+        "msvc"
+ ]
+ },
+ "build_target": {
+ "type": "object",
+ "description": "Targets supported by DTS",
+ "properties": {
+ "arch": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "x86_64",
+ "arm64",
+ "ppc64le",
+ "other"
+ ]
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "cpu": {
+ "$ref": "#/definitions/cpu"
+ },
+ "compiler": {
+ "$ref": "#/definitions/compiler"
+ },
+ "compiler_wrapper": {
+ "type": "string",
+          "description": "This will be prepended to the compiler in the CC variable when building DPDK. Optional."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "arch",
+ "os",
+ "cpu",
+ "compiler"
+ ]
}
},
"type": "object",
@@ -29,13 +91,17 @@
"password": {
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
}
},
"additionalProperties": false,
"required": [
"name",
"hostname",
- "user"
+ "user",
+ "os"
]
},
"minimum": 1
@@ -45,12 +111,20 @@
"items": {
"type": "object",
"properties": {
+ "build_targets": {
+ "type": "array",
+ "items": {
+ "$ref": "#/definitions/build_target"
+ },
+          "minItems": 1
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
"required": [
+ "build_targets",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d23cfc4526..6ea7c6e736 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -1,67 +1,157 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2019 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
-import traceback
-from collections.abc import Iterable
-from framework.testbed_model.node import Node
-
-from .config import CONFIGURATION
+from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
+from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .testbed_model import SutNode
from .utils import check_dts_python_version
-dts_logger: DTSLOG | None = None
+dts_logger: DTSLOG = getLogger("dts_runner")
+errors = []
def run_all() -> None:
"""
- Main process of DTS, it will run all test suites in the config file.
+ The main process of DTS. Runs all build targets in all executions from the main
+ config file.
"""
-
global dts_logger
+ global errors
# check the python version of the server that run dts
check_dts_python_version()
- dts_logger = getLogger("dts")
-
- nodes = {}
- # This try/finally block means "Run the try block, if there is an exception,
- # run the finally block before passing it upward. If there is not an exception,
- # run the finally block after the try block is finished." This helps avoid the
- # problem of python's interpreter exit context, which essentially prevents you
- # from making certain system calls. This makes cleaning up resources difficult,
- # since most of the resources in DTS are network-based, which is restricted.
+ nodes: dict[str, SutNode] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_config = execution.system_under_test
- if sut_config.name not in nodes:
- node = Node(sut_config)
- nodes[sut_config.name] = node
- node.send_command("echo Hello World")
+ sut_node = None
+ if execution.system_under_test.name in nodes:
+ # a Node with the same name already exists
+ sut_node = nodes[execution.system_under_test.name]
+ else:
+ # the SUT has not been initialized yet
+ try:
+ sut_node = SutNode(execution.system_under_test)
+ except Exception as e:
+ dts_logger.exception(
+ f"Connection to node {execution.system_under_test} failed."
+ )
+ errors.append(e)
+ else:
+ nodes[sut_node.name] = sut_node
+
+ if sut_node:
+ _run_execution(sut_node, execution)
+
+ except Exception as e:
+ dts_logger.exception("An unexpected error has occurred.")
+ errors.append(e)
+ raise
+
+ finally:
+ try:
+ for node in nodes.values():
+ node.close()
+ except Exception as e:
+ dts_logger.exception("Final cleanup of nodes failed.")
+ errors.append(e)
+ # we need to put the sys.exit call outside the finally clause to make sure
+ # that unexpected exceptions will propagate
+ # in that case, the error that should be reported is the uncaught exception as
+ # that is a severe error originating from the framework
+ # at that point, we'll only have partial results which could be impacted by the
+ # error causing the uncaught exception, making them uninterpretable
+ _exit_dts()
+
+
+def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+ """
+ Run the given execution. This involves running the execution setup as well as
+ running all build targets in the given execution.
+ """
+ dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+
+ try:
+ sut_node.set_up_execution(execution)
except Exception as e:
- # sys.exit() doesn't produce a stack trace, need to print it explicitly
- traceback.print_exc()
- raise e
+ dts_logger.exception("Execution setup failed.")
+ errors.append(e)
+
+ else:
+ for build_target in execution.build_targets:
+ _run_build_target(sut_node, build_target, execution)
finally:
- quit_execution(nodes.values())
+ try:
+ sut_node.tear_down_execution()
+ except Exception as e:
+ dts_logger.exception("Execution teardown failed.")
+ errors.append(e)
-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def _run_build_target(
+ sut_node: SutNode,
+ build_target: BuildTargetConfiguration,
+ execution: ExecutionConfiguration,
+) -> None:
"""
- Close session to SUT and TG before quit.
- Return exit status when failure occurred.
+ Run the given build target.
"""
- for sut_node in sut_nodes:
- # close all session
- sut_node.node_exit()
+ dts_logger.info(f"Running build target '{build_target.name}'.")
+
+ try:
+ sut_node.set_up_build_target(build_target)
+ except Exception as e:
+ dts_logger.exception("Build target setup failed.")
+ errors.append(e)
+
+ else:
+ _run_suites(sut_node, execution)
+
+ finally:
+ try:
+ sut_node.tear_down_build_target()
+ except Exception as e:
+ dts_logger.exception("Build target teardown failed.")
+ errors.append(e)
+
+
+def _run_suites(
+ sut_node: SutNode,
+ execution: ExecutionConfiguration,
+) -> None:
+ """
+    Use the current build target to run the execution's test suites
+ with possibly only a subset of test cases.
+ If no subset is specified, run all test cases.
+ """
+
+
+def _exit_dts() -> None:
+ """
+ Process all errors and exit with the proper exit code.
+ """
+ if errors and dts_logger:
+ dts_logger.debug("Summary of errors:")
+ for error in errors:
+ dts_logger.debug(repr(error))
+
+ return_code = ErrorSeverity.NO_ERR
+ for error in errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > return_code:
+ return_code = error_return_code
- if dts_logger is not None:
+ if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(0)
+ sys.exit(return_code)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8b2f08a8f0..121a0f7296 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -1,20 +1,46 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
User-defined exceptions used across the framework.
"""
+from enum import IntEnum, unique
+from typing import ClassVar
-class SSHTimeoutError(Exception):
+
+@unique
+class ErrorSeverity(IntEnum):
+ """
+ The severity of errors that occur during DTS execution.
+ All exceptions are caught and the most severe error is used as return code.
+    All exceptions are caught and the most severe error is used as the return code.
+
+ NO_ERR = 0
+ GENERIC_ERR = 1
+ CONFIG_ERR = 2
+ SSH_ERR = 3
+
+
+class DTSError(Exception):
+ """
+ The base exception from which all DTS exceptions are derived.
+ Stores error severity.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
+
+
+class SSHTimeoutError(DTSError):
"""
Command execution timeout.
"""
command: str
output: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, command: str, output: str):
self.command = command
@@ -27,12 +53,13 @@ def get_output(self) -> str:
return self.output
-class SSHConnectionError(Exception):
+class SSHConnectionError(DTSError):
"""
SSH connection error.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
@@ -41,16 +68,25 @@ def __str__(self) -> str:
return f"Error trying to connect with {self.host}"
-class SSHSessionDeadError(Exception):
+class SSHSessionDeadError(DTSError):
"""
SSH session is not alive.
It can no longer be used.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
def __str__(self) -> str:
return f"SSH session with {self.host} has died"
+
+
+class ConfigurationError(DTSError):
+ """
+ Raised when an invalid configuration is encountered.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index a31fcc8242..bb2991e994 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
DTS logger module with several log level. DTS framework and TestSuite logs
@@ -33,17 +33,17 @@ class DTSLOG(logging.LoggerAdapter):
DTS log class for framework and testsuite.
"""
- logger: logging.Logger
+ _logger: logging.Logger
node: str
sh: logging.StreamHandler
fh: logging.FileHandler
verbose_fh: logging.FileHandler
def __init__(self, logger: logging.Logger, node: str = "suite"):
- self.logger = logger
+ self._logger = logger
# 1 means log everything, this will be used by file handlers if their level
# is not set
- self.logger.setLevel(1)
+ self._logger.setLevel(1)
self.node = node
@@ -55,9 +55,13 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
if SETTINGS.verbose is True:
sh.setLevel(logging.DEBUG)
- self.logger.addHandler(sh)
+ self._logger.addHandler(sh)
self.sh = sh
+ # prepare the output folder
+ if not os.path.exists(SETTINGS.output_dir):
+ os.mkdir(SETTINGS.output_dir)
+
logging_path_prefix = os.path.join(SETTINGS.output_dir, node)
fh = logging.FileHandler(f"{logging_path_prefix}.log")
@@ -68,7 +72,7 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(fh)
+ self._logger.addHandler(fh)
self.fh = fh
# This outputs EVERYTHING, intended for post-mortem debugging
@@ -82,10 +86,10 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(verbose_fh)
+ self._logger.addHandler(verbose_fh)
self.verbose_fh = verbose_fh
- super(DTSLOG, self).__init__(self.logger, dict(node=self.node))
+ super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
def logger_exit(self) -> None:
"""
@@ -93,7 +97,7 @@ def logger_exit(self) -> None:
"""
for handler in (self.sh, self.fh, self.verbose_fh):
handler.flush()
- self.logger.removeHandler(handler)
+ self._logger.removeHandler(handler)
def getLogger(name: str, node: str = "suite") -> DTSLOG:
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index a227d8db22..747316c78a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -1,14 +1,30 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
-from framework.config import NodeConfiguration
+"""
+This package provides modules for managing remote connections to a host (node),
+differentiated by OS.
+The package provides a factory function, create_session, that returns the appropriate
+remote connection based on the passed configuration. The differences are in the
+underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""
+
+# pylama:ignore=W0611
+
+from framework.config import OS, NodeConfiguration
+from framework.exception import ConfigurationError
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
-from .ssh_session import SSHSession
+from .linux_session import LinuxSession
+from .os_session import OSSession
+from .remote import RemoteSession, SSHSession
-def create_remote_session(
+def create_session(
node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
- return SSHSession(node_config, name, logger)
+) -> OSSession:
+ match node_config.os:
+ case OS.linux:
+ return LinuxSession(node_config, name, logger)
+ case _:
+ raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
new file mode 100644
index 0000000000..9d14166077
--- /dev/null
+++ b/dts/framework/remote_session/linux_session.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .posix_session import PosixSession
+
+
+class LinuxSession(PosixSession):
+ """
+ The implementation of non-Posix compliant parts of Linux remote sessions.
+ """
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
new file mode 100644
index 0000000000..7a4cc5e669
--- /dev/null
+++ b/dts/framework/remote_session/os_session.py
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from abc import ABC
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote import RemoteSession, create_remote_session
+
+
+class OSSession(ABC):
+ """
+ The OS classes create a DTS node remote session and implement OS specific
+    The OS classes create a DTS node remote session and implement OS-specific
+    behavior. There are a few control methods implemented by the base class; the
+    rest need to be implemented by derived classes.
+
+ _config: NodeConfiguration
+ name: str
+ _logger: DTSLOG
+ remote_session: RemoteSession
+
+ def __init__(
+ self,
+ node_config: NodeConfiguration,
+ name: str,
+ logger: DTSLOG,
+ ):
+ self._config = node_config
+ self.name = name
+ self._logger = logger
+ self.remote_session = create_remote_session(node_config, name, logger)
+
+ def close(self, force: bool = False) -> None:
+ """
+ Close the remote session.
+ """
+ self.remote_session.close(force)
+
+ def is_alive(self) -> bool:
+ """
+ Check whether the remote session is still responding.
+ """
+ return self.remote_session.is_alive()
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
new file mode 100644
index 0000000000..110b6a4804
--- /dev/null
+++ b/dts/framework/remote_session/posix_session.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .os_session import OSSession
+
+
+class PosixSession(OSSession):
+ """
+ An intermediary class implementing the Posix compliant parts of
+ Linux and other OS remote sessions.
+ """
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
new file mode 100644
index 0000000000..f3092f8bbe
--- /dev/null
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote_session import RemoteSession
+from .ssh_session import SSHSession
+
+
+def create_remote_session(
+ node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+ return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
similarity index 61%
rename from dts/framework/remote_session/remote_session.py
rename to dts/framework/remote_session/remote/remote_session.py
index 33047d9d0a..7c7b30225f 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import dataclasses
from abc import ABC, abstractmethod
@@ -19,14 +19,23 @@ class HistoryRecord:
class RemoteSession(ABC):
+ """
+    The base class defining which methods must be implemented in order to connect
+    to a remote host (node) and maintain a remote session. Derived classes are
+    expected to use some underlying transport protocol (e.g. SSH) to implement
+    these methods. On top of that, the base class provides basic services common
+    to all derived classes, such as keeping a history and logging what's being
+    executed on the remote node.
+ """
+
name: str
hostname: str
ip: str
port: int | None
username: str
password: str
- logger: DTSLOG
history: list[HistoryRecord]
+ _logger: DTSLOG
_node_config: NodeConfiguration
def __init__(
@@ -46,31 +55,34 @@ def __init__(
self.port = int(port)
self.username = node_config.user
self.password = node_config.password or ""
- self.logger = logger
self.history = []
- self.logger.info(f"Connecting to {self.username}@{self.hostname}.")
+ self._logger = logger
+ self._logger.info(f"Connecting to {self.username}@{self.hostname}.")
self._connect()
- self.logger.info(f"Connection to {self.username}@{self.hostname} successful.")
+ self._logger.info(f"Connection to {self.username}@{self.hostname} successful.")
@abstractmethod
def _connect(self) -> None:
"""
Create connection to assigned node.
"""
- pass
def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
- self.logger.info(f"Sending: {command}")
+ """
+ Send a command and return the output.
+ """
+ self._logger.info(f"Sending: {command}")
out = self._send_command(command, timeout)
- self.logger.debug(f"Received from {command}: {out}")
+ self._logger.debug(f"Received from {command}: {out}")
self._history_add(command=command, output=out)
return out
@abstractmethod
def _send_command(self, command: str, timeout: float) -> str:
"""
- Send a command and return the output.
+ Use the underlying protocol to execute the command and return the output
+ of the command.
"""
def _history_add(self, command: str, output: str) -> None:
@@ -79,17 +91,20 @@ def _history_add(self, command: str, output: str) -> None:
)
def close(self, force: bool = False) -> None:
- self.logger.logger_exit()
+ """
+ Close the remote session and free all used resources.
+ """
+ self._logger.logger_exit()
self._close(force)
@abstractmethod
def _close(self, force: bool = False) -> None:
"""
- Close the remote session, freeing all used resources.
+ Execute protocol specific steps needed to close the session properly.
"""
@abstractmethod
def is_alive(self) -> bool:
"""
- Check whether the session is still responding.
+ Check whether the remote session is still responding.
"""
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/ssh_session.py
rename to dts/framework/remote_session/remote/ssh_session.py
index 7ec327054d..96175f5284 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import time
@@ -17,7 +17,7 @@
class SSHSession(RemoteSession):
"""
- Module for creating Pexpect SSH sessions to a node.
+ Module for creating Pexpect SSH remote sessions.
"""
session: pxssh.pxssh
@@ -56,9 +56,9 @@ def _connect(self) -> None:
)
break
except Exception as e:
- self.logger.warning(e)
+ self._logger.warning(e)
time.sleep(2)
- self.logger.info(
+ self._logger.info(
f"Retrying connection: retry number {retry_attempt + 1}."
)
else:
@@ -67,13 +67,13 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
except Exception as e:
- self.logger.error(RED(str(e)))
+ self._logger.error(RED(str(e)))
if getattr(self, "port", None):
suggestion = (
f"\nSuggestion: Check if the firewall on {self.hostname} is "
f"stopped.\n"
)
- self.logger.info(GREEN(suggestion))
+ self._logger.info(GREEN(suggestion))
raise SSHConnectionError(self.hostname)
@@ -87,8 +87,8 @@ def send_expect(
try:
retval = int(ret_status)
if retval:
- self.logger.error(f"Command: {command} failure!")
- self.logger.error(ret)
+ self._logger.error(f"Command: {command} failure!")
+ self._logger.error(ret)
return retval
else:
return ret
@@ -97,7 +97,7 @@ def send_expect(
else:
return ret
except Exception as e:
- self.logger.error(
+ self._logger.error(
f"Exception happened in [{command}] and output is "
f"[{self._get_output()}]"
)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 800f2c7b7f..6422b23499 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
import argparse
@@ -23,7 +23,7 @@ def __init__(
default: str = None,
type: Callable[[str], _T | argparse.FileType | None] = None,
choices: Iterable[_T] | None = None,
- required: bool = True,
+ required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None:
@@ -63,13 +63,17 @@ class _Settings:
def _get_parser() -> argparse.ArgumentParser:
- parser = argparse.ArgumentParser(description="DPDK test framework.")
+ parser = argparse.ArgumentParser(
+ description="Run DPDK test suites. All options may be specified with "
+ "the environment variables provided in brackets. "
+ "Command line arguments have higher priority.",
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+ )
parser.add_argument(
"--config-file",
action=_env_arg("DTS_CFG_FILE"),
default="conf.yaml",
- required=False,
help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
"and targets.",
)
@@ -79,7 +83,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--output",
action=_env_arg("DTS_OUTPUT_DIR"),
default="output",
- required=False,
help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
)
@@ -88,7 +91,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
- required=False,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
)
@@ -98,7 +100,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--verbose",
action=_env_arg("DTS_VERBOSE"),
default="N",
- required=False,
help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
"to the console.",
)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index c5512e5812..8ead9db482 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -1,7 +1,13 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-This module contains the classes used to model the physical traffic generator,
+This package contains the classes used to model the physical traffic generator,
system under test and any other components that need to be interacted with.
"""
+
+# pylama:ignore=W0611
+
+from .node import Node
+from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 8437975416..a37f7921e0 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -1,62 +1,118 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
A node is a generic host that DTS connects to and manages.
"""
-from framework.config import NodeConfiguration
+from framework.config import (
+ BuildTargetConfiguration,
+ ExecutionConfiguration,
+ NodeConfiguration,
+)
from framework.logger import DTSLOG, getLogger
-from framework.remote_session import RemoteSession, create_remote_session
-from framework.settings import SETTINGS
+from framework.remote_session import OSSession, create_session
class Node(object):
"""
- Basic module for node management. This module implements methods that
+ Basic class for node management. This class implements methods that
manage a node, such as information gathering (of CPU/PCI/NIC) and
environment setup.
"""
+ main_session: OSSession
+ config: NodeConfiguration
name: str
- main_session: RemoteSession
- logger: DTSLOG
- _config: NodeConfiguration
- _other_sessions: list[RemoteSession]
+ _logger: DTSLOG
+ _other_sessions: list[OSSession]
def __init__(self, node_config: NodeConfiguration):
- self._config = node_config
+ self.config = node_config
+ self.name = node_config.name
+ self._logger = getLogger(self.name)
+ self.main_session = create_session(self.config, self.name, self._logger)
+
self._other_sessions = []
- self.name = node_config.name
- self.logger = getLogger(self.name)
- self.logger.info(f"Created node: {self.name}")
- self.main_session = create_remote_session(self._config, self.name, self.logger)
+ self._logger.info(f"Created node: {self.name}")
- def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str:
+ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
- Send commands to node and return string before timeout.
+ Perform the execution setup that will be done for each execution
+ this node is part of.
"""
+ self._set_up_execution(execution_config)
- return self.main_session.send_command(cmds, timeout)
+ def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
- def create_session(self, name: str) -> RemoteSession:
- connection = create_remote_session(
- self._config,
+ def tear_down_execution(self) -> None:
+ """
+ Perform the execution teardown that will be done after each execution
+ this node is part of concludes.
+ """
+ self._tear_down_execution()
+
+ def _tear_down_execution(self) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Perform the build target setup that will be done for each build target
+ tested on this node.
+ """
+ self._set_up_build_target(build_target_config)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def tear_down_build_target(self) -> None:
+ """
+ Perform the build target teardown that will be done after each build target
+ tested on this node.
+ """
+ self._tear_down_build_target()
+
+ def _tear_down_build_target(self) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def create_session(self, name: str) -> OSSession:
+ """
+ Create and return a new OSSession tailored to the remote OS.
+ """
+ connection = create_session(
+ self.config,
name,
getLogger(name, node=self.name),
)
self._other_sessions.append(connection)
return connection
- def node_exit(self) -> None:
+ def close(self) -> None:
"""
- Recover all resource before node exit
+ Close all connections and free other resources.
"""
if self.main_session:
self.main_session.close()
for session in self._other_sessions:
session.close()
- self.logger.logger_exit()
+ self._logger.logger_exit()
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
new file mode 100644
index 0000000000..42acb6f9b2
--- /dev/null
+++ b/dts/framework/testbed_model/sut_node.py
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from .node import Node
+
+
+class SutNode(Node):
+ """
+ A class for managing connections to the System under Test, providing
+ methods that retrieve the necessary information about the node (such as
+ CPU, memory and NIC details) and configuration capabilities.
+ """
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 02/10] dts: add ssh command verification
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 01/10] dts: add node and os abstractions Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 03/10] dts: add dpdk build on sut Juraj Linkeš
` (10 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
This is a basic capability needed to check whether command execution
was successful. If it was not, a RemoteCommandExecutionError is raised.
When a failure is expected, the caller is supposed to catch the exception.
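
As an illustration, a caller that expects a failure might use the new
verify parameter like this (a minimal sketch; the remote_session object
and the command are hypothetical, while send_command's verify parameter
and RemoteCommandExecutionError are the ones added in this patch):

    from framework.exception import RemoteCommandExecutionError

    try:
        # verify=True raises when the command exits with a non-zero code
        result = remote_session.send_command("ls /tmp/does-not-exist", verify=True)
    except RemoteCommandExecutionError as e:
        # the failed command and its exit status are stored on the exception
        print(e.command, e.command_return_code)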
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 23 +++++++-
.../remote_session/remote/remote_session.py | 55 +++++++++++++------
.../remote_session/remote/ssh_session.py | 12 +++-
3 files changed, 69 insertions(+), 21 deletions(-)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 121a0f7296..e776b42bd9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -21,7 +21,8 @@ class ErrorSeverity(IntEnum):
NO_ERR = 0
GENERIC_ERR = 1
CONFIG_ERR = 2
- SSH_ERR = 3
+ REMOTE_CMD_EXEC_ERR = 3
+ SSH_ERR = 4
class DTSError(Exception):
@@ -90,3 +91,23 @@ class ConfigurationError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
+
+
+class RemoteCommandExecutionError(DTSError):
+ """
+ Raised when a command executed on a Node returns a non-zero exit status.
+ """
+
+ command: str
+ command_return_code: int
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+ def __init__(self, command: str, command_return_code: int):
+ self.command = command
+ self.command_return_code = command_return_code
+
+ def __str__(self) -> str:
+ return (
+ f"Command {self.command} returned a non-zero exit code: "
+ f"{self.command_return_code}"
+ )
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 7c7b30225f..5ac395ec79 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -7,15 +7,29 @@
from abc import ABC, abstractmethod
from framework.config import NodeConfiguration
+from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
@dataclasses.dataclass(slots=True, frozen=True)
-class HistoryRecord:
+class CommandResult:
+ """
+ The result of remote execution of a command.
+ """
+
name: str
command: str
- output: str | int
+ stdout: str
+ stderr: str
+ return_code: int
+
+ def __str__(self) -> str:
+ return (
+ f"stdout: '{self.stdout}'\n"
+ f"stderr: '{self.stderr}'\n"
+ f"return_code: '{self.return_code}'"
+ )
class RemoteSession(ABC):
@@ -34,7 +48,7 @@ class RemoteSession(ABC):
port: int | None
username: str
password: str
- history: list[HistoryRecord]
+ history: list[CommandResult]
_logger: DTSLOG
_node_config: NodeConfiguration
@@ -68,28 +82,33 @@ def _connect(self) -> None:
Create connection to assigned node.
"""
- def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
+ def send_command(
+ self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ ) -> CommandResult:
"""
- Send a command and return the output.
+ Send a command to the connected node and return CommandResult.
+ If verify is True, check the return code of the executed command
+ and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: {command}")
- out = self._send_command(command, timeout)
- self._logger.debug(f"Received from {command}: {out}")
- self._history_add(command=command, output=out)
- return out
+ self._logger.info(f"Sending: '{command}'")
+ result = self._send_command(command, timeout)
+ if verify and result.return_code:
+ self._logger.debug(
+ f"Command '{command}' failed with return code '{result.return_code}'"
+ )
+ self._logger.debug(f"stdout: '{result.stdout}'")
+ self._logger.debug(f"stderr: '{result.stderr}'")
+ raise RemoteCommandExecutionError(command, result.return_code)
+ self._logger.debug(f"Received from '{command}':\n{result}")
+ self.history.append(result)
+ return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return the output
- of the command.
+ Use the underlying protocol to execute the command and return CommandResult.
"""
- def _history_add(self, command: str, output: str) -> None:
- self.history.append(
- HistoryRecord(name=self.name, command=command, output=output)
- )
-
def close(self, force: bool = False) -> None:
"""
Close the remote session and free all used resources.
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 96175f5284..c2362e2fdf 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -12,7 +12,7 @@
from framework.logger import DTSLOG
from framework.utils import GREEN, RED
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
class SSHSession(RemoteSession):
@@ -66,6 +66,7 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
+ self.send_expect("bind 'set enable-bracketed-paste off'", "#")
except Exception as e:
self._logger.error(RED(str(e)))
if getattr(self, "port", None):
@@ -163,7 +164,14 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
+ output = self._send_command_get_output(command, timeout)
+ return_code = int(self._send_command_get_output("echo $?", timeout))
+
+ # we're capturing only stdout
+ return CommandResult(self.name, command, output, "", return_code)
+
+ def _send_command_get_output(self, command: str, timeout: float) -> str:
try:
self._clean_session()
self._send_line(command)
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 03/10] dts: add dpdk build on sut
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 01/10] dts: add node and os abstractions Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 02/10] dts: add ssh command verification Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 04/10] dts: add dpdk execution handling Juraj Linkeš
` (9 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Add the ability to build DPDK and apps on the SUT, using a configured
target.
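
As an illustration, the new build step could be driven like this (a minimal
sketch; the configuration values are examples and the execution index is
hypothetical, while BuildTargetConfiguration, SutNode and
set_up_build_target come from this series):

    from framework.config import CONFIGURATION, BuildTargetConfiguration
    from framework.testbed_model import SutNode

    # a build target as it would be parsed from conf.yaml
    build_target = BuildTargetConfiguration.from_dict(
        {"arch": "x86_64", "os": "linux", "cpu": "native", "compiler": "gcc"}
    )

    sut_node = SutNode(CONFIGURATION.executions[0].system_under_test)
    # copies and extracts the DPDK tarball, then runs meson setup and ninja
    sut_node.set_up_build_target(build_target)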
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/config/__init__.py | 2 +
dts/framework/exception.py | 17 ++
dts/framework/remote_session/os_session.py | 90 +++++++++-
dts/framework/remote_session/posix_session.py | 127 ++++++++++++++
.../remote_session/remote/remote_session.py | 38 ++++-
.../remote_session/remote/ssh_session.py | 68 +++++++-
dts/framework/settings.py | 44 ++++-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/dpdk.py | 33 ++++
dts/framework/testbed_model/sut_node.py | 158 ++++++++++++++++++
dts/framework/utils.py | 19 ++-
11 files changed, 581 insertions(+), 16 deletions(-)
create mode 100644 dts/framework/testbed_model/dpdk.py
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index e3e2d74eac..ca61cb10fe 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -91,6 +91,7 @@ class BuildTargetConfiguration:
os: OS
cpu: CPUType
compiler: Compiler
+ compiler_wrapper: str
name: str
@staticmethod
@@ -100,6 +101,7 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
os=OS(d["os"]),
cpu=CPUType(d["cpu"]),
compiler=Compiler(d["compiler"]),
+ compiler_wrapper=d.get("compiler_wrapper", ""),
name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index e776b42bd9..b4545a5a40 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -23,6 +23,7 @@ class ErrorSeverity(IntEnum):
CONFIG_ERR = 2
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
+ DPDK_BUILD_ERR = 10
class DTSError(Exception):
@@ -111,3 +112,19 @@ def __str__(self) -> str:
f"Command {self.command} returned a non-zero exit code: "
f"{self.command_return_code}"
)
+
+
+class RemoteDirectoryExistsError(DTSError):
+ """
+ Raised when a remote directory to be created already exists.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+
+class DPDKBuildError(DTSError):
+ """
+ Raised when DPDK build fails for any reason.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 7a4cc5e669..06d1ffefdd 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -2,10 +2,14 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
-from abc import ABC
+from abc import ABC, abstractmethod
+from pathlib import PurePath
-from framework.config import NodeConfiguration
+from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
+from framework.settings import SETTINGS
+from framework.testbed_model import MesonArgs
+from framework.utils import EnvVarsDict
from .remote import RemoteSession, create_remote_session
@@ -44,3 +48,85 @@ def is_alive(self) -> bool:
Check whether the remote session is still responding.
"""
return self.remote_session.is_alive()
+
+ @abstractmethod
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
+ """
+ Try to find DPDK remote dir in remote_dir.
+ """
+
+ @abstractmethod
+ def get_remote_tmp_dir(self) -> PurePath:
+ """
+ Get the path of the temporary directory of the remote OS.
+ """
+
+ @abstractmethod
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for the target architecture. Get
+ information from the node if needed.
+ """
+
+ @abstractmethod
+ def join_remote_path(self, *args: str | PurePath) -> PurePath:
+ """
+ Join path parts using the path separator that fits the remote OS.
+ """
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file
+ on the remote Node associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated remote Node to destination_file on local storage.
+ """
+
+ @abstractmethod
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ """
+ Remove remote directory, by default remove recursively and forcefully.
+ """
+
+ @abstractmethod
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ """
+        Extract remote tarball in place. If expected_dir is given, check
+        whether the directory exists after extracting the archive.
+ """
+
+ @abstractmethod
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ """
+ Build DPDK in the input dir with specified environment variables and meson
+ arguments.
+ """
+
+ @abstractmethod
+ def get_dpdk_version(self, version_path: str | PurePath) -> str:
+ """
+ Inspect DPDK version on the remote node from version_path.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index 110b6a4804..7a5c38c36e 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,14 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from pathlib import PurePath, PurePosixPath
+
+from framework.config import Architecture
+from framework.exception import DPDKBuildError, RemoteCommandExecutionError
+from framework.settings import SETTINGS
+from framework.testbed_model import MesonArgs
+from framework.utils import EnvVarsDict
+
from .os_session import OSSession
@@ -10,3 +18,122 @@ class PosixSession(OSSession):
An intermediary class implementing the Posix compliant parts of
Linux and other OS remote sessions.
"""
+
+ @staticmethod
+ def combine_short_options(**opts: bool) -> str:
+ ret_opts = ""
+ for opt, include in opts.items():
+ if include:
+ ret_opts = f"{ret_opts}{opt}"
+
+ if ret_opts:
+ ret_opts = f" -{ret_opts}"
+
+ return ret_opts
+
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+ remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
+ result = self.remote_session.send_command(f"ls -d {remote_guess} | tail -1")
+ return PurePosixPath(result.stdout)
+
+ def get_remote_tmp_dir(self) -> PurePosixPath:
+ return PurePosixPath("/tmp")
+
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for i686 arch build. Get information
+ from the node if needed.
+ """
+ env_vars = {}
+ if arch == Architecture.i686:
+ # find the pkg-config path and store it in PKG_CONFIG_LIBDIR
+ out = self.remote_session.send_command("find /usr -type d -name pkgconfig")
+ pkg_path = ""
+ res_path = out.stdout.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert pkg_path != "", "i386 pkg-config path not found"
+
+ env_vars["CFLAGS"] = "-m32"
+ env_vars["PKG_CONFIG_LIBDIR"] = pkg_path
+
+ return env_vars
+
+ def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+ return PurePosixPath(*args)
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ self.remote_session.copy_file(source_file, destination_file, source_remote)
+
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ opts = PosixSession.combine_short_options(r=recursive, f=force)
+ self.remote_session.send_command(f"rm{opts} {remote_dir_path}")
+
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ self.remote_session.send_command(
+ f"tar xfm {remote_tarball_path} "
+ f"-C {PurePosixPath(remote_tarball_path).parent}",
+ 60,
+ )
+ if expected_dir:
+ self.remote_session.send_command(f"ls {expected_dir}", verify=True)
+
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ try:
+ if rebuild:
+ # reconfigure, then build
+ self._logger.info("Reconfiguring DPDK build.")
+ self.remote_session.send_command(
+ f"meson configure {meson_args} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+ else:
+ # fresh build - remove target dir first, then build from scratch
+ self._logger.info("Configuring DPDK build from scratch.")
+ self.remove_remote_dir(remote_dpdk_build_dir)
+ self.remote_session.send_command(
+ f"meson setup "
+ f"{meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+
+ self._logger.info("Building DPDK.")
+ self.remote_session.send_command(
+ f"ninja -C {remote_dpdk_build_dir}", timeout, verify=True, env=env_vars
+ )
+ except RemoteCommandExecutionError as e:
+ raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
+
+    def get_dpdk_version(self, version_path: str | PurePath) -> str:
+        out = self.remote_session.send_command(
+            f"cat {self.join_remote_path(version_path, 'VERSION')}", verify=True
+ )
+ return out.stdout
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 5ac395ec79..91dee3cb4f 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -5,11 +5,13 @@
import dataclasses
from abc import ABC, abstractmethod
+from pathlib import PurePath
from framework.config import NodeConfiguration
from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
@dataclasses.dataclass(slots=True, frozen=True)
@@ -83,15 +85,22 @@ def _connect(self) -> None:
"""
def send_command(
- self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ self,
+ command: str,
+ timeout: float = SETTINGS.timeout,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
) -> CommandResult:
"""
- Send a command to the connected node and return CommandResult.
+ Send a command to the connected node using optional env vars
+ and return CommandResult.
If verify is True, check the return code of the executed command
and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: '{command}'")
- result = self._send_command(command, timeout)
+ self._logger.info(
+ f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
+ )
+ result = self._send_command(command, timeout, env)
if verify and result.return_code:
self._logger.debug(
f"Command '{command}' failed with return code '{result.return_code}'"
@@ -104,9 +113,12 @@ def send_command(
return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> CommandResult:
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return CommandResult.
+ Use the underlying protocol to execute the command using optional env vars
+ and return CommandResult.
"""
def close(self, force: bool = False) -> None:
@@ -127,3 +139,17 @@ def is_alive(self) -> bool:
"""
Check whether the remote session is still responding.
"""
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file on the remote Node
+ associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated Node to destination_file on local filesystem.
+ """
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index c2362e2fdf..757bfaed64 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -4,13 +4,15 @@
# Copyright(c) 2022-2023 University of New Hampshire
import time
+from pathlib import PurePath
+import pexpect # type: ignore
from pexpect import pxssh # type: ignore
from framework.config import NodeConfiguration
from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
from framework.logger import DTSLOG
-from framework.utils import GREEN, RED
+from framework.utils import GREEN, RED, EnvVarsDict
from .remote_session import CommandResult, RemoteSession
@@ -164,16 +166,22 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> CommandResult:
- output = self._send_command_get_output(command, timeout)
- return_code = int(self._send_command_get_output("echo $?", timeout))
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
+ output = self._send_command_get_output(command, timeout, env)
+ return_code = int(self._send_command_get_output("echo $?", timeout, None))
# we're capturing only stdout
return CommandResult(self.name, command, output, "", return_code)
- def _send_command_get_output(self, command: str, timeout: float) -> str:
+ def _send_command_get_output(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> str:
try:
self._clean_session()
+ if env:
+ command = f"{env} {command}"
self._send_line(command)
except Exception as e:
raise e
@@ -190,3 +198,53 @@ def _close(self, force: bool = False) -> None:
else:
if self.is_alive():
self.session.logout()
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Send a local file to a remote host.
+ """
+ if source_remote:
+ source_file = f"{self.username}@{self.ip}:{source_file}"
+ else:
+ destination_file = f"{self.username}@{self.ip}:{destination_file}"
+
+ port = ""
+ if self.port:
+ port = f" -P {self.port}"
+
+ # this is not OS agnostic, find a Pythonic (and thus OS agnostic) way
+ # TODO Fabric should handle this
+ command = (
+ f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes"
+ f" {source_file} {destination_file}"
+ )
+
+ self._spawn_scp(command)
+
+ def _spawn_scp(self, scp_cmd: str) -> None:
+ """
+ Transfer a file with SCP
+ """
+ self._logger.info(scp_cmd)
+ p: pexpect.spawn = pexpect.spawn(scp_cmd)
+ time.sleep(0.5)
+ ssh_newkey: str = "Are you sure you want to continue connecting"
+ i: int = p.expect(
+ [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+ )
+ if i == 0: # add once in trust list
+ p.sendline("yes")
+ i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+ if i == 1:
+ time.sleep(0.5)
+ p.sendline(self.password)
+ p.expect("Exit status 0", 60)
+ if i == 4:
+ self._logger.error("SCP TIMEOUT error %d" % i)
+ p.close()
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 6422b23499..f787187ade 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -7,8 +7,11 @@
import os
from collections.abc import Callable, Iterable, Sequence
from dataclasses import dataclass
+from pathlib import Path
from typing import Any, TypeVar
+from .exception import ConfigurationError
+
_T = TypeVar("_T")
@@ -60,6 +63,9 @@ class _Settings:
output_dir: str
timeout: float
verbose: bool
+ skip_setup: bool
+ dpdk_tarball_path: Path
+ compile_timeout: float
def _get_parser() -> argparse.ArgumentParser:
@@ -91,6 +97,7 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
+ type=float,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
)
@@ -104,16 +111,51 @@ def _get_parser() -> argparse.ArgumentParser:
"to the console.",
)
+ parser.add_argument(
+ "-s",
+ "--skip-setup",
+ action=_env_arg("DTS_SKIP_SETUP"),
+ default="N",
+ help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+ )
+
+ parser.add_argument(
+ "--tarball",
+ "--snapshot",
+ action=_env_arg("DTS_DPDK_TARBALL"),
+ default="dpdk.tar.xz",
+ type=Path,
+ help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball "
+ "which will be used in testing.",
+ )
+
+ parser.add_argument(
+ "--compile-timeout",
+ action=_env_arg("DTS_COMPILE_TIMEOUT"),
+ default=1200,
+ type=float,
+ help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
+ )
+
return parser
+def _check_tarball_path(parsed_args: argparse.Namespace) -> None:
+ if not os.path.exists(parsed_args.tarball):
+ raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.")
+
+
def _get_settings() -> _Settings:
parsed_args = _get_parser().parse_args()
+ _check_tarball_path(parsed_args)
return _Settings(
config_file_path=parsed_args.config_file,
output_dir=parsed_args.output_dir,
- timeout=float(parsed_args.timeout),
+ timeout=parsed_args.timeout,
verbose=(parsed_args.verbose == "Y"),
+ skip_setup=(parsed_args.skip_setup == "Y"),
+ dpdk_tarball_path=parsed_args.tarball,
+ compile_timeout=parsed_args.compile_timeout,
)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ead9db482..96e2ab7c3f 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,5 +9,6 @@
# pylama:ignore=W0611
+from .dpdk import MesonArgs
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/dpdk.py b/dts/framework/testbed_model/dpdk.py
new file mode 100644
index 0000000000..0526974f72
--- /dev/null
+++ b/dts/framework/testbed_model/dpdk.py
@@ -0,0 +1,33 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Various utilities used for configuring, building and running DPDK.
+"""
+
+
+class MesonArgs(object):
+ """
+ Aggregate the arguments needed to build DPDK:
+ default_library: Default library type, Meson allows "shared", "static" and "both".
+ Defaults to None, in which case the argument won't be used.
+    Keyword arguments: The arguments found in meson_options.txt in the root DPDK directory.
+ Do not use -D with them, for example: enable_kmods=True.
+ """
+
+ default_library: str
+
+ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+ self.default_library = (
+ f"--default-library={default_library}" if default_library else ""
+ )
+ self.dpdk_args = " ".join(
+ (
+ f"-D{dpdk_arg_name}={dpdk_arg_value}"
+ for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
+ )
+ )
+
+ def __str__(self) -> str:
+ return " ".join(f"{self.default_library} {self.dpdk_args}".split())
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 42acb6f9b2..442a41bdc8 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -2,6 +2,15 @@
# Copyright(c) 2010-2014 Intel Corporation
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+import os
+import tarfile
+from pathlib import PurePath
+
+from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, skip_setup
+
+from .dpdk import MesonArgs
from .node import Node
@@ -10,4 +19,153 @@ class SutNode(Node):
A class for managing connections to the System under Test, providing
methods that retrieve the necessary information about the node (such as
CPU, memory and NIC details) and configuration capabilities.
+ Another key capability is building DPDK according to given build target.
"""
+
+ _build_target_config: BuildTargetConfiguration | None
+ _env_vars: EnvVarsDict
+ _remote_tmp_dir: PurePath
+ __remote_dpdk_dir: PurePath | None
+ _dpdk_version: str | None
+ _app_compile_timeout: float
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(SutNode, self).__init__(node_config)
+ self._build_target_config = None
+ self._env_vars = EnvVarsDict()
+ self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
+ self.__remote_dpdk_dir = None
+ self._dpdk_version = None
+ self._app_compile_timeout = 90
+
+ @property
+ def _remote_dpdk_dir(self) -> PurePath:
+ if self.__remote_dpdk_dir is None:
+ self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
+ return self.__remote_dpdk_dir
+
+ @_remote_dpdk_dir.setter
+ def _remote_dpdk_dir(self, value: PurePath) -> None:
+ self.__remote_dpdk_dir = value
+
+ @property
+ def remote_dpdk_build_dir(self) -> PurePath:
+ if self._build_target_config:
+ return self.main_session.join_remote_path(
+ self._remote_dpdk_dir, self._build_target_config.name
+ )
+ else:
+ return self.main_session.join_remote_path(self._remote_dpdk_dir, "build")
+
+ @property
+ def dpdk_version(self) -> str:
+ if self._dpdk_version is None:
+ self._dpdk_version = self.main_session.get_dpdk_version(
+ self._remote_dpdk_dir
+ )
+ return self._dpdk_version
+
+ def _guess_dpdk_remote_dir(self) -> PurePath:
+ return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+        Set up DPDK on the SUT node.
+ """
+ self._configure_build_target(build_target_config)
+ self._copy_dpdk_tarball()
+ self._build_dpdk()
+
+ def _configure_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Populate common environment variables and set build target config.
+ """
+ self._env_vars = EnvVarsDict()
+ self._build_target_config = build_target_config
+ self._env_vars.update(
+ self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
+ )
+ self._env_vars["CC"] = build_target_config.compiler.name
+ if build_target_config.compiler_wrapper:
+ self._env_vars["CC"] = (
+ f"'{build_target_config.compiler_wrapper} "
+ f"{build_target_config.compiler.name}'"
+ )
+
+ @skip_setup
+ def _copy_dpdk_tarball(self) -> None:
+ """
+        Copy the DPDK tarball to the SUT node and extract it there.
+ """
+ self._logger.info("Copying DPDK tarball to SUT.")
+ self.main_session.copy_file(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
+
+ # construct remote tarball path
+ # the basename is the same on local host and on remote Node
+ remote_tarball_path = self.main_session.join_remote_path(
+ self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_tarball_path)
+ )
+
+ # construct remote path after extracting
+ with tarfile.open(SETTINGS.dpdk_tarball_path) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+ self._remote_dpdk_dir = self.main_session.join_remote_path(
+ self._remote_tmp_dir, dpdk_top_dir
+ )
+
+ self._logger.info(
+ f"Extracting DPDK tarball on SUT: "
+ f"'{remote_tarball_path}' into '{self._remote_dpdk_dir}'."
+ )
+ # clean remote path where we're extracting
+ self.main_session.remove_remote_dir(self._remote_dpdk_dir)
+
+ # then extract to remote path
+ self.main_session.extract_remote_tarball(
+ remote_tarball_path, self._remote_dpdk_dir
+ )
+
+ @skip_setup
+ def _build_dpdk(self) -> None:
+ """
+ Build DPDK. Uses the already configured target. Assumes that the tarball has
+ already been copied to and extracted on the SUT node.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ )
+
+ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
+ """
+ Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
+ When app_name is 'all', build all example apps.
+ When app_name is any other string, tries to build that example app.
+ Return the directory path of the built app. If building all apps, return
+ the path to the examples directory (where all apps reside).
+        The meson_dpdk_args are keyword arguments
+        found in meson_options.txt in the root DPDK directory. Do not prefix them
+        with -D, for example: enable_kmods=True.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(examples=app_name, **meson_dpdk_args),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ rebuild=True,
+ timeout=self._app_compile_timeout,
+ )
+
+ if app_name == "all":
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples"
+ )
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
+ )
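A sketch of the method above in use (sut_node is a hypothetical, already
initialized SutNode with DPDK built):

    # build a single example app and get its remote path
    app_path = sut_node.build_dpdk_app("helloworld")
    # app_path points to <remote_dpdk_build_dir>/examples/dpdk-helloworld
    # building all example apps returns the examples directory instead
    examples_dir = sut_node.build_dpdk_app("all")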
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index c28c8f1082..611071604b 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -1,9 +1,12 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
+from typing import Callable
+
+from .settings import SETTINGS
def check_dts_python_version() -> None:
@@ -22,9 +25,21 @@ def check_dts_python_version() -> None:
print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+def skip_setup(func) -> Callable[..., None]:
+ if SETTINGS.skip_setup:
+ return lambda *args: None
+ else:
+ return func
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+
+class EnvVarsDict(dict):
+ def __str__(self) -> str:
+ return " ".join(["=".join(item) for item in self.items()])
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 04/10] dts: add dpdk execution handling
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (2 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 05/10] dts: add node memory setup Juraj Linkeš
` (8 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Add methods for setting up and shutting down DPDK apps and for
constructing EAL parameters.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 4 +
dts/framework/config/__init__.py | 8 +
dts/framework/config/conf_yaml_schema.json | 25 ++
dts/framework/remote_session/linux_session.py | 18 ++
dts/framework/remote_session/os_session.py | 23 +-
dts/framework/remote_session/posix_session.py | 83 ++++++
dts/framework/testbed_model/__init__.py | 8 +
dts/framework/testbed_model/dpdk.py | 45 +++
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 274 ++++++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 +
dts/framework/testbed_model/node.py | 43 +++
dts/framework/testbed_model/sut_node.py | 81 +++++-
dts/framework/utils.py | 20 ++
14 files changed, 673 insertions(+), 2 deletions(-)
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 03696d2bab..1648e5c3c5 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,4 +13,8 @@ nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ arch: x86_64
os: linux
+ lcores: ""
+ use_first_core: false
+ memory_channels: 4
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ca61cb10fe..17b917f3b3 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -72,7 +72,11 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ arch: Architecture
os: OS
+ lcores: str
+ use_first_core: bool
+ memory_channels: int
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -81,7 +85,11 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ arch=Architecture(d["arch"]),
os=OS(d["os"]),
+ lcores=d.get("lcores", "1"),
+ use_first_core=d.get("use_first_core", False),
+ memory_channels=d.get("memory_channels", 1),
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 9170307fbe..334b4bd8ab 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,14 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64",
+ "arm64",
+ "ppc64le"
+ ]
+ },
"OS": {
"type": "string",
"enum": [
@@ -92,8 +100,24 @@
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
},
+ "arch": {
+ "$ref": "#/definitions/ARCH"
+ },
"os": {
"$ref": "#/definitions/OS"
+ },
+ "lcores": {
+ "type": "string",
+ "pattern": "^(([0-9]+|([0-9]+-[0-9]+))(,([0-9]+|([0-9]+-[0-9]+)))*)?$",
+ "description": "Optional comma-separated list of logical cores to use, e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all lcores."
+ },
+ "use_first_core": {
+ "type": "boolean",
+ "description": "Indicate whether DPDK should use the first physical core. It won't be used by default."
+ },
+ "memory_channels": {
+ "type": "integer",
+ "description": "How many memory channels to use. Optional, defaults to 1."
}
},
"additionalProperties": false,
@@ -101,6 +125,7 @@
"name",
"hostname",
"user",
+ "arch",
"os"
]
},
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 9d14166077..c49b6bb1d7 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.testbed_model import LogicalCore
+
from .posix_session import PosixSession
@@ -9,3 +11,19 @@ class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
"""
+
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ cpu_info = self.remote_session.send_command(
+ "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#"
+ ).stdout
+ lcores = []
+ for cpu_line in cpu_info.splitlines():
+ lcore, core, socket, node = map(int, cpu_line.split(","))
+ if core == 0 and socket == 0 and not use_first_core:
+ self._logger.info("Not using the first physical core.")
+ continue
+ lcores.append(LogicalCore(lcore, core, socket, node))
+ return lcores
+
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ return dpdk_prefix
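For reference, the lscpu output parsed above looks like this (an illustrative
two-core system with hyperthreading):

    # lscpu -p=CPU,CORE,SOCKET,NODE | grep -v \#
    #   0,0,0,0
    #   1,1,0,0
    #   2,0,0,0  <- second hyperthread of core 0
    # Each line becomes LogicalCore(lcore, core, socket, node); with
    # use_first_core=False, lines with core == 0 and socket == 0 are skipped.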
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 06d1ffefdd..c30753e0b8 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -3,12 +3,13 @@
# Copyright(c) 2023 University of New Hampshire
from abc import ABC, abstractmethod
+from collections.abc import Iterable
from pathlib import PurePath
from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.settings import SETTINGS
-from framework.testbed_model import MesonArgs
+from framework.testbed_model import LogicalCore, MesonArgs
from framework.utils import EnvVarsDict
from .remote import RemoteSession, create_remote_session
@@ -130,3 +131,23 @@ def get_dpdk_version(self, version_path: str | PurePath) -> str:
"""
Inspect DPDK version on the remote node from version_path.
"""
+
+ @abstractmethod
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ """
+ Compose a list of LogicalCores present on the remote node.
+ If use_first_core is False, the first physical core won't be used.
+ """
+
+ @abstractmethod
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ """
+ Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
+ dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+ """
+
+ @abstractmethod
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ """
+ Get the DPDK file prefix that will be used when running DPDK apps.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index 7a5c38c36e..9b05464d65 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import re
+from collections.abc import Iterable
from pathlib import PurePath, PurePosixPath
from framework.config import Architecture
@@ -137,3 +139,84 @@ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
)
return out.stdout
+
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ self._logger.info("Cleaning up DPDK apps.")
+ dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
+ if dpdk_runtime_dirs:
+ # kill and cleanup only if DPDK is running
+ dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs)
+ for dpdk_pid in dpdk_pids:
+ self.remote_session.send_command(f"kill -9 {dpdk_pid}", 20)
+ self._check_dpdk_hugepages(dpdk_runtime_dirs)
+ self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
+
+ def _get_dpdk_runtime_dirs(
+ self, dpdk_prefix_list: Iterable[str]
+ ) -> list[PurePosixPath]:
+ prefix = PurePosixPath("/var", "run", "dpdk")
+ if not dpdk_prefix_list:
+ remote_prefixes = self._list_remote_dirs(prefix)
+ if not remote_prefixes:
+ dpdk_prefix_list = []
+ else:
+ dpdk_prefix_list = remote_prefixes
+
+ return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
+
+ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
+ """
+        Return a list of directories in remote_path.
+ If remote_path doesn't exist, return None.
+ """
+ out = self.remote_session.send_command(
+ f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+ ).stdout
+ if "No such file or directory" in out:
+ return None
+ else:
+ return out.splitlines()
+
+ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+ pids = []
+ pid_regex = r"p(\d+)"
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config")
+ if self._remote_files_exists(dpdk_config_file):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {dpdk_config_file}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ for out_line in out.splitlines():
+ match = re.match(pid_regex, out_line)
+ if match:
+ pids.append(int(match.group(1)))
+ return pids
+
+ def _remote_files_exists(self, remote_path: PurePath) -> bool:
+ result = self.remote_session.send_command(f"test -e {remote_path}")
+ return not result.return_code
+
+ def _check_dpdk_hugepages(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
+ if self._remote_files_exists(hugepage_info):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {hugepage_info}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ self._logger.warning("Some DPDK processes did not free hugepages.")
+ self._logger.warning("*******************************************")
+ self._logger.warning(out)
+ self._logger.warning("*******************************************")
+
+ def _remove_dpdk_runtime_dirs(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ self.remove_remote_dir(dpdk_runtime_dir)
+
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ return ""
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 96e2ab7c3f..22c7c16708 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -10,5 +10,13 @@
# pylama:ignore=W0611
from .dpdk import MesonArgs
+from .hw import (
+ LogicalCore,
+ LogicalCoreCount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ VirtualDevice,
+ lcore_filter,
+)
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/dpdk.py b/dts/framework/testbed_model/dpdk.py
index 0526974f72..9b3a9e7381 100644
--- a/dts/framework/testbed_model/dpdk.py
+++ b/dts/framework/testbed_model/dpdk.py
@@ -6,6 +6,8 @@
Various utilities used for configuring, building and running DPDK.
"""
+from .hw import LogicalCoreList, VirtualDevice
+
class MesonArgs(object):
"""
@@ -31,3 +33,46 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
def __str__(self) -> str:
return " ".join(f"{self.default_library} {self.dpdk_args}".split())
+
+
+class EalParameters(object):
+ def __init__(
+ self,
+ lcore_list: LogicalCoreList,
+ memory_channels: int,
+ prefix: str,
+ no_pci: bool,
+ vdevs: list[VirtualDevice],
+ other_eal_param: str,
+ ):
+ """
+        Generate an EAL parameter string.
+        :param lcore_list: the list of logical cores to use.
+        :param memory_channels: the number of memory channels to use.
+        :param prefix: the file prefix string, e.g.:
+            prefix='vf'
+        :param no_pci: a switch to disable the PCI bus, e.g.:
+            no_pci=True
+        :param vdevs: a list of virtual devices, e.g.:
+            vdevs=[VirtualDevice('net_ring0'), VirtualDevice('net_ring1')]
+        :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+            other_eal_param='--single-file-segments'
+ """
+ self._lcore_list = f"-l {lcore_list}"
+ self._memory_channels = f"-n {memory_channels}"
+ self._prefix = prefix
+ if prefix:
+ self._prefix = f"--file-prefix={prefix}"
+ self._no_pci = "--no-pci" if no_pci else ""
+ self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs)
+ self._other_eal_param = other_eal_param
+
+ def __str__(self) -> str:
+ return (
+ f"{self._lcore_list} "
+ f"{self._memory_channels} "
+ f"{self._prefix} "
+ f"{self._no_pci} "
+ f"{self._vdevs} "
+ f"{self._other_eal_param}"
+ )
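A sketch of what EalParameters renders, using the classes from this patch
(values are illustrative):

    params = EalParameters(
        lcore_list=LogicalCoreList("0,1,2-3"),
        memory_channels=4,
        prefix="dpdk_test",
        no_pci=True,
        vdevs=[VirtualDevice("net_ring0")],
        other_eal_param="--single-file-segments",
    )
    print(params)
    # -l 0-3 -n 4 --file-prefix=dpdk_test --no-pci --vdev net_ring0 --single-file-segments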
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
new file mode 100644
index 0000000000..88ccac0b0e
--- /dev/null
+++ b/dts/framework/testbed_model/hw/__init__.py
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from .cpu import (
+ LogicalCore,
+ LogicalCoreCount,
+ LogicalCoreCountFilter,
+ LogicalCoreFilter,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+)
+from .virtual_device import VirtualDevice
+
+
+def lcore_filter(
+ core_list: list[LogicalCore],
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool,
+) -> LogicalCoreFilter:
+ if isinstance(filter_specifier, LogicalCoreList):
+ return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+ elif isinstance(filter_specifier, LogicalCoreCount):
+ return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+ else:
+ raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py
new file mode 100644
index 0000000000..d1918a12dc
--- /dev/null
+++ b/dts/framework/testbed_model/hw/cpu.py
@@ -0,0 +1,274 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+import dataclasses
+from abc import ABC, abstractmethod
+from collections.abc import Iterable, ValuesView
+from dataclasses import dataclass
+
+from framework.utils import expand_range
+
+
+@dataclass(slots=True, frozen=True)
+class LogicalCore(object):
+ """
+ Representation of a CPU core. A physical core is represented in OS
+ by multiple logical cores (lcores) if CPU multithreading is enabled.
+ """
+
+ lcore: int
+ core: int
+ socket: int
+ node: int
+
+ def __int__(self) -> int:
+ return self.lcore
+
+
+class LogicalCoreList(object):
+ """
+ Convert these options into a list of logical core ids.
+ lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
+ lcore_list=[0,1,2,3] - a list of int indices
+ lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
+ lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
+
+ The class creates a unified format used across the framework and allows
+ the user to use either a str representation (using str(instance) or directly
+ in f-strings) or a list representation (by accessing instance.lcore_list).
+ Empty lcore_list is allowed.
+ """
+
+ _lcore_list: list[int]
+ _lcore_str: str
+
+ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+ self._lcore_list = []
+ if isinstance(lcore_list, str):
+ lcore_list = lcore_list.split(",")
+ for lcore in lcore_list:
+ if isinstance(lcore, str):
+ self._lcore_list.extend(expand_range(lcore))
+ else:
+ self._lcore_list.append(int(lcore))
+
+ # the input lcores may not be sorted
+ self._lcore_list.sort()
+ self._lcore_str = (
+ f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+ )
+
+ @property
+ def lcore_list(self) -> list[int]:
+ return self._lcore_list
+
+ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
+ formatted_core_list = []
+ segment = lcore_ids_list[:1]
+ for lcore_id in lcore_ids_list[1:]:
+ if lcore_id - segment[-1] == 1:
+ segment.append(lcore_id)
+ else:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}"
+ if len(segment) > 1
+ else f"{segment[0]}"
+ )
+ current_core_index = lcore_ids_list.index(lcore_id)
+ formatted_core_list.extend(
+ self._get_consecutive_lcores_range(
+ lcore_ids_list[current_core_index:]
+ )
+ )
+ segment.clear()
+ break
+ if len(segment) > 0:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+ )
+ return formatted_core_list
+
+ def __str__(self) -> str:
+ return self._lcore_str
+
+
+@dataclasses.dataclass(slots=True, frozen=True)
+class LogicalCoreCount(object):
+ """
+ Define the number of logical cores to use.
+ If sockets is not None, socket_count is ignored.
+ """
+
+ lcores_per_core: int = 1
+ cores_per_socket: int = 2
+ socket_count: int = 1
+ sockets: list[int] | None = None
+
+
+class LogicalCoreFilter(ABC):
+ """
+ Filter according to the input filter specifier. Each filter needs to be
+ implemented in a derived class.
+ This class only implements operations common to all filters, such as sorting
+ the list to be filtered beforehand.
+ """
+
+ _filter_specifier: LogicalCoreCount | LogicalCoreList
+ _lcores_to_filter: list[LogicalCore]
+
+ def __init__(
+ self,
+ lcore_list: list[LogicalCore],
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool = True,
+ ):
+ self._filter_specifier = filter_specifier
+
+ # sorting by core is needed in case hyperthreading is enabled
+ self._lcores_to_filter = sorted(
+ lcore_list, key=lambda x: x.core, reverse=not ascending
+ )
+ self.filter()
+
+ @abstractmethod
+ def filter(self) -> list[LogicalCore]:
+ """
+ Use self._filter_specifier to filter self._lcores_to_filter
+ and return the list of filtered LogicalCores.
+ self._lcores_to_filter is a sorted copy of the original list,
+ so it may be modified.
+ """
+
+
+class LogicalCoreCountFilter(LogicalCoreFilter):
+ """
+ Filter the input list of LogicalCores according to specified rules:
+ Use cores from the specified number of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_count.
+    From each of those sockets, use only cores_per_socket cores.
+    And from each of those cores, use lcores_per_core logical cores. Hyperthreading
+    must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+
+ _filter_specifier: LogicalCoreCount
+
+ def filter(self) -> list[LogicalCore]:
+ sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
+ filtered_lcores = []
+ for socket_to_filter in sockets_to_filter:
+ filtered_lcores.extend(self._filter_cores_from_socket(socket_to_filter))
+ return filtered_lcores
+
+ def _filter_sockets(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> ValuesView[list[LogicalCore]]:
+ """
+ Remove all lcores that don't match the specified socket(s).
+ If self._filter_specifier.sockets is not None, keep lcores from those sockets,
+ otherwise keep lcores from the first
+ self._filter_specifier.socket_count sockets.
+ """
+ allowed_sockets: set[int] = set()
+ socket_count = self._filter_specifier.socket_count
+ if self._filter_specifier.sockets:
+ socket_count = len(self._filter_specifier.sockets)
+ allowed_sockets = set(self._filter_specifier.sockets)
+
+ filtered_lcores: dict[int, list[LogicalCore]] = {}
+ for lcore in lcores_to_filter:
+ if not self._filter_specifier.sockets:
+ if len(allowed_sockets) < socket_count:
+ allowed_sockets.add(lcore.socket)
+ if lcore.socket in allowed_sockets:
+ if lcore.socket in filtered_lcores:
+ filtered_lcores[lcore.socket].append(lcore)
+ else:
+ filtered_lcores[lcore.socket] = [lcore]
+
+ if len(allowed_sockets) < socket_count:
+ raise ValueError(
+ f"The actual number of sockets from which to use cores "
+ f"({len(allowed_sockets)}) is lower than required ({socket_count})."
+ )
+
+ return filtered_lcores.values()
+
+ def _filter_cores_from_socket(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> list[LogicalCore]:
+ """
+ Keep only the first self._filter_specifier.cores_per_socket cores.
+ In multithreaded environments, keep only
+ the first self._filter_specifier.lcores_per_core lcores of those cores.
+ """
+
+        # no need to use an ordered dict: since Python 3.7 the built-in dict
+        # preserves insertion order.
+ lcore_count_per_core_map: dict[int, int] = {}
+ filtered_lcores = []
+ for lcore in lcores_to_filter:
+ if lcore.core in lcore_count_per_core_map:
+ current_core_lcore_count = lcore_count_per_core_map[lcore.core]
+ if self._filter_specifier.lcores_per_core > current_core_lcore_count:
+ # only add lcores of the given core
+ lcore_count_per_core_map[lcore.core] += 1
+ filtered_lcores.append(lcore)
+ else:
+ # we have enough lcores per this core
+ continue
+ elif self._filter_specifier.cores_per_socket > len(
+ lcore_count_per_core_map
+ ):
+ # only add cores if we need more
+ lcore_count_per_core_map[lcore.core] = 1
+ filtered_lcores.append(lcore)
+ else:
+ # we have enough cores
+ break
+
+ cores_per_socket = len(lcore_count_per_core_map)
+ if cores_per_socket < self._filter_specifier.cores_per_socket:
+ raise ValueError(
+ f"The actual number of cores per socket ({cores_per_socket}) "
+ f"is lower than required ({self._filter_specifier.cores_per_socket})."
+ )
+
+ lcores_per_core = lcore_count_per_core_map[filtered_lcores[-1].core]
+ if lcores_per_core < self._filter_specifier.lcores_per_core:
+ raise ValueError(
+ f"The actual number of logical cores per core ({lcores_per_core}) "
+ f"is lower than required ({self._filter_specifier.lcores_per_core})."
+ )
+
+ return filtered_lcores
+
+
+class LogicalCoreListFilter(LogicalCoreFilter):
+ """
+ Filter the input list of Logical Cores according to the input list of
+ lcore indices.
+ An empty LogicalCoreList won't filter anything.
+ """
+
+ _filter_specifier: LogicalCoreList
+
+ def filter(self) -> list[LogicalCore]:
+ if not len(self._filter_specifier.lcore_list):
+ return self._lcores_to_filter
+
+ filtered_lcores = []
+ for core in self._lcores_to_filter:
+ if core.lcore in self._filter_specifier.lcore_list:
+ filtered_lcores.append(core)
+
+ if len(filtered_lcores) != len(self._filter_specifier.lcore_list):
+ raise ValueError(
+ f"Not all logical cores from {self._filter_specifier.lcore_list} "
+ f"were found among {self._lcores_to_filter}"
+ )
+
+ return filtered_lcores
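To make the filter semantics concrete, a small sketch (an illustrative 8-lcore,
4-core, single-socket layout):

    lcores = [LogicalCore(lcore=i, core=i % 4, socket=0, node=0) for i in range(8)]
    # one lcore on each of two cores from one socket, lowest core ids first
    spec = LogicalCoreCount(lcores_per_core=1, cores_per_socket=2, socket_count=1)
    print(LogicalCoreList(LogicalCoreCountFilter(lcores, spec).filter()))  # 0-1
    # an explicit list of lcore ids
    explicit = LogicalCoreListFilter(lcores, LogicalCoreList("4-6")).filter()
    print(LogicalCoreList(explicit))  # 4-6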
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/hw/virtual_device.py
new file mode 100644
index 0000000000..eb664d9f17
--- /dev/null
+++ b/dts/framework/testbed_model/hw/virtual_device.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+
+class VirtualDevice(object):
+ """
+ Base class for virtual devices used by DPDK.
+ """
+
+ name: str
+
+ def __init__(self, name: str):
+ self.name = name
+
+ def __str__(self) -> str:
+ return self.name
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index a37f7921e0..b93b9d238e 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -15,6 +15,14 @@
from framework.logger import DTSLOG, getLogger
from framework.remote_session import OSSession, create_session
+from .hw import (
+ LogicalCore,
+ LogicalCoreCount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ lcore_filter,
+)
+
class Node(object):
"""
@@ -26,6 +34,7 @@ class Node(object):
main_session: OSSession
config: NodeConfiguration
name: str
+ lcores: list[LogicalCore]
_logger: DTSLOG
_other_sessions: list[OSSession]
@@ -35,6 +44,12 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._get_remote_cpus()
+ # filter the node lcores according to user config
+ self.lcores = LogicalCoreListFilter(
+ self.lcores, LogicalCoreList(self.config.lcores)
+ ).filter()
+
self._other_sessions = []
self._logger.info(f"Created node: {self.name}")
@@ -107,6 +122,34 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
+ def filter_lcores(
+ self,
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool = True,
+ ) -> list[LogicalCore]:
+ """
+ Filter the LogicalCores found on the Node according to
+ a LogicalCoreCount or a LogicalCoreList.
+
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
+ return lcore_filter(
+ self.lcores,
+ filter_specifier,
+ ascending,
+ ).filter()
+
+ def _get_remote_cpus(self) -> None:
+ """
+ Scan CPUs in the remote OS and store a list of LogicalCores.
+ """
+ self._logger.info("Getting CPU information.")
+ self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+
def close(self) -> None:
"""
Close all connections and free other resources.
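A sketch of filter_lcores in use (node is a hypothetical initialized Node):

    # four specific lcores by id
    lcores = node.filter_lcores(LogicalCoreList("4-7"))
    # or a count-based specifier: two cores from one socket, one lcore per core
    lcores = node.filter_lcores(LogicalCoreCount(cores_per_socket=2, socket_count=1))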
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 442a41bdc8..a9ae2e4a6f 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -4,13 +4,16 @@
import os
import tarfile
+import time
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.remote_session import OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, skip_setup
-from .dpdk import MesonArgs
+from .dpdk import EalParameters, MesonArgs
+from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
from .node import Node
@@ -22,21 +25,29 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ _dpdk_prefix_list: list[str]
+ _dpdk_timestamp: str
_build_target_config: BuildTargetConfiguration | None
_env_vars: EnvVarsDict
_remote_tmp_dir: PurePath
__remote_dpdk_dir: PurePath | None
_dpdk_version: str | None
_app_compile_timeout: float
+ _dpdk_kill_session: OSSession | None
def __init__(self, node_config: NodeConfiguration):
super(SutNode, self).__init__(node_config)
+ self._dpdk_prefix_list = []
self._build_target_config = None
self._env_vars = EnvVarsDict()
self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
self.__remote_dpdk_dir = None
self._dpdk_version = None
self._app_compile_timeout = 90
+ self._dpdk_kill_session = None
+ self._dpdk_timestamp = (
+ f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
+ )
@property
def _remote_dpdk_dir(self) -> PurePath:
@@ -169,3 +180,71 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
return self.main_session.join_remote_path(
self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
)
+
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """
+        Kill all DPDK applications on the SUT and clean up hugepages.
+ """
+ if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._dpdk_kill_session = self.create_session("dpdk_kill")
+ self._dpdk_prefix_list = []
+
+ def create_eal_parameters(
+ self,
+ lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
+ ascending_cores: bool = True,
+ prefix: str = "dpdk",
+ append_prefix_timestamp: bool = True,
+ no_pci: bool = False,
+        vdevs: list[VirtualDevice] | None = None,
+ other_eal_param: str = "",
+ ) -> EalParameters:
+ """
+        Generate an EAL parameter string.
+        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
+            or a list of lcore ids to use.
+            The default will select one lcore for each of two cores
+            on one socket, in ascending order of core ids.
+        :param ascending_cores: if True, use cores with the lowest numerical id
+            first and continue in ascending order. If False, start with the
+            highest id and continue in descending order. This ordering
+            affects which sockets to consider first as well.
+        :param prefix: the file prefix string, e.g.:
+            prefix='vf'
+        :param append_prefix_timestamp: if True, append a timestamp to the
+            DPDK file prefix.
+        :param no_pci: a switch to disable the PCI bus, e.g.:
+            no_pci=True
+        :param vdevs: a list of virtual devices, e.g.:
+            vdevs=[VirtualDevice('net_ring0'), VirtualDevice('net_ring1')]
+        :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+            other_eal_param='--single-file-segments'
+        :return: the EAL parameter string, e.g.:
+            '-l 0-3 -n 4 --file-prefix=dpdk_1112_20190809143420'
+ """
+
+ lcore_list = LogicalCoreList(
+ self.filter_lcores(lcore_filter_specifier, ascending_cores)
+ )
+
+ if append_prefix_timestamp:
+ prefix = f"{prefix}_{self._dpdk_timestamp}"
+ prefix = self.main_session.get_dpdk_file_prefix(prefix)
+ if prefix:
+ self._dpdk_prefix_list.append(prefix)
+
+ if vdevs is None:
+ vdevs = []
+
+ return EalParameters(
+ lcore_list=lcore_list,
+ memory_channels=self.config.memory_channels,
+ prefix=prefix,
+ no_pci=no_pci,
+ vdevs=vdevs,
+ other_eal_param=other_eal_param,
+ )
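A sketch of the resulting string (sut_node is a hypothetical initialized
SutNode with memory_channels: 4 and lcores 2-5 available):

    eal = sut_node.create_eal_parameters(
        lcore_filter_specifier=LogicalCoreList("2-5"),
        prefix="hello",
        no_pci=True,
    )
    print(eal)  # -l 2-5 -n 4 --file-prefix=hello_<pid>_<timestamp> --no-pci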
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 611071604b..eebe76f16c 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,6 +32,26 @@ def skip_setup(func) -> Callable[..., None]:
return func
+def expand_range(range_str: str) -> list[int]:
+ """
+ Process range string into a list of integers. There are two possible formats:
+ n - a single integer
+ n-m - a range of integers
+
+ The returned range includes both n and m. Empty string returns an empty list.
+ """
+ expanded_range: list[int] = []
+ if range_str:
+ range_boundaries = range_str.split("-")
+        # int() raises an exception when items in range_boundaries can't be
+        # converted, serving as an input check
+ expanded_range.extend(
+ range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+ )
+
+ return expanded_range
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 05/10] dts: add node memory setup
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (3 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 04/10] dts: add dpdk execution handling Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 06/10] dts: add test suite module Juraj Linkeš
` (7 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Set up hugepages on nodes. This is useful not only on SUT nodes, but
also on TG nodes running traffic generators that utilize hugepages.
The setup is opt-in, i.e. users need to supply a hugepage configuration
to instruct DTS to configure them. If not configured, hugepage
configuration will be skipped. This is helpful if users don't want DTS
to tamper with hugepages on their system.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 3 +
dts/framework/config/__init__.py | 14 ++++
dts/framework/config/conf_yaml_schema.json | 21 +++++
dts/framework/remote_session/linux_session.py | 78 +++++++++++++++++++
dts/framework/remote_session/os_session.py | 8 ++
dts/framework/testbed_model/node.py | 12 +++
6 files changed, 136 insertions(+)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1648e5c3c5..6540a45ef7 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -18,3 +18,6 @@ nodes:
lcores: ""
use_first_core: false
memory_channels: 4
+ hugepages: # optional; if removed, will use system hugepage configuration
+ amount: 256
+ force_first_numa: false
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 17b917f3b3..0e5f493c5d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -66,6 +66,12 @@ class Compiler(StrEnum):
#
# Frozen makes the object immutable. This enables further optimizations,
# and makes it thread safe should we every want to move in that direction.
+@dataclass(slots=True, frozen=True)
+class HugepageConfiguration:
+ amount: int
+ force_first_numa: bool
+
+
@dataclass(slots=True, frozen=True)
class NodeConfiguration:
name: str
@@ -77,9 +83,16 @@ class NodeConfiguration:
lcores: str
use_first_core: bool
memory_channels: int
+ hugepages: HugepageConfiguration | None
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
+ hugepage_config = d.get("hugepages")
+ if hugepage_config:
+ if "force_first_numa" not in hugepage_config:
+ hugepage_config["force_first_numa"] = False
+ hugepage_config = HugepageConfiguration(**hugepage_config)
+
return NodeConfiguration(
name=d["name"],
hostname=d["hostname"],
@@ -90,6 +103,7 @@ def from_dict(d: dict) -> "NodeConfiguration":
lcores=d.get("lcores", "1"),
use_first_core=d.get("use_first_core", False),
memory_channels=d.get("memory_channels", 1),
+ hugepages=hugepage_config,
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 334b4bd8ab..56f93def36 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -75,6 +75,24 @@
"cpu",
"compiler"
]
+ },
+ "hugepages": {
+ "type": "object",
+ "description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
+ "properties": {
+ "amount": {
+ "type": "integer",
+ "description": "The amount of hugepages to configure. Hugepage size will be the system default."
+ },
+ "force_first_numa": {
+ "type": "boolean",
+ "description": "Set to True to force configuring hugepages on the first NUMA node. Defaults to False."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "amount"
+ ]
}
},
"type": "object",
@@ -118,6 +136,9 @@
"memory_channels": {
"type": "integer",
"description": "How many memory channels to use. Optional, defaults to 1."
+ },
+ "hugepages": {
+ "$ref": "#/definitions/hugepages"
}
},
"additionalProperties": false,
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index c49b6bb1d7..a1e3bc3a92 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,7 +2,9 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.utils import expand_range
from .posix_session import PosixSession
@@ -27,3 +29,79 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
return dpdk_prefix
+
+ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+ self._logger.info("Getting Hugepage information.")
+ hugepage_size = self._get_hugepage_size()
+ hugepages_total = self._get_hugepages_total()
+ self._numa_nodes = self._get_numa_nodes()
+
+ if force_first_numa or hugepages_total != hugepage_amount:
+ # when forcing numa, we need to clear existing hugepages regardless
+ # of size, so they can be moved to the first numa node
+ self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+ else:
+ self._logger.info("Hugepages already configured.")
+ self._mount_huge_pages()
+
+ def _get_hugepage_size(self) -> int:
+ hugepage_size = self.remote_session.send_command(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
+ ).stdout
+ return int(hugepage_size)
+
+ def _get_hugepages_total(self) -> int:
+ hugepages_total = self.remote_session.send_command(
+ "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ ).stdout
+ return int(hugepages_total)
+
+ def _get_numa_nodes(self) -> list[int]:
+ try:
+ numa_count = self.remote_session.send_command(
+ "cat /sys/devices/system/node/online", verify=True
+ ).stdout
+ numa_range = expand_range(numa_count)
+ except RemoteCommandExecutionError:
+ # the file doesn't exist, meaning the node doesn't support numa
+ numa_range = []
+ return numa_range
+
+ def _mount_huge_pages(self) -> None:
+ self._logger.info("Re-mounting Hugepages.")
+        hugepage_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts"
+        self.remote_session.send_command(f"umount $({hugepage_fs_cmd})")
+        result = self.remote_session.send_command(hugepage_fs_cmd)
+ if result.stdout == "":
+ remote_mount_path = "/mnt/huge"
+ self.remote_session.send_command(f"mkdir -p {remote_mount_path}")
+ self.remote_session.send_command(
+ f"mount -t hugetlbfs nodev {remote_mount_path}"
+ )
+
+ def _supports_numa(self) -> bool:
+        # consider numa supported only if there is more than one numa node;
+        # a single-node system may technically support numa, but there's
+        # no reason to do any numa-specific configuration on it
+ return len(self._numa_nodes) > 1
+
+ def _configure_huge_pages(
+ self, amount: int, size: int, force_first_numa: bool
+ ) -> None:
+ self._logger.info("Configuring Hugepages.")
+ hugepage_config_path = (
+ f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+ )
+ if force_first_numa and self._supports_numa():
+ # clear non-numa hugepages
+ self.remote_session.send_command(
+ f"echo 0 | sudo tee {hugepage_config_path}"
+ )
+ hugepage_config_path = (
+ f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
+ f"/hugepages-{size}kB/nr_hugepages"
+ )
+
+ self.remote_session.send_command(
+ f"echo {amount} | sudo tee {hugepage_config_path}"
+ )
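A sketch of the hugepage setup in use (session is a hypothetical LinuxSession;
a 2048 kB default hugepage size is assumed):

    session.setup_hugepages(hugepage_amount=256, force_first_numa=True)
    # On a multi-numa system this roughly translates to:
    #   echo 0 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    #   echo 256 | sudo tee \
    #     /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # followed by re-mounting hugetlbfs if it isn't already mounted.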
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index c30753e0b8..4753c70e91 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -151,3 +151,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
"""
Get the DPDK file prefix that will be used when running DPDK apps.
"""
+
+ @abstractmethod
+ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+ """
+ Get the node's Hugepage Size, configure the specified amount of hugepages
+ if needed and mount the hugepages if needed.
+        If force_first_numa is True, configure hugepages just on the first NUMA node.
+ """
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index b93b9d238e..2dcbd64e71 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -59,6 +59,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
Perform the execution setup that will be done for each execution
this node is part of.
"""
+ self._setup_hugepages()
self._set_up_execution(execution_config)
def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
@@ -150,6 +151,17 @@ def _get_remote_cpus(self) -> None:
self._logger.info("Getting CPU information.")
self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+ def _setup_hugepages(self):
+ """
+        Set up hugepages on the Node. Different architectures can supply different
+ amounts of memory for hugepages and numa-based hugepage allocation may need
+ to be considered.
+ """
+ if self.config.hugepages:
+ self.main_session.setup_hugepages(
+ self.config.hugepages.amount, self.config.hugepages.force_first_numa
+ )
+
def close(self) -> None:
"""
Close all connections and free other resources.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 06/10] dts: add test suite module
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (4 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 05/10] dts: add node memory setup Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 07/10] dts: add hello world testsuite Juraj Linkeš
` (6 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The module implements the base class that all test suites inherit from.
It implements methods common to all test suites.
The derived test suites implement test cases and any particular setup
needed for the suite or tests.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 +
dts/framework/config/__init__.py | 4 +
dts/framework/config/conf_yaml_schema.json | 10 +
dts/framework/exception.py | 16 ++
dts/framework/settings.py | 24 +++
dts/framework/test_suite.py | 228 +++++++++++++++++++++
6 files changed, 284 insertions(+)
create mode 100644 dts/framework/test_suite.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 6540a45ef7..75e33e8ccf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -8,6 +8,8 @@ executions:
cpu: native
compiler: gcc
compiler_wrapper: ccache
+ perf: false
+ func: true
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 0e5f493c5d..544fceca6a 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -131,6 +131,8 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
+ perf: bool
+ func: bool
system_under_test: NodeConfiguration
@staticmethod
@@ -143,6 +145,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
return ExecutionConfiguration(
build_targets=build_targets,
+ perf=d["perf"],
+ func=d["func"],
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 56f93def36..878ca3aec2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -164,6 +164,14 @@
},
"minimum": 1
},
+ "perf": {
+ "type": "boolean",
+ "description": "Enable performance testing."
+ },
+ "func": {
+ "type": "boolean",
+ "description": "Enable functional testing."
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -171,6 +179,8 @@
"additionalProperties": false,
"required": [
"build_targets",
+ "perf",
+ "func",
"system_under_test"
]
},
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b4545a5a40..ca353d98fc 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -24,6 +24,7 @@ class ErrorSeverity(IntEnum):
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
DPDK_BUILD_ERR = 10
+ TESTCASE_VERIFY_ERR = 20
class DTSError(Exception):
@@ -128,3 +129,18 @@ class DPDKBuildError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
+
+
+class TestCaseVerifyError(DTSError):
+ """
+ Used in test cases to verify the expected behavior.
+ """
+
+ value: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
+
+ def __init__(self, value: str):
+ self.value = value
+
+ def __str__(self) -> str:
+ return repr(self.value)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index f787187ade..4ccc98537d 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -66,6 +66,8 @@ class _Settings:
skip_setup: bool
dpdk_tarball_path: Path
compile_timeout: float
+ test_cases: list
+ re_run: int
def _get_parser() -> argparse.ArgumentParser:
@@ -137,6 +139,26 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
)
+ parser.add_argument(
+ "--test-cases",
+ action=_env_arg("DTS_TESTCASES"),
+ default="",
+ required=False,
+ help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+ "Unknown test cases will be silently ignored.",
+ )
+
+ parser.add_argument(
+ "--re-run",
+ "--re_run",
+ action=_env_arg("DTS_RERUN"),
+ default=0,
+ type=int,
+ required=False,
+ help="[DTS_RERUN] Re-run each test case the specified amount of times "
+ "if a test failure occurs",
+ )
+
return parser
@@ -156,6 +178,8 @@ def _get_settings() -> _Settings:
skip_setup=(parsed_args.skip_setup == "Y"),
dpdk_tarball_path=parsed_args.tarball,
compile_timeout=parsed_args.compile_timeout,
+ test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+ re_run=parsed_args.re_run,
)
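A sketch of the new options in use (the entry point and test case names are
illustrative):

    ./main.py --test-cases test_hello_world_single_core --re-run 2
    # or, equivalently, via environment variables:
    DTS_TESTCASES=test_hello_world_single_core DTS_RERUN=2 ./main.py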
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
new file mode 100644
index 0000000000..9002d43297
--- /dev/null
+++ b/dts/framework/test_suite.py
@@ -0,0 +1,228 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Base class for creating DTS test cases.
+"""
+
+import inspect
+import re
+from collections.abc import MutableSequence
+from types import MethodType
+
+from .exception import SSHTimeoutError, TestCaseVerifyError
+from .logger import DTSLOG, getLogger
+from .settings import SETTINGS
+from .testbed_model import SutNode
+
+
+class TestSuite(object):
+ """
+ The base TestSuite class provides methods for handling basic flow of a test suite:
+ * test case filtering and collection
+ * test suite setup/cleanup
+ * test setup/cleanup
+ * test case execution
+ * error handling and results storage
+ Test cases are implemented by derived classes. Test cases are all methods
+ starting with test_, further divided into performance test cases
+ (starting with test_perf_) and functional test cases (all other test cases).
+    By default, all test cases will be executed. A list of test case names
+ may be specified in conf.yaml or on the command line
+ to filter which test cases to run.
+ The methods named [set_up|tear_down]_[suite|test_case] should be overridden
+ in derived classes if the appropriate suite/test case fixtures are needed.
+ """
+
+ sut_node: SutNode
+ _logger: DTSLOG
+ _test_cases_to_run: list[str]
+ _func: bool
+ _errors: MutableSequence[Exception]
+
+ def __init__(
+ self,
+ sut_node: SutNode,
+ test_cases: list[str],
+ func: bool,
+ errors: MutableSequence[Exception],
+ ):
+ self.sut_node = sut_node
+ self._logger = getLogger(self.__class__.__name__)
+ self._test_cases_to_run = test_cases
+ self._test_cases_to_run.extend(SETTINGS.test_cases)
+ self._func = func
+ self._errors = errors
+
+ def set_up_suite(self) -> None:
+ """
+ Set up test fixtures common to all test cases; this is done before
+ any test case is run.
+ """
+
+ def tear_down_suite(self) -> None:
+ """
+ Tear down the previously created test fixtures common to all test cases.
+ """
+
+ def set_up_test_case(self) -> None:
+ """
+ Set up test fixtures before each test case.
+ """
+
+ def tear_down_test_case(self) -> None:
+ """
+ Tear down the previously created test fixtures after each test case.
+ """
+
+ def verify(self, condition: bool, failure_description: str) -> None:
+ if not condition:
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on SUT:"
+ )
+ for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ raise TestCaseVerifyError(failure_description)
+
+ def run(self) -> None:
+ """
+ Setup, execute and teardown the whole suite.
+ Suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+ """
+ test_suite_name = self.__class__.__name__
+
+ try:
+ self._logger.info(f"Starting test suite setup: {test_suite_name}")
+ self.set_up_suite()
+ self._logger.info(f"Test suite setup successful: {test_suite_name}")
+ except Exception as e:
+ self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
+ self._errors.append(e)
+
+ else:
+ self._execute_test_suite()
+
+ finally:
+ try:
+ self.tear_down_suite()
+ self.sut_node.kill_cleanup_dpdk_apps()
+ except Exception as e:
+ self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
+ self._logger.warning(
+ f"Test suite '{test_suite_name}' teardown failed, "
+ f"the next test suite may be affected."
+ )
+ self._errors.append(e)
+
+ def _execute_test_suite(self) -> None:
+ """
+ Execute all test cases scheduled to be executed in this suite.
+ """
+ if self._func:
+ for test_case_method in self._get_functional_test_cases():
+ all_attempts = SETTINGS.re_run + 1
+ attempt_nr = 1
+ while (
+ not self._run_test_case(test_case_method)
+ and attempt_nr < all_attempts
+ ):
+ attempt_nr += 1
+ self._logger.info(
+ f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Attempt number {attempt_nr} out of {all_attempts}."
+ )
+
+ def _get_functional_test_cases(self) -> list[MethodType]:
+ """
+ Get all functional test cases.
+ """
+ return self._get_test_cases(r"test_(?!perf_)")
+
+ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
+ """
+ Return a list of test cases matching test_case_regex.
+ """
+ self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
+ filtered_test_cases = []
+ for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod):
+ if self._should_be_executed(test_case_name, test_case_regex):
+ filtered_test_cases.append(test_case)
+ cases_str = ", ".join((x.__name__ for x in filtered_test_cases))
+ self._logger.debug(
+ f"Found test cases '{cases_str}' in {self.__class__.__name__}."
+ )
+ return filtered_test_cases
+
+ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
+ """
+ Check whether the test case should be executed.
+ """
+ match = bool(re.match(test_case_regex, test_case_name))
+ if self._test_cases_to_run:
+ return match and test_case_name in self._test_cases_to_run
+
+ return match
+
+ def _run_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Setup, execute and teardown a test case in this suite.
+ Exceptions are caught and recorded in logs.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+
+ try:
+ # run set_up function for each case
+ self.set_up_test_case()
+ except SSHTimeoutError as e:
+ self._logger.exception(f"Test case setup FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case setup ERROR: {test_case_name}")
+ self._errors.append(e)
+
+ else:
+ # run test case if setup was successful
+ result = self._execute_test_case(test_case_method)
+
+ finally:
+ try:
+ self.tear_down_test_case()
+ except Exception as e:
+ self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
+ self._logger.warning(
+ f"Test case '{test_case_name}' teardown failed, "
+ f"the next test case may be affected."
+ )
+ self._errors.append(e)
+ result = False
+
+ return result
+
+ def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Execute one test case and handle failures.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+ try:
+ self._logger.info(f"Starting test case execution: {test_case_name}")
+ test_case_method()
+ result = True
+ self._logger.info(f"Test case execution PASSED: {test_case_name}")
+
+ except TestCaseVerifyError as e:
+ self._logger.exception(f"Test case execution FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case execution ERROR: {test_case_name}")
+ self._errors.append(e)
+ except KeyboardInterrupt:
+ self._logger.error(
+ f"Test case execution INTERRUPTED by user: {test_case_name}"
+ )
+ raise KeyboardInterrupt("Stop DTS")
+
+ return result
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 07/10] dts: add hello world testsuite
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (5 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 06/10] dts: add test suite module Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 08/10] dts: add test suite config and runner Juraj Linkeš
` (5 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The test suite implements test cases defined in the corresponding test
plan.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 2 +-
dts/framework/remote_session/os_session.py | 16 ++++-
.../remote_session/remote/__init__.py | 2 +-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/sut_node.py | 12 +++-
dts/tests/TestSuite_hello_world.py | 64 +++++++++++++++++++
6 files changed, 93 insertions(+), 4 deletions(-)
create mode 100644 dts/tests/TestSuite_hello_world.py
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 747316c78a..ee221503df 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,7 @@
from .linux_session import LinuxSession
from .os_session import OSSession
-from .remote import RemoteSession, SSHSession
+from .remote import CommandResult, RemoteSession, SSHSession
def create_session(
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 4753c70e91..2731a7ca0a 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,7 +12,7 @@
from framework.testbed_model import LogicalCore, MesonArgs
from framework.utils import EnvVarsDict
-from .remote import RemoteSession, create_remote_session
+from .remote import CommandResult, RemoteSession, create_remote_session
class OSSession(ABC):
@@ -50,6 +50,20 @@ def is_alive(self) -> bool:
"""
return self.remote_session.is_alive()
+ def send_command(
+ self,
+ command: str,
+ timeout: float,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
+ ) -> CommandResult:
+ """
+ An all-purpose API in case the command to be executed is already
+ OS-agnostic, such as when the path to the executed command has been
+ constructed beforehand.
+ """
+ return self.remote_session.send_command(command, timeout, verify, env)
+
@abstractmethod
def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
"""
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index f3092f8bbe..8a1512210a 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -6,7 +6,7 @@
from framework.config import NodeConfiguration
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
from .ssh_session import SSHSession
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 22c7c16708..485fc57703 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -13,6 +13,7 @@
from .hw import (
LogicalCore,
LogicalCoreCount,
+ LogicalCoreCountFilter,
LogicalCoreList,
LogicalCoreListFilter,
VirtualDevice,
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index a9ae2e4a6f..12b4c5573b 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -8,7 +8,7 @@
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
-from framework.remote_session import OSSession
+from framework.remote_session import CommandResult, OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, skip_setup
@@ -248,3 +248,13 @@ def create_eal_parameters(
vdevs=vdevs,
other_eal_param=other_eal_param,
)
+
+ def run_dpdk_app(
+ self, app_path: PurePath, eal_args: EalParameters, timeout: float = 30
+ ) -> CommandResult:
+ """
+ Run a DPDK application on the remote node.
+ """
+ return self.main_session.send_command(
+ f"{app_path} {eal_args}", timeout, verify=True
+ )
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..7e3d95c0cf
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+
+"""
+Run the helloworld example app and verify it prints a message for each used core.
+No other EAL parameters apart from cores are used.
+"""
+
+from framework.test_suite import TestSuite
+from framework.testbed_model import (
+ LogicalCoreCount,
+ LogicalCoreCountFilter,
+ LogicalCoreList,
+)
+
+
+class TestHelloWorld(TestSuite):
+ def set_up_suite(self) -> None:
+ """
+ Setup:
+ Build the app we're about to test - helloworld.
+ """
+ self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
+
+ def test_hello_world_single_core(self) -> None:
+ """
+ Steps:
+ Run the helloworld app on the first usable logical core.
+ Verify:
+ The app prints a message from the used core:
+ "hello from core <core_id>"
+ """
+
+ # get the first usable core
+ lcore_amount = LogicalCoreCount(1, 1, 1)
+ lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=lcore_amount
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
+ self.verify(
+ f"hello from core {int(lcores[0])}" in result.stdout,
+ f"helloworld didn't start on lcore{lcores[0]}",
+ )
+
+ def test_hello_world_all_cores(self) -> None:
+ """
+ Steps:
+ Run the helloworld app on all usable logical cores.
+ Verify:
+ The app prints a message from all used cores:
+ "hello from core <core_id>"
+ """
+
+ # run on all available logical cores
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para, 50)
+ for lcore in self.sut_node.lcores:
+ self.verify(
+ f"hello from core {int(lcore)}" in result.stdout,
+ f"helloworld didn't start on lcore{lcore}",
+ )
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 08/10] dts: add test suite config and runner
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (6 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 07/10] dts: add hello world testsuite Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 09/10] dts: add test results module Juraj Linkeš
` (4 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The config allows users to specify which test suites, and which test
cases within those suites, to run.
Also add test suite running capabilities to the DTS runner.
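For illustration, a test suite entry in conf.yaml may be either a bare
suite name or a mapping that narrows the run to a subset of its test
cases, e.g. (the case name below comes from the hello_world suite in
this series):

  test_suites:
    - hello_world

or

  test_suites:
    - suite: hello_world
      cases:
        - test_hello_world_single_core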
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 ++
dts/framework/config/__init__.py | 29 +++++++++++++++-
dts/framework/config/conf_yaml_schema.json | 40 ++++++++++++++++++++++
dts/framework/dts.py | 19 ++++++++++
dts/framework/test_suite.py | 24 ++++++++++++-
5 files changed, 112 insertions(+), 2 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 75e33e8ccf..a9bd8a3ecf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -10,6 +10,8 @@ executions:
compiler_wrapper: ccache
perf: false
func: true
+ test_suites:
+ - hello_world
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 544fceca6a..ebb0823ff5 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any
+from typing import Any, TypedDict
import warlock # type: ignore
import yaml
@@ -128,11 +128,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
)
+class TestSuiteConfigDict(TypedDict):
+ suite: str
+ cases: list[str]
+
+
+@dataclass(slots=True, frozen=True)
+class TestSuiteConfig:
+ test_suite: str
+ test_cases: list[str]
+
+ @staticmethod
+ def from_dict(
+ entry: str | TestSuiteConfigDict,
+ ) -> "TestSuiteConfig":
+ if isinstance(entry, str):
+ return TestSuiteConfig(test_suite=entry, test_cases=[])
+ elif isinstance(entry, dict):
+ return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
+ else:
+ raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
perf: bool
func: bool
+ test_suites: list[TestSuiteConfig]
system_under_test: NodeConfiguration
@staticmethod
@@ -140,6 +163,9 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
+ test_suites: list[TestSuiteConfig] = list(
+ map(TestSuiteConfig.from_dict, d["test_suites"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
@@ -147,6 +173,7 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets=build_targets,
perf=d["perf"],
func=d["func"],
+ test_suites=test_suites,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 878ca3aec2..ca2d4a1ef2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -93,6 +93,32 @@
"required": [
"amount"
]
+ },
+ "test_suite": {
+ "type": "string",
+ "enum": [
+ "hello_world"
+ ]
+ },
+ "test_target": {
+ "type": "object",
+ "properties": {
+ "suite": {
+ "$ref": "#/definitions/test_suite"
+ },
+ "cases": {
+ "type": "array",
+ "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.",
+ "items": {
+ "type": "string"
+ },
+ "minimum": 1
+ }
+ },
+ "required": [
+ "suite"
+ ],
+ "additionalProperties": false
}
},
"type": "object",
@@ -172,6 +198,19 @@
"type": "boolean",
"description": "Enable functional testing."
},
+ "test_suites": {
+ "type": "array",
+ "items": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/test_suite"
+ },
+ {
+ "$ref": "#/definitions/test_target"
+ }
+ ]
+ }
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -181,6 +220,7 @@
"build_targets",
"perf",
"func",
+ "test_suites",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 6ea7c6e736..f98000450f 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -8,6 +8,7 @@
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
@@ -132,6 +133,24 @@ def _run_suites(
with possibly only a subset of test cases.
If no subset is specified, run all test cases.
"""
+ for test_suite_config in execution.test_suites:
+ try:
+ full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
+ test_suite_classes = get_test_suites(full_suite_path)
+ suites_str = ", ".join((x.__name__ for x in test_suite_classes))
+ dts_logger.debug(
+ f"Found test suites '{suites_str}' in '{full_suite_path}'."
+ )
+ except Exception as e:
+ dts_logger.exception("An error occurred when searching for test suites.")
+ errors.append(e)
+
+ else:
+ for test_suite_class in test_suite_classes:
+ test_suite = test_suite_class(
+ sut_node, test_suite_config.test_cases, execution.func, errors
+ )
+ test_suite.run()
def _exit_dts() -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 9002d43297..12bf3b6420 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -6,12 +6,13 @@
Base class for creating DTS test cases.
"""
+import importlib
import inspect
import re
from collections.abc import MutableSequence
from types import MethodType
-from .exception import SSHTimeoutError, TestCaseVerifyError
+from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .testbed_model import SutNode
@@ -226,3 +227,24 @@ def _execute_test_case(self, test_case_method: MethodType) -> bool:
raise KeyboardInterrupt("Stop DTS")
return result
+
+
+def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+ def is_test_suite(object) -> bool:
+ try:
+ if issubclass(object, TestSuite) and object is not TestSuite:
+ return True
+ except TypeError:
+ return False
+ return False
+
+ try:
+ testcase_module = importlib.import_module(testsuite_module_path)
+ except ModuleNotFoundError as e:
+ raise ConfigurationError(
+ f"Test suite '{testsuite_module_path}' not found."
+ ) from e
+ return [
+ test_suite_class
+ for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite)
+ ]
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 09/10] dts: add test results module
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (7 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 08/10] dts: add test suite config and runner Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
` (3 subsequent siblings)
12 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The module stores the results and errors from all executions, build
targets, test suites and test cases.
Each result consists of the results of the setup and the teardown of the
corresponding testing stage (listed above) and the results of the inner
stages. The innermost stage is the test case, which also contains the
result of the test case itself.
The module also produces a brief overview of the results and the
number of executed tests.
It also finds the proper return code to exit with from among the stored
errors.
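For orientation, a sketch of how the nested result objects are created
at runtime, using the names from this module (each inner result object
is obtained from its parent):

  result = DTSResult(dts_logger)
  execution_result = result.add_execution(sut_node.config)
  build_target_result = execution_result.add_build_target(build_target)
  test_suite_result = build_target_result.add_test_suite("TestHelloWorld")
  test_case_result = test_suite_result.add_test_case("test_hello_world_single_core")
  test_case_result.update(Result.PASS)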
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 64 +++----
dts/framework/settings.py | 2 -
dts/framework/test_result.py | 316 +++++++++++++++++++++++++++++++++++
dts/framework/test_suite.py | 60 +++----
4 files changed, 382 insertions(+), 60 deletions(-)
create mode 100644 dts/framework/test_result.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f98000450f..117b7cae83 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,14 +6,14 @@
import sys
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
-from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("dts_runner")
-errors = []
+result: DTSResult = DTSResult(dts_logger)
def run_all() -> None:
@@ -22,7 +22,7 @@ def run_all() -> None:
config file.
"""
global dts_logger
- global errors
+ global result
# check the python version of the server that run dts
check_dts_python_version()
@@ -39,29 +39,31 @@ def run_all() -> None:
# the SUT has not been initialized yet
try:
sut_node = SutNode(execution.system_under_test)
+ result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception(
f"Connection to node {execution.system_under_test} failed."
)
- errors.append(e)
+ result.update_setup(Result.FAIL, e)
else:
nodes[sut_node.name] = sut_node
if sut_node:
- _run_execution(sut_node, execution)
+ _run_execution(sut_node, execution, result)
except Exception as e:
dts_logger.exception("An unexpected error has occurred.")
- errors.append(e)
+ result.add_error(e)
raise
finally:
try:
for node in nodes.values():
node.close()
+ result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Final cleanup of nodes failed.")
- errors.append(e)
+ result.update_teardown(Result.ERROR, e)
# we need to put the sys.exit call outside the finally clause to make sure
# that unexpected exceptions will propagate
@@ -72,61 +74,72 @@ def run_all() -> None:
_exit_dts()
-def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+def _run_execution(
+ sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+) -> None:
"""
Run the given execution. This involves running the execution setup as well as
running all build targets in the given execution.
"""
dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ execution_result = result.add_execution(sut_node.config)
try:
sut_node.set_up_execution(execution)
+ execution_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Execution setup failed.")
- errors.append(e)
+ execution_result.update_setup(Result.FAIL, e)
else:
for build_target in execution.build_targets:
- _run_build_target(sut_node, build_target, execution)
+ _run_build_target(sut_node, build_target, execution, execution_result)
finally:
try:
sut_node.tear_down_execution()
+ execution_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Execution teardown failed.")
- errors.append(e)
+ execution_result.update_teardown(Result.FAIL, e)
def _run_build_target(
sut_node: SutNode,
build_target: BuildTargetConfiguration,
execution: ExecutionConfiguration,
+ execution_result: ExecutionResult,
) -> None:
"""
Run the given build target.
"""
dts_logger.info(f"Running build target '{build_target.name}'.")
+ build_target_result = execution_result.add_build_target(build_target)
try:
sut_node.set_up_build_target(build_target)
+ result.dpdk_version = sut_node.dpdk_version
+ build_target_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Build target setup failed.")
- errors.append(e)
+ build_target_result.update_setup(Result.FAIL, e)
else:
- _run_suites(sut_node, execution)
+ _run_suites(sut_node, execution, build_target_result)
finally:
try:
sut_node.tear_down_build_target()
+ build_target_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Build target teardown failed.")
- errors.append(e)
+ build_target_result.update_teardown(Result.FAIL, e)
def _run_suites(
sut_node: SutNode,
execution: ExecutionConfiguration,
+ build_target_result: BuildTargetResult,
) -> None:
"""
Use the given build_target to run execution's test suites
@@ -143,12 +156,15 @@ def _run_suites(
)
except Exception as e:
dts_logger.exception("An error occurred when searching for test suites.")
- errors.append(e)
+ result.update_setup(Result.ERROR, e)
else:
for test_suite_class in test_suite_classes:
test_suite = test_suite_class(
- sut_node, test_suite_config.test_cases, execution.func, errors
+ sut_node,
+ test_suite_config.test_cases,
+ execution.func,
+ build_target_result,
)
test_suite.run()
@@ -157,20 +173,8 @@ def _exit_dts() -> None:
"""
Process all errors and exit with the proper exit code.
"""
- if errors and dts_logger:
- dts_logger.debug("Summary of errors:")
- for error in errors:
- dts_logger.debug(repr(error))
-
- return_code = ErrorSeverity.NO_ERR
- for error in errors:
- error_return_code = ErrorSeverity.GENERIC_ERR
- if isinstance(error, DTSError):
- error_return_code = error.severity
-
- if error_return_code > return_code:
- return_code = error_return_code
+ result.process()
if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(return_code)
+ sys.exit(result.get_return_code())
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 4ccc98537d..71955f4581 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -143,7 +143,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--test-cases",
action=_env_arg("DTS_TESTCASES"),
default="",
- required=False,
help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
"Unknown test cases will be silently ignored.",
)
@@ -154,7 +153,6 @@ def _get_parser() -> argparse.ArgumentParser:
action=_env_arg("DTS_RERUN"),
default=0,
type=int,
- required=False,
help="[DTS_RERUN] Re-run each test case the specified amount of times "
"if a test failure occurs",
)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
new file mode 100644
index 0000000000..743919820c
--- /dev/null
+++ b/dts/framework/test_result.py
@@ -0,0 +1,316 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Generic result container and reporters
+"""
+
+import os.path
+from collections.abc import MutableSequence
+from enum import Enum, auto
+
+from .config import (
+ OS,
+ Architecture,
+ BuildTargetConfiguration,
+ Compiler,
+ CPUType,
+ NodeConfiguration,
+)
+from .exception import DTSError, ErrorSeverity
+from .logger import DTSLOG
+from .settings import SETTINGS
+
+
+class Result(Enum):
+ """
+ An Enum defining the possible states that
+ a setup, a teardown or a test case may end up in.
+ """
+
+ PASS = auto()
+ FAIL = auto()
+ ERROR = auto()
+ SKIP = auto()
+
+ def __bool__(self) -> bool:
+ return self is self.PASS
+
+
+class FixtureResult(object):
+ """
+ A record that stores the result of a setup or a teardown.
+ The default is FAIL because immediately after creating the object
+ the setup of the corresponding stage will be executed, which also guarantees
+ the execution of the teardown.
+ """
+
+ result: Result
+ error: Exception | None = None
+
+ def __init__(
+ self,
+ result: Result = Result.FAIL,
+ error: Exception | None = None,
+ ):
+ self.result = result
+ self.error = error
+
+ def __bool__(self) -> bool:
+ return bool(self.result)
+
+
+class Statistics(dict):
+ """
+ A helper class used to store the number of test cases by their result
+ along with a few other basic pieces of information.
+ Using a dict provides a convenient way to format the data.
+ """
+
+ def __init__(self, dpdk_version):
+ super(Statistics, self).__init__()
+ for result in Result:
+ self[result.name] = 0
+ self["PASS RATE"] = 0.0
+ self["DPDK VERSION"] = dpdk_version
+
+ def __iadd__(self, other: Result) -> "Statistics":
+ """
+ Add a Result to the final count.
+ """
+ self[other.name] += 1
+ self["PASS RATE"] = (
+ float(self[Result.PASS.name])
+ * 100
+ / sum(self[result.name] for result in Result)
+ )
+ return self
+
+ def __str__(self) -> str:
+ """
+ Provide a string representation of the data.
+ """
+ stats_str = ""
+ for key, value in self.items():
+ stats_str += f"{key:<12} = {value}\n"
+ # according to docs, we should use \n when writing to text files
+ # on all platforms
+ return stats_str
+
+
+class BaseResult(object):
+ """
+ The Base class for all results. Stores the results of
+ the setup and teardown portions of the corresponding stage
+ and a list of results from each inner stage in _inner_results.
+ """
+
+ setup_result: FixtureResult
+ teardown_result: FixtureResult
+ _inner_results: MutableSequence["BaseResult"]
+
+ def __init__(self):
+ self.setup_result = FixtureResult()
+ self.teardown_result = FixtureResult()
+ self._inner_results = []
+
+ def update_setup(self, result: Result, error: Exception | None = None) -> None:
+ self.setup_result.result = result
+ self.setup_result.error = error
+
+ def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+ self.teardown_result.result = result
+ self.teardown_result.error = error
+
+ def _get_setup_teardown_errors(self) -> list[Exception]:
+ errors = []
+ if self.setup_result.error:
+ errors.append(self.setup_result.error)
+ if self.teardown_result.error:
+ errors.append(self.teardown_result.error)
+ return errors
+
+ def _get_inner_errors(self) -> list[Exception]:
+ return [
+ error
+ for inner_result in self._inner_results
+ for error in inner_result.get_errors()
+ ]
+
+ def get_errors(self) -> list[Exception]:
+ return self._get_setup_teardown_errors() + self._get_inner_errors()
+
+ def add_stats(self, statistics: Statistics) -> None:
+ for inner_result in self._inner_results:
+ inner_result.add_stats(statistics)
+
+
+class TestCaseResult(BaseResult, FixtureResult):
+ """
+ The test case specific result.
+ Stores the result of the actual test case.
+ Also stores the test case name.
+ """
+
+ test_case_name: str
+
+ def __init__(self, test_case_name: str):
+ super(TestCaseResult, self).__init__()
+ self.test_case_name = test_case_name
+
+ def update(self, result: Result, error: Exception | None = None) -> None:
+ self.result = result
+ self.error = error
+
+ def _get_inner_errors(self) -> list[Exception]:
+ if self.error:
+ return [self.error]
+ return []
+
+ def add_stats(self, statistics: Statistics) -> None:
+ statistics += self.result
+
+ def __bool__(self) -> bool:
+ return (
+ bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
+ )
+
+
+class TestSuiteResult(BaseResult):
+ """
+ The test suite specific result.
+ The _inner_results list stores results of test cases in a given test suite.
+ Also stores the test suite name.
+ """
+
+ suite_name: str
+
+ def __init__(self, suite_name: str):
+ super(TestSuiteResult, self).__init__()
+ self.suite_name = suite_name
+
+ def add_test_case(self, test_case_name: str) -> TestCaseResult:
+ test_case_result = TestCaseResult(test_case_name)
+ self._inner_results.append(test_case_result)
+ return test_case_result
+
+
+class BuildTargetResult(BaseResult):
+ """
+ The build target specific result.
+ The _inner_results list stores results of test suites in a given build target.
+ Also stores build target specifics, such as compiler used to build DPDK.
+ """
+
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+
+ def __init__(self, build_target: BuildTargetConfiguration):
+ super(BuildTargetResult, self).__init__()
+ self.arch = build_target.arch
+ self.os = build_target.os
+ self.cpu = build_target.cpu
+ self.compiler = build_target.compiler
+
+ def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+ test_suite_result = TestSuiteResult(test_suite_name)
+ self._inner_results.append(test_suite_result)
+ return test_suite_result
+
+
+class ExecutionResult(BaseResult):
+ """
+ The execution specific result.
+ The _inner_results list stores results of build targets in a given execution.
+ Also stores the SUT node configuration.
+ """
+
+ sut_node: NodeConfiguration
+
+ def __init__(self, sut_node: NodeConfiguration):
+ super(ExecutionResult, self).__init__()
+ self.sut_node = sut_node
+
+ def add_build_target(
+ self, build_target: BuildTargetConfiguration
+ ) -> BuildTargetResult:
+ build_target_result = BuildTargetResult(build_target)
+ self._inner_results.append(build_target_result)
+ return build_target_result
+
+
+class DTSResult(BaseResult):
+ """
+ Stores environment information and test results from a DTS run, which are:
+ * Execution level information, such as SUT and TG hardware.
+ * Build target level information, such as compiler, target OS and cpu.
+ * Test suite results.
+ * All errors that are caught and recorded during DTS execution.
+
+ The information is stored in nested objects.
+
+ The class is capable of computing the return code used to exit DTS with
+ from the stored errors.
+
+ It also provides a brief statistical summary of passed/failed test cases.
+ """
+
+ dpdk_version: str | None
+ _logger: DTSLOG
+ _errors: list[Exception]
+ _return_code: ErrorSeverity
+ _stats_result: Statistics | None
+ _stats_filename: str
+
+ def __init__(self, logger: DTSLOG):
+ super(DTSResult, self).__init__()
+ self.dpdk_version = None
+ self._logger = logger
+ self._errors = []
+ self._return_code = ErrorSeverity.NO_ERR
+ self._stats_result = None
+ self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
+
+ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+ execution_result = ExecutionResult(sut_node)
+ self._inner_results.append(execution_result)
+ return execution_result
+
+ def add_error(self, error) -> None:
+ self._errors.append(error)
+
+ def process(self) -> None:
+ """
+ Process the data after a DTS run.
+ The data is added to nested objects during runtime and this parent object
+ is not updated at that time. This requires us to process the nested data
+ after it's all been gathered.
+
+ The processing gathers all errors and the result statistics of test cases.
+ """
+ self._errors += self.get_errors()
+ if self._errors and self._logger:
+ self._logger.debug("Summary of errors:")
+ for error in self._errors:
+ self._logger.debug(repr(error))
+
+ self._stats_result = Statistics(self.dpdk_version)
+ self.add_stats(self._stats_result)
+ with open(self._stats_filename, "w+") as stats_file:
+ stats_file.write(str(self._stats_result))
+
+ def get_return_code(self) -> int:
+ """
+ Go through all stored Exceptions and return the highest error code found.
+ """
+ for error in self._errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > self._return_code:
+ self._return_code = error_return_code
+
+ return int(self._return_code)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 12bf3b6420..0705f38f98 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -9,12 +9,12 @@
import importlib
import inspect
import re
-from collections.abc import MutableSequence
from types import MethodType
from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
+from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
from .testbed_model import SutNode
@@ -40,21 +40,21 @@ class TestSuite(object):
_logger: DTSLOG
_test_cases_to_run: list[str]
_func: bool
- _errors: MutableSequence[Exception]
+ _result: TestSuiteResult
def __init__(
self,
sut_node: SutNode,
test_cases: list[str],
func: bool,
- errors: MutableSequence[Exception],
+ build_target_result: BuildTargetResult,
):
self.sut_node = sut_node
self._logger = getLogger(self.__class__.__name__)
self._test_cases_to_run = test_cases
self._test_cases_to_run.extend(SETTINGS.test_cases)
self._func = func
- self._errors = errors
+ self._result = build_target_result.add_test_suite(self.__class__.__name__)
def set_up_suite(self) -> None:
"""
@@ -97,10 +97,11 @@ def run(self) -> None:
try:
self._logger.info(f"Starting test suite setup: {test_suite_name}")
self.set_up_suite()
+ self._result.update_setup(Result.PASS)
self._logger.info(f"Test suite setup successful: {test_suite_name}")
except Exception as e:
self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- self._errors.append(e)
+ self._result.update_setup(Result.ERROR, e)
else:
self._execute_test_suite()
@@ -109,13 +110,14 @@ def run(self) -> None:
try:
self.tear_down_suite()
self.sut_node.kill_cleanup_dpdk_apps()
+ self._result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
self._logger.warning(
f"Test suite '{test_suite_name}' teardown failed, "
f"the next test suite may be affected."
)
- self._errors.append(e)
+ self._result.update_teardown(Result.ERROR, e)
def _execute_test_suite(self) -> None:
"""
@@ -123,17 +125,18 @@ def _execute_test_suite(self) -> None:
"""
if self._func:
for test_case_method in self._get_functional_test_cases():
+ test_case_name = test_case_method.__name__
+ test_case_result = self._result.add_test_case(test_case_name)
all_attempts = SETTINGS.re_run + 1
attempt_nr = 1
- while (
- not self._run_test_case(test_case_method)
- and attempt_nr < all_attempts
- ):
+ self._run_test_case(test_case_method, test_case_result)
+ while not test_case_result and attempt_nr < all_attempts:
attempt_nr += 1
self._logger.info(
- f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Re-running FAILED test case '{test_case_name}'. "
f"Attempt number {attempt_nr} out of {all_attempts}."
)
+ self._run_test_case(test_case_method, test_case_result)
def _get_functional_test_cases(self) -> list[MethodType]:
"""
@@ -166,68 +169,69 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
return match
- def _run_test_case(self, test_case_method: MethodType) -> bool:
+ def _run_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Setup, execute and teardown a test case in this suite.
- Exceptions are caught and recorded in logs.
+ Exceptions are caught and recorded in logs and results.
"""
test_case_name = test_case_method.__name__
- result = False
try:
# run set_up function for each case
self.set_up_test_case()
+ test_case_result.update_setup(Result.PASS)
except SSHTimeoutError as e:
self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.ERROR, e)
else:
# run test case if setup was successful
- result = self._execute_test_case(test_case_method)
+ self._execute_test_case(test_case_method, test_case_result)
finally:
try:
self.tear_down_test_case()
+ test_case_result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
self._logger.warning(
f"Test case '{test_case_name}' teardown failed, "
f"the next test case may be affected."
)
- self._errors.append(e)
- result = False
+ test_case_result.update_teardown(Result.ERROR, e)
+ test_case_result.update(Result.ERROR)
- return result
-
- def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ def _execute_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Execute one test case and handle failures.
"""
test_case_name = test_case_method.__name__
- result = False
try:
self._logger.info(f"Starting test case execution: {test_case_name}")
test_case_method()
- result = True
+ test_case_result.update(Result.PASS)
self._logger.info(f"Test case execution PASSED: {test_case_name}")
except TestCaseVerifyError as e:
self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.ERROR, e)
except KeyboardInterrupt:
self._logger.error(
f"Test case execution INTERRUPTED by user: {test_case_name}"
)
+ test_case_result.update(Result.SKIP)
raise KeyboardInterrupt("Stop DTS")
- return result
-
def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
def is_test_suite(object) -> bool:
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v5 10/10] doc: update DTS setup and test suite cookbook
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (8 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 09/10] dts: add test results module Juraj Linkeš
@ 2023-02-23 15:28 ` Juraj Linkeš
2023-03-03 8:31 ` Huang, ChenyuX
2023-02-23 16:13 ` [PATCH v5 00/10] dts: add hello world testcase Bruce Richardson
` (2 subsequent siblings)
12 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-23 15:28 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Document how to configure and run DTS.
Also add documentation related to new features: SUT setup and a brief
test suite implementation cookbook.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
doc/guides/tools/dts.rst | 165 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 163 insertions(+), 2 deletions(-)
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index daf54359ed..ebd6dceb6a 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2022 PANTHEON.tech s.r.o.
+ Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
DPDK Test Suite
===============
@@ -56,7 +56,7 @@ DTS runtime environment or just plain DTS environment are used interchangeably.
Setting up DTS environment
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
#. **Python Version**
@@ -93,6 +93,167 @@ Setting up DTS environment
poetry install
poetry shell
+#. **SSH Connection**
+
+ DTS uses Python pexpect for SSH connections between the DTS environment and the other hosts.
+ The pexpect implementation is a wrapper around the ssh command in the DTS environment.
+ This means it'll use the SSH agent providing the ssh command and its keys.
+
+
+Setting up System Under Test
+----------------------------
+
+There are two areas that need to be set up on a System Under Test:
+
+#. **DPDK dependencies**
+
+ DPDK will be built and run on the SUT.
+ Consult the Getting Started guides for the list of dependencies for each distribution.
+
+#. **Hardware dependencies**
+
+ Any hardware DPDK uses needs a proper driver,
+ and most OS distributions provide those, but the version may not be satisfactory.
+ It's up to each user to install the driver they're interested in testing.
+ The hardware may also need firmware upgrades, which are likewise left to the user's discretion.
+
+#. **Hugepages**
+
+ There are two ways to configure hugepages:
+
+ * DTS configuration
+
+ You may specify the optional hugepage configuration in the DTS config file.
+ If you do, DTS will take care of configuring hugepages,
+ overwriting your current SUT hugepage configuration.
+
+ * System under test configuration
+
+ It's possible to use the hugepage configuration already present on the SUT.
+ If you wish to do so, don't specify the hugepage configuration in the DTS config file.
+
+
+Running DTS
+-----------
+
+DTS needs to know which nodes to connect to and what hardware to use on those nodes.
+Once that's configured, DTS needs a DPDK tarball and it's ready to run.
+
+Configuring DTS
+~~~~~~~~~~~~~~~
+
+DTS configuration is split into nodes and executions, with build targets within executions.
+By default, DTS will try to use the ``dts/conf.yaml`` config file,
+which is a template that illustrates what can be configured in DTS:
+
+ .. literalinclude:: ../../../dts/conf.yaml
+ :language: yaml
+ :start-at: executions:
+
+
+The user must be root or any other user whose prompt starts with ``#``.
+The other fields are mostly self-explanatory
+and documented in more detail in ``dts/framework/config/conf_yaml_schema.json``.
+
+DTS Execution
+~~~~~~~~~~~~~
+
+DTS is run with ``main.py`` located in the ``dts`` directory after entering Poetry shell::
+
+ usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT]
+ [-v VERBOSE] [-s SKIP_SETUP] [--tarball TARBALL]
+ [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES]
+ [--re-run RE_RUN]
+
+ Run DPDK test suites. All options may be specified with the environment variables provided in
+ brackets. Command line arguments have higher priority.
+
+ options:
+ -h, --help show this help message and exit
+ --config-file CONFIG_FILE
+ [DTS_CFG_FILE] configuration file that describes the test cases, SUTs
+ and targets. (default: conf.yaml)
+ --output-dir OUTPUT_DIR, --output OUTPUT_DIR
+ [DTS_OUTPUT_DIR] Output directory where dts logs and results are
+ saved. (default: output)
+ -t TIMEOUT, --timeout TIMEOUT
+ [DTS_TIMEOUT] The default timeout for all DTS operations except for
+ compiling DPDK. (default: 15)
+ -v VERBOSE, --verbose VERBOSE
+ [DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all
+ messages to the console. (default: N)
+ -s SKIP_SETUP, --skip-setup SKIP_SETUP
+ [DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG
+ nodes. (default: N)
+ --tarball TARBALL, --snapshot TARBALL
+ [DTS_DPDK_TARBALL] Path to DPDK source code tarball which will be
+ used in testing. (default: dpdk.tar.xz)
+ --compile-timeout COMPILE_TIMEOUT
+ [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. (default: 1200)
+ --test-cases TEST_CASES
+ [DTS_TESTCASES] Comma-separated list of test cases to execute.
+ Unknown test cases will be silently ignored. (default: )
+ --re-run RE_RUN, --re_run RE_RUN
+ [DTS_RERUN] Re-run each test case the specified amount of times if a
+ test failure occurs (default: 0)
+
+
+The brackets contain the names of environment variables that set the same thing.
+The minimum DTS needs is a config file and a DPDK tarball.
+You may pass those to DTS using the command line arguments or use the default paths.
+
+
+DTS Results
+~~~~~~~~~~~
+
+Results are stored in the output dir by default,
+which can be changed with the ``--output-dir`` command line argument.
+The results contain basic statistics of passed/failed test cases and DPDK version.
+
+
+How To Write a Test Suite
+-------------------------
+
+All test suites inherit from ``TestSuite`` defined in ``dts/framework/test_suite.py``.
+There are four types of methods that comprise a test suite:
+
+#. **Test cases**
+
+ | Test cases are methods that start with a particular prefix.
+ | Functional test cases start with ``test_``, e.g. ``test_hello_world_single_core``.
+ | Performance test cases start with ``test_perf_``, e.g. ``test_perf_nic_single_core``.
+ | A test suite may have any number of functional and/or performance test cases.
+ However, these test cases must test the same feature,
+ following the rule of one feature = one test suite.
+ Test cases for one feature don't need to be grouped in just one test suite, though.
+ If covering the feature requires many testing scenarios,
+ the test cases would be better off spread over multiple test suites
+ so that each test suite doesn't take too long to execute.
+
+#. **Setup and Teardown methods**
+
+ | There are setup and teardown methods for the whole test suite and each individual test case.
+ | Methods ``set_up_suite`` and ``tear_down_suite`` will be executed
+ before any and after all test cases have been executed, respectively.
+ | Methods ``set_up_test_case`` and ``tear_down_test_case`` will be executed
+ before and after each test case, respectively.
+ | These methods don't need to be implemented if there's no need for them in a test suite.
+ In that case, nothing will happen when they're executed.
+
+#. **Test case verification**
+
+ Test case verification should be done with the ``verify`` method, which records the result.
+ The method should be called at the end of each test case.
+
+#. **Other methods**
+
+ Of course, all test suite code should adhere to coding standards.
+ Only the above methods will be treated specially and any other methods may be defined
+ (which should be mostly private methods needed by each particular test suite).
+ Any specific features (such as NIC configuration) required by a test suite
+ should be implemented in the ``SutNode`` class (and the underlying classes that ``SutNode`` uses)
+ and used by the test suite via the ``sut_node`` field.
+
DTS Developer Tools
-------------------
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v5 00/10] dts: add hello world testcase
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (9 preceding siblings ...)
2023-02-23 15:28 ` [PATCH v5 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
@ 2023-02-23 16:13 ` Bruce Richardson
2023-02-26 19:11 ` Wathsala Wathawana Vithanage
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
12 siblings, 0 replies; 97+ messages in thread
From: Bruce Richardson @ 2023-02-23 16:13 UTC (permalink / raw)
To: Juraj Linkeš; +Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, probb, dev
On Thu, Feb 23, 2023 at 04:28:30PM +0100, Juraj Linkeš wrote:
> Add code needed to run the HelloWorld testcase which just runs the hello
> world dpdk application.
>
> The patchset currently heavily refactors this original DTS code needed
> to run the testcase:
> * The whole architecture has been redone into more sensible class
> hierarchy
> * DPDK build on the System under Test
> * DPDK eal args construction, app running and shutting down
> * Optional SUT hugepage memory configuration
> The optional part is DTS either configuring them or not. They still
> must be configured even the user doesn't want DTS to do that.
> * Test runner
> * Test results
> * TestSuite class
> * Test runner parts interfacing with TestSuite
> * The HelloWorld testsuite itself
>
> The code is divided into sub-packages, some of which are divided
> further.
>
> The patch may need to be divided into smaller chunks. If so, proposals
> on where exactly to split it would be very helpful.
>
> v4:
> Made hugepage config optional, users may now specify that in the main
> config file.
> Removed HelloWorld test plan and incorporated parts of it into the test
> suite python file.
> Updated documentation.
>
> v5:
> Documentation updates about running as root and hugepage configuration.
> Fixed multiple problems with cpu filtering.
> Other minor issues, such as typos and renaming variables.
>
The helloworld unit tests all pass for me now, don't see any errors.
Series-tested-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 97+ messages in thread
* RE: [PATCH v5 00/10] dts: add hello world testcase
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (10 preceding siblings ...)
2023-02-23 16:13 ` [PATCH v5 00/10] dts: add hello world testcase Bruce Richardson
@ 2023-02-26 19:11 ` Wathsala Wathawana Vithanage
2023-02-27 8:28 ` Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
12 siblings, 1 reply; 97+ messages in thread
From: Wathsala Wathawana Vithanage @ 2023-02-26 19:11 UTC (permalink / raw)
To: Juraj Linkeš,
thomas, Honnappa Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, nd, nd
> -----Original Message-----
> From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Sent: Thursday, February 23, 2023 10:29 AM
> To: thomas@monjalon.net; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; lijuan.tu@intel.com;
> bruce.richardson@intel.com; probb@iol.unh.edu
> Cc: dev@dpdk.org; Juraj Linkeš <juraj.linkes@pantheon.tech>
> Subject: [PATCH v5 00/10] dts: add hello world testcase
>
> Add code needed to run the HelloWorld testcase which just runs the hello world
> dpdk application.
>
> The patchset currently heavily refactors this original DTS code needed to run
> the testcase:
> * The whole architecture has been redone into more sensible class
> hierarchy
> * DPDK build on the System under Test
> * DPDK eal args construction, app running and shutting down
> * Optional SUT hugepage memory configuration
> The optional part is DTS either configuring them or not. They still must be
> configured even if the user doesn't want DTS to do that.
> * Test runner
> * Test results
> * TestSuite class
> * Test runner parts interfacing with TestSuite
> * The HelloWorld testsuite itself
>
> The code is divided into sub-packages, some of which are divided further.
>
> The patch may need to be divided into smaller chunks. If so, proposals on
> where exactly to split it would be very helpful.
>
> v4:
> Made hugepage config optional, users may now specify that in the main config
> file.
> Removed HelloWorld test plan and incorporated parts of it into the test suite
> python file.
> Updated documentation.
>
> v5:
> Documentation updates about running as root and hugepage configuration.
> Fixed multiple problems with cpu filtering.
> Other minor issues, such as typos and renaming variables.
>
Hi Juraj,
Everything looks good except for a couple of comments/suggestions.
If I’m not mistaken, the dpdk tarball is copied to the SUT over scp. However, scp is already deprecated [1,2]. Is it possible to use rsync over ssh instead?
It looks like the ssh password needs to be stored in the configuration file, which is not a good practice. I suggest giving users two options: (a) using an ssh key instead of a password, (b) prompting for the user's password if no key is provided.
It's somewhat cumbersome for a developer to create a tarball every time they run a test case. Therefore, would it be possible to automate the creation of the tarball from a git repo + branch or a local directory when the user doesn't provide one?
[1] https://lwn.net/Articles/835962/
[2] https://www.redhat.com/en/blog/openssh-scp-deprecation-rhel-9-what-you-need-know
> Juraj Linkeš (10):
> dts: add node and os abstractions
> dts: add ssh command verification
> dts: add dpdk build on sut
> dts: add dpdk execution handling
> dts: add node memory setup
> dts: add test suite module
> dts: add hello world testsuite
> dts: add test suite config and runner
> dts: add test results module
> doc: update DTS setup and test suite cookbook
>
> doc/guides/tools/dts.rst | 165 ++++++++-
> dts/conf.yaml | 22 +-
> dts/framework/config/__init__.py | 130 ++++++-
> dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
> dts/framework/dts.py | 185 ++++++++--
> dts/framework/exception.py | 100 +++++-
> dts/framework/logger.py | 24 +-
> dts/framework/remote_session/__init__.py | 30 +-
> dts/framework/remote_session/linux_session.py | 107 ++++++
> dts/framework/remote_session/os_session.py | 175 ++++++++++
> dts/framework/remote_session/posix_session.py | 222 ++++++++++++
> .../remote_session/remote/__init__.py | 16 +
> .../remote_session/remote/remote_session.py | 155 +++++++++
> .../{ => remote}/ssh_session.py | 92 ++++-
> .../remote_session/remote_session.py | 95 ------
> dts/framework/settings.py | 81 ++++-
> dts/framework/test_result.py | 316 ++++++++++++++++++
> dts/framework/test_suite.py | 254 ++++++++++++++
> dts/framework/testbed_model/__init__.py | 20 +-
> dts/framework/testbed_model/dpdk.py | 78 +++++
> dts/framework/testbed_model/hw/__init__.py | 27 ++
> dts/framework/testbed_model/hw/cpu.py | 274 +++++++++++++++
> .../testbed_model/hw/virtual_device.py | 16 +
> dts/framework/testbed_model/node.py | 159 +++++++--
> dts/framework/testbed_model/sut_node.py | 260 ++++++++++++++
> dts/framework/utils.py | 39 ++-
> dts/tests/TestSuite_hello_world.py | 64 ++++
> 27 files changed, 3068 insertions(+), 210 deletions(-) create mode 100644
> dts/framework/remote_session/linux_session.py
> create mode 100644 dts/framework/remote_session/os_session.py
> create mode 100644 dts/framework/remote_session/posix_session.py
> create mode 100644 dts/framework/remote_session/remote/__init__.py
> create mode 100644
> dts/framework/remote_session/remote/remote_session.py
> rename dts/framework/remote_session/{ => remote}/ssh_session.py (64%)
> delete mode 100644 dts/framework/remote_session/remote_session.py
> create mode 100644 dts/framework/test_result.py create mode 100644
> dts/framework/test_suite.py create mode 100644
> dts/framework/testbed_model/dpdk.py
> create mode 100644 dts/framework/testbed_model/hw/__init__.py
> create mode 100644 dts/framework/testbed_model/hw/cpu.py
> create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
> create mode 100644 dts/framework/testbed_model/sut_node.py
> create mode 100644 dts/tests/TestSuite_hello_world.py
>
> --
> 2.30.2
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v5 00/10] dts: add hello world testcase
2023-02-26 19:11 ` Wathsala Wathawana Vithanage
@ 2023-02-27 8:28 ` Juraj Linkeš
2023-02-28 15:27 ` Wathsala Wathawana Vithanage
0 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-02-27 8:28 UTC (permalink / raw)
To: Wathsala Wathawana Vithanage
Cc: thomas, Honnappa Nagarahalli, lijuan.tu, bruce.richardson, probb,
dev, nd
> Hi Juraj,
>
Hi Wathsala, thanks for the comments.
> Everything looks good except for couple of comments/suggestions.
> If I’m not mistaken dpdk tarball is copied to the SUT over scp. However,
> scp is already deprecated [1,2]. Is it possible to use rsync over ssh
> instead?
>
We're going to replace the pexpect implementation with the Fabric library
in a separate patch (the helloworld patch is already very big), which will
address this - Fabric uses SFTP for file transfer.
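As a rough sketch of what the Fabric-based file transfer could look like
(the host, user and paths here are made up; Fabric 2.x API):

  from fabric import Connection

  with Connection(host="sut1.example.com", user="dts") as conn:
      # put() uses SFTP under the hood, replacing the scp-based copy
      conn.put("dpdk.tar.xz", remote="/tmp/dpdk.tar.xz")
      conn.run("tar -xf /tmp/dpdk.tar.xz -C /tmp", hide=True)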
> Looks like ssh password needs to be stored in the configuration file which
> is not a good practice. Suggests giving users two options (a) using an ssh
> key instead of password (b) prompting for user password if no key is
> provided.
>
Storing the password is an optional and heavily discouraged feature (useful
for quick debugging, so we left it in). SSH keys are the default. The Fabric
patch will also include support for non-root users (with passwordless sudo).
> It's somewhat cumbersome for a developer to create a tarball every time
> they run a test case. Therefore, would it be possible to automate the
> creation of tarball from a git repo + branch or a local directory when user
> doesn’t provide a tarball?
>
That is also a separate patch in the making - users will be able to supply
a git ref that DTS will use. I haven't thought about local directories,
what additional scenarios would that cover?
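As a hypothetical sketch of the git ref handling (assuming git and xz
are available where DTS runs; the helper and its names are made up):

  import subprocess

  def create_dpdk_tarball(ref: str = "HEAD", output: str = "dpdk.tar.xz") -> str:
      # Archive the given git ref and compress it into the tarball DTS expects.
      with open(output, "wb") as out:
          git = subprocess.Popen(
              ["git", "archive", "--format=tar", "--prefix=dpdk/", ref],
              stdout=subprocess.PIPE,
          )
          subprocess.run(["xz", "-c"], stdin=git.stdout, stdout=out, check=True)
          git.wait()
      return output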
Regards,
Juraj
>
> [1] https://lwn.net/Articles/835962/
> [2]
> https://www.redhat.com/en/blog/openssh-scp-deprecation-rhel-9-what-you-need-know
>
> > Juraj Linkeš (10):
> > dts: add node and os abstractions
> > dts: add ssh command verification
> > dts: add dpdk build on sut
> > dts: add dpdk execution handling
> > dts: add node memory setup
> > dts: add test suite module
> > dts: add hello world testsuite
> > dts: add test suite config and runner
> > dts: add test results module
> > doc: update DTS setup and test suite cookbook
> >
> > doc/guides/tools/dts.rst | 165 ++++++++-
> > dts/conf.yaml | 22 +-
> > dts/framework/config/__init__.py | 130 ++++++-
> > dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
> > dts/framework/dts.py | 185 ++++++++--
> > dts/framework/exception.py | 100 +++++-
> > dts/framework/logger.py | 24 +-
> > dts/framework/remote_session/__init__.py | 30 +-
> > dts/framework/remote_session/linux_session.py | 107 ++++++
> > dts/framework/remote_session/os_session.py | 175 ++++++++++
> > dts/framework/remote_session/posix_session.py | 222 ++++++++++++
> > .../remote_session/remote/__init__.py | 16 +
> > .../remote_session/remote/remote_session.py | 155 +++++++++
> > .../{ => remote}/ssh_session.py | 92 ++++-
> > .../remote_session/remote_session.py | 95 ------
> > dts/framework/settings.py | 81 ++++-
> > dts/framework/test_result.py | 316 ++++++++++++++++++
> > dts/framework/test_suite.py | 254 ++++++++++++++
> > dts/framework/testbed_model/__init__.py | 20 +-
> > dts/framework/testbed_model/dpdk.py | 78 +++++
> > dts/framework/testbed_model/hw/__init__.py | 27 ++
> > dts/framework/testbed_model/hw/cpu.py | 274 +++++++++++++++
> > .../testbed_model/hw/virtual_device.py | 16 +
> > dts/framework/testbed_model/node.py | 159 +++++++--
> > dts/framework/testbed_model/sut_node.py | 260 ++++++++++++++
> > dts/framework/utils.py | 39 ++-
> > dts/tests/TestSuite_hello_world.py | 64 ++++
> > 27 files changed, 3068 insertions(+), 210 deletions(-) create mode
> 100644
> > dts/framework/remote_session/linux_session.py
> > create mode 100644 dts/framework/remote_session/os_session.py
> > create mode 100644 dts/framework/remote_session/posix_session.py
> > create mode 100644 dts/framework/remote_session/remote/__init__.py
> > create mode 100644
> > dts/framework/remote_session/remote/remote_session.py
> > rename dts/framework/remote_session/{ => remote}/ssh_session.py (64%)
> > delete mode 100644 dts/framework/remote_session/remote_session.py
> > create mode 100644 dts/framework/test_result.py create mode 100644
> > dts/framework/test_suite.py create mode 100644
> > dts/framework/testbed_model/dpdk.py
> > create mode 100644 dts/framework/testbed_model/hw/__init__.py
> > create mode 100644 dts/framework/testbed_model/hw/cpu.py
> > create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
> > create mode 100644 dts/framework/testbed_model/sut_node.py
> > create mode 100644 dts/tests/TestSuite_hello_world.py
> >
> > --
> > 2.30.2
> >
>
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* RE: [PATCH v5 00/10] dts: add hello world testcase
2023-02-27 8:28 ` Juraj Linkeš
@ 2023-02-28 15:27 ` Wathsala Wathawana Vithanage
2023-03-01 8:35 ` Juraj Linkeš
0 siblings, 1 reply; 97+ messages in thread
From: Wathsala Wathawana Vithanage @ 2023-02-28 15:27 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa Nagarahalli, lijuan.tu, bruce.richardson, probb,
dev, nd, nd
From: Juraj Linkeš <juraj.linkes@pantheon.tech>
Sent: Monday, February 27, 2023 3:29 AM
To: Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>
Cc: thomas@monjalon.net; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; lijuan.tu@intel.com; bruce.richardson@intel.com; probb@iol.unh.edu; dev@dpdk.org; nd <nd@arm.com>
Subject: Re: [PATCH v5 00/10] dts: add hello world testcase
> > Hi Juraj,
> Hi Wathsala, thanks for the comments.
> > Everything looks good except for a couple of comments/suggestions.
> > If I'm not mistaken, the dpdk tarball is copied to the SUT over scp.
> > However, scp is already deprecated [1,2]. Is it possible to use rsync
> > over ssh instead?
> We're going to replace the pexpect implementation with the Fabric library
> in a separate patch (the helloworld patch is already very big), which will
> address this - Fabric uses SFTP for file transfer.
> > It looks like the ssh password needs to be stored in the configuration
> > file, which is not good practice. I suggest giving users two options:
> > (a) using an ssh key instead of a password, (b) prompting for the user
> > password if no key is provided.
> This is an optional and heavily discouraged option (useful for quick
> debugging, so we left it in). SSH keys are the default. The Fabric patch
> will also include the support for non-root users (with passwordless sudo).
> > It's somewhat cumbersome for a developer to create a tarball every time
> > they run a test case. Therefore, would it be possible to automate the
> > creation of the tarball from a git repo + branch or a local directory
> > when the user doesn't provide a tarball?
> That is also a separate patch in the making - users will be able to supply
> a git ref that DTS will use. I haven't thought about local directories -
> what additional scenarios would that cover?
It may come in handy if working with a tarball without git or access to
the Internet.
> Regards,
> Juraj
> [1] https://lwn.net/Articles/835962/
> [2] https://www.redhat.com/en/blog/openssh-scp-deprecation-rhel-9-what-you-need-know
> Juraj Linkeš (10):
> dts: add node and os abstractions
> dts: add ssh command verification
> dts: add dpdk build on sut
> dts: add dpdk execution handling
> dts: add node memory setup
> dts: add test suite module
> dts: add hello world testsuite
> dts: add test suite config and runner
> dts: add test results module
> doc: update DTS setup and test suite cookbook
>
> doc/guides/tools/dts.rst | 165 ++++++++-
> dts/conf.yaml | 22 +-
> dts/framework/config/__init__.py | 130 ++++++-
> dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
> dts/framework/dts.py | 185 ++++++++--
> dts/framework/exception.py | 100 +++++-
> dts/framework/logger.py | 24 +-
> dts/framework/remote_session/__init__.py | 30 +-
> dts/framework/remote_session/linux_session.py | 107 ++++++
> dts/framework/remote_session/os_session.py | 175 ++++++++++
> dts/framework/remote_session/posix_session.py | 222 ++++++++++++
> .../remote_session/remote/__init__.py | 16 +
> .../remote_session/remote/remote_session.py | 155 +++++++++
> .../{ => remote}/ssh_session.py | 92 ++++-
> .../remote_session/remote_session.py | 95 ------
> dts/framework/settings.py | 81 ++++-
> dts/framework/test_result.py | 316 ++++++++++++++++++
> dts/framework/test_suite.py | 254 ++++++++++++++
> dts/framework/testbed_model/__init__.py | 20 +-
> dts/framework/testbed_model/dpdk.py | 78 +++++
> dts/framework/testbed_model/hw/__init__.py | 27 ++
> dts/framework/testbed_model/hw/cpu.py | 274 +++++++++++++++
> .../testbed_model/hw/virtual_device.py | 16 +
> dts/framework/testbed_model/node.py | 159 +++++++--
> dts/framework/testbed_model/sut_node.py | 260 ++++++++++++++
> dts/framework/utils.py | 39 ++-
> dts/tests/TestSuite_hello_world.py | 64 ++++
> 27 files changed, 3068 insertions(+), 210 deletions(-)
> create mode 100644 dts/framework/remote_session/linux_session.py
> create mode 100644 dts/framework/remote_session/os_session.py
> create mode 100644 dts/framework/remote_session/posix_session.py
> create mode 100644 dts/framework/remote_session/remote/__init__.py
> create mode 100644 dts/framework/remote_session/remote/remote_session.py
> rename dts/framework/remote_session/{ => remote}/ssh_session.py (64%)
> delete mode 100644 dts/framework/remote_session/remote_session.py
> create mode 100644 dts/framework/test_result.py
> create mode 100644 dts/framework/test_suite.py
> create mode 100644 dts/framework/testbed_model/dpdk.py
> create mode 100644 dts/framework/testbed_model/hw/__init__.py
> create mode 100644 dts/framework/testbed_model/hw/cpu.py
> create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
> create mode 100644 dts/framework/testbed_model/sut_node.py
> create mode 100644 dts/tests/TestSuite_hello_world.py
>
> --
> 2.30.2
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v5 00/10] dts: add hello world testcase
2023-02-28 15:27 ` Wathsala Wathawana Vithanage
@ 2023-03-01 8:35 ` Juraj Linkeš
0 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-01 8:35 UTC (permalink / raw)
To: Wathsala Wathawana Vithanage
Cc: thomas, Honnappa Nagarahalli, lijuan.tu, bruce.richardson, probb,
dev, nd
On Tue, Feb 28, 2023 at 4:27 PM Wathsala Wathawana Vithanage <
wathsala.vithanage@arm.com> wrote:
>
>
>
>
> From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Sent: Monday, February 27, 2023 3:29 AM
> To: Wathsala Wathawana Vithanage <wathsala.vithanage@arm.com>
> Cc: thomas@monjalon.net; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> lijuan.tu@intel.com; bruce.richardson@intel.com; probb@iol.unh.edu;
> dev@dpdk.org; nd <nd@arm.com>
> Subject: Re: [PATCH v5 00/10] dts: add hello world testcase
>
>
>
>
>
> Hi Juraj,
>
>
>
> Hi Wathsala, thanks for the comments.
>
>
>
> Everything looks good except for couple of comments/suggestions.
> If I’m not mistaken dpdk tarball is copied to the SUT over scp. However,
> scp is already deprecated [1,2]. Is it possible to use rsync over ssh
> instead?
>
>
>
> We're going to replace the pexpect implementation with the Fabric library
> in a separate patch (the helloworld patch is already very big), which will
> address this - Fabric uses SFTP for file transfer.
>
>
>
> Looks like ssh password needs to be stored in the configuration file which
> is not a good practice. Suggests giving users two options (a) using an ssh
> key instead of password (b) prompting for user password if no key is
> provided.
>
>
>
> This is an optional and heavily discouraged option (useful for quick
> debugging, so we left it in). SSH keys are the default. The Fabric patch
> will also include the support for non-root users (with passwordless sudo).
>
>
>
> It's somewhat cumbersome for a developer to create a tarball every time
> they run a test case. Therefore, would it be possible to automate the
> creation of tarball from a git repo + branch or a local directory when user
> doesn’t provide a tarball?
>
>
>
> That is also a separate patch in the making - users will be able to supply
> a git ref that DTS will use. I haven't thought about local directories,
> what additional scenarios would that cover?
>
>
>
> It may come in handy if working with a tarball without git or access to
> Internet.
>
>
>
Is that an additional scenario though? With the git ref support patch,
users will be able to pass either a tarball (the tarball won't be deleted
so the user can reuse it) or a git ref (when running DTS from the
repository). What I wanted to know is how do you arrive at a setup where
you have a local directory that's not a git repository and you don't have a
tarball - i.e. how do you have a local non-repo directory without a tarball
(you have to get the local directory from somewhere, presumably a tarball)?
> Regards,
>
> Juraj
>
>
>
>
> [1] https://lwn.net/Articles/835962/
> [2]
> https://www.redhat.com/en/blog/openssh-scp-deprecation-rhel-9-what-you-need-know
>
> > Juraj Linkeš (10):
> > dts: add node and os abstractions
> > dts: add ssh command verification
> > dts: add dpdk build on sut
> > dts: add dpdk execution handling
> > dts: add node memory setup
> > dts: add test suite module
> > dts: add hello world testsuite
> > dts: add test suite config and runner
> > dts: add test results module
> > doc: update DTS setup and test suite cookbook
> >
> > doc/guides/tools/dts.rst | 165 ++++++++-
> > dts/conf.yaml | 22 +-
> > dts/framework/config/__init__.py | 130 ++++++-
> > dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
> > dts/framework/dts.py | 185 ++++++++--
> > dts/framework/exception.py | 100 +++++-
> > dts/framework/logger.py | 24 +-
> > dts/framework/remote_session/__init__.py | 30 +-
> > dts/framework/remote_session/linux_session.py | 107 ++++++
> > dts/framework/remote_session/os_session.py | 175 ++++++++++
> > dts/framework/remote_session/posix_session.py | 222 ++++++++++++
> > .../remote_session/remote/__init__.py | 16 +
> > .../remote_session/remote/remote_session.py | 155 +++++++++
> > .../{ => remote}/ssh_session.py | 92 ++++-
> > .../remote_session/remote_session.py | 95 ------
> > dts/framework/settings.py | 81 ++++-
> > dts/framework/test_result.py | 316 ++++++++++++++++++
> > dts/framework/test_suite.py | 254 ++++++++++++++
> > dts/framework/testbed_model/__init__.py | 20 +-
> > dts/framework/testbed_model/dpdk.py | 78 +++++
> > dts/framework/testbed_model/hw/__init__.py | 27 ++
> > dts/framework/testbed_model/hw/cpu.py | 274 +++++++++++++++
> > .../testbed_model/hw/virtual_device.py | 16 +
> > dts/framework/testbed_model/node.py | 159 +++++++--
> > dts/framework/testbed_model/sut_node.py | 260 ++++++++++++++
> > dts/framework/utils.py | 39 ++-
> > dts/tests/TestSuite_hello_world.py | 64 ++++
> > 27 files changed, 3068 insertions(+), 210 deletions(-)
> > create mode 100644 dts/framework/remote_session/linux_session.py
> > create mode 100644 dts/framework/remote_session/os_session.py
> > create mode 100644 dts/framework/remote_session/posix_session.py
> > create mode 100644 dts/framework/remote_session/remote/__init__.py
> > create mode 100644 dts/framework/remote_session/remote/remote_session.py
> > rename dts/framework/remote_session/{ => remote}/ssh_session.py (64%)
> > delete mode 100644 dts/framework/remote_session/remote_session.py
> > create mode 100644 dts/framework/test_result.py
> > create mode 100644 dts/framework/test_suite.py
> > create mode 100644 dts/framework/testbed_model/dpdk.py
> > create mode 100644 dts/framework/testbed_model/hw/__init__.py
> > create mode 100644 dts/framework/testbed_model/hw/cpu.py
> > create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
> > create mode 100644 dts/framework/testbed_model/sut_node.py
> > create mode 100644 dts/tests/TestSuite_hello_world.py
> >
> > --
> > 2.30.2
> >
>
>
^ permalink raw reply [flat|nested] 97+ messages in thread
* RE: [PATCH v5 10/10] doc: update DTS setup and test suite cookbook
2023-02-23 15:28 ` [PATCH v5 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
@ 2023-03-03 8:31 ` Huang, ChenyuX
0 siblings, 0 replies; 97+ messages in thread
From: Huang, ChenyuX @ 2023-03-03 8:31 UTC (permalink / raw)
To: Juraj Linkeš,
thomas, Honnappa.Nagarahalli, Tu, Lijuan, Richardson, Bruce,
probb
Cc: dev
> -----Original Message-----
> From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Sent: Thursday, February 23, 2023 11:29 PM
> To: thomas@monjalon.net; Honnappa.Nagarahalli@arm.com; Tu, Lijuan
> <lijuan.tu@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> probb@iol.unh.edu
> Cc: dev@dpdk.org; Juraj Linkeš <juraj.linkes@pantheon.tech>
> Subject: [PATCH v5 10/10] doc: update DTS setup and test suite cookbook
>
> Document how to configure and run DTS.
> Also add documentation related to new features: SUT setup and a brief test
> suite implementation cookbook.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Tested-by: Chenyu Huang <chenyux.huang@intel.com>
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 00/10] dts: add hello world test case
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
` (11 preceding siblings ...)
2023-02-26 19:11 ` Wathsala Wathawana Vithanage
@ 2023-03-03 10:24 ` Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 01/10] dts: add node and os abstractions Juraj Linkeš
` (10 more replies)
12 siblings, 11 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:24 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Add code needed to run the HelloWorld testcase, which just runs
the helloworld dpdk application.
The patchset currently heavily refactors the original DTS code needed
to run the testcase:
* The whole architecture has been redone into a more sensible class
  hierarchy
* DPDK build on the System under Test
* DPDK eal args construction, app running and shutting down
* Optional SUT hugepage memory configuration
  The optional part is whether DTS configures them or not; hugepages
  still must be configured even if the user doesn't want DTS to do that.
* Test runner
* Test results
* TestSuite class
* Test runner parts interfacing with TestSuite
* The HelloWorld testsuite itself
The code is divided into sub-packages, some of which are divided
further.
The patch may need to be divided into smaller chunks. If so, proposals
on where exactly to split it would be very helpful.
v5:
Documentation updates about running as root and hugepage configuration.
Fixed multiple problems with cpu filtering.
Other minor issues, such as typos and renaming variables.
v6:
Moved utility functions into different files, mainly to remove util.py's
dependency on settings.py, but also for better code organization.
Juraj Linkeš (10):
dts: add node and os abstractions
dts: add ssh command verification
dts: add dpdk build on sut
dts: add dpdk execution handling
dts: add node memory setup
dts: add test suite module
dts: add hello world testsuite
dts: add test suite config and runner
dts: add test results module
doc: update dts setup and test suite cookbook
doc/guides/tools/dts.rst | 165 ++++++++-
dts/conf.yaml | 22 +-
dts/framework/config/__init__.py | 130 ++++++-
dts/framework/config/conf_yaml_schema.json | 172 +++++++++-
dts/framework/dts.py | 185 ++++++++--
dts/framework/exception.py | 100 +++++-
dts/framework/logger.py | 24 +-
dts/framework/remote_session/__init__.py | 30 +-
dts/framework/remote_session/linux_session.py | 107 ++++++
dts/framework/remote_session/os_session.py | 175 ++++++++++
dts/framework/remote_session/posix_session.py | 221 ++++++++++++
.../remote_session/remote/__init__.py | 16 +
.../remote_session/remote/remote_session.py | 155 +++++++++
.../{ => remote}/ssh_session.py | 90 ++++-
.../remote_session/remote_session.py | 95 ------
dts/framework/settings.py | 81 ++++-
dts/framework/test_result.py | 316 ++++++++++++++++++
dts/framework/test_suite.py | 254 ++++++++++++++
dts/framework/testbed_model/__init__.py | 19 +-
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 274 +++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 +
dts/framework/testbed_model/node.py | 172 ++++++++--
dts/framework/testbed_model/sut_node.py | 309 +++++++++++++++++
dts/framework/utils.py | 56 +++-
dts/tests/TestSuite_hello_world.py | 64 ++++
26 files changed, 3064 insertions(+), 211 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
create mode 100644 dts/framework/remote_session/remote/remote_session.py
rename dts/framework/remote_session/{ => remote}/ssh_session.py (65%)
delete mode 100644 dts/framework/remote_session/remote_session.py
create mode 100644 dts/framework/test_result.py
create mode 100644 dts/framework/test_suite.py
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
create mode 100644 dts/framework/testbed_model/sut_node.py
create mode 100644 dts/tests/TestSuite_hello_world.py
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 01/10] dts: add node and os abstractions
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
@ 2023-03-03 10:24 ` Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 02/10] dts: add ssh command verification Juraj Linkeš
` (9 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:24 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The abstraction model in DTS is as follows:
Node, defining and implementing methods common to SUT (system under
test) Node and TG (traffic generator) Node, and serving as the base of both.
Remote Session, defining and implementing methods common to any remote
session implementation, such as SSH Session.
OSSession, defining and implementing methods common to any operating
system/distribution, such as Linux.
OSSession uses a derived Remote Session and Node in turn uses a derived
OSSession. This split delegates OS-specific and connection-specific code
to specialized classes designed to handle the differences.
The base classes implement the methods or parts of methods that are
common to all implementations and define abstract methods that must be
implemented by derived classes.
Part of the abstractions is the DTS test execution skeleton:
execution setup, build setup and then test execution.
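In class-sketch form, a condensed illustration of the hierarchy added by
this patch (bodies elided):

    class RemoteSession(ABC):          # transport-specific, e.g. SSH
        ...

    class SSHSession(RemoteSession):   # pexpect-based SSH implementation
        ...

    class OSSession(ABC):              # OS/distribution-specific behavior
        remote_session: RemoteSession  # uses a derived Remote Session

    class PosixSession(OSSession):
        ...

    class LinuxSession(PosixSession):
        ...

    class Node:                        # base of SUT and TG nodes
        main_session: OSSession        # uses a derived OSSession

    class SutNode(Node):
        ...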
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 11 +-
dts/framework/config/__init__.py | 73 +++++++-
dts/framework/config/conf_yaml_schema.json | 76 +++++++-
dts/framework/dts.py | 162 ++++++++++++++----
dts/framework/exception.py | 46 ++++-
dts/framework/logger.py | 24 +--
dts/framework/remote_session/__init__.py | 30 +++-
dts/framework/remote_session/linux_session.py | 11 ++
dts/framework/remote_session/os_session.py | 46 +++++
dts/framework/remote_session/posix_session.py | 12 ++
.../remote_session/remote/__init__.py | 16 ++
.../{ => remote}/remote_session.py | 41 +++--
.../{ => remote}/ssh_session.py | 20 +--
dts/framework/settings.py | 15 +-
dts/framework/testbed_model/__init__.py | 10 +-
dts/framework/testbed_model/node.py | 109 +++++++++---
dts/framework/testbed_model/sut_node.py | 13 ++
17 files changed, 594 insertions(+), 121 deletions(-)
create mode 100644 dts/framework/remote_session/linux_session.py
create mode 100644 dts/framework/remote_session/os_session.py
create mode 100644 dts/framework/remote_session/posix_session.py
create mode 100644 dts/framework/remote_session/remote/__init__.py
rename dts/framework/remote_session/{ => remote}/remote_session.py (61%)
rename dts/framework/remote_session/{ => remote}/ssh_session.py (91%)
create mode 100644 dts/framework/testbed_model/sut_node.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1aaa593612..03696d2bab 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -1,9 +1,16 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2022 The DPDK contributors
+# Copyright 2022-2023 The DPDK contributors
executions:
- - system_under_test: "SUT 1"
+ - build_targets:
+ - arch: x86_64
+ os: linux
+ cpu: native
+ compiler: gcc
+ compiler_wrapper: ccache
+ system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ os: linux
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 214be8e7f4..e3e2d74eac 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -1,15 +1,17 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-Generic port and topology nodes configuration file load function
+Yaml config parsing methods
"""
import json
import os.path
import pathlib
from dataclasses import dataclass
+from enum import Enum, auto, unique
from typing import Any
import warlock # type: ignore
@@ -18,6 +20,47 @@
from framework.settings import SETTINGS
+class StrEnum(Enum):
+ @staticmethod
+ def _generate_next_value_(
+ name: str, start: int, count: int, last_values: object
+ ) -> str:
+ return name
+
+
+@unique
+class Architecture(StrEnum):
+ i686 = auto()
+ x86_64 = auto()
+ x86_32 = auto()
+ arm64 = auto()
+ ppc64le = auto()
+
+
+@unique
+class OS(StrEnum):
+ linux = auto()
+ freebsd = auto()
+ windows = auto()
+
+
+@unique
+class CPUType(StrEnum):
+ native = auto()
+ armv8a = auto()
+ dpaa2 = auto()
+ thunderx = auto()
+ xgene1 = auto()
+
+
+@unique
+class Compiler(StrEnum):
+ gcc = auto()
+ clang = auto()
+ icc = auto()
+ msvc = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -29,6 +72,7 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ os: OS
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -37,19 +81,44 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ os=OS(d["os"]),
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class BuildTargetConfiguration:
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+ name: str
+
+ @staticmethod
+ def from_dict(d: dict) -> "BuildTargetConfiguration":
+ return BuildTargetConfiguration(
+ arch=Architecture(d["arch"]),
+ os=OS(d["os"]),
+ cpu=CPUType(d["cpu"]),
+ compiler=Compiler(d["compiler"]),
+ name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
+ build_targets: list[BuildTargetConfiguration]
system_under_test: NodeConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ build_targets: list[BuildTargetConfiguration] = list(
+ map(BuildTargetConfiguration.from_dict, d["build_targets"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
return ExecutionConfiguration(
+ build_targets=build_targets,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 6b8d6ccd05..9170307fbe 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -5,6 +5,68 @@
"node_name": {
"type": "string",
"description": "A unique identifier for a node"
+ },
+ "OS": {
+ "type": "string",
+ "enum": [
+ "linux"
+ ]
+ },
+ "cpu": {
+ "type": "string",
+ "description": "Native should be the default on x86",
+ "enum": [
+ "native",
+ "armv8a",
+ "dpaa2",
+ "thunderx",
+ "xgene1"
+ ]
+ },
+ "compiler": {
+ "type": "string",
+ "enum": [
+ "gcc",
+ "clang",
+ "icc",
+        "msvc"
+ ]
+ },
+ "build_target": {
+ "type": "object",
+ "description": "Targets supported by DTS",
+ "properties": {
+ "arch": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "x86_64",
+ "arm64",
+ "ppc64le",
+ "other"
+ ]
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
+ },
+ "cpu": {
+ "$ref": "#/definitions/cpu"
+ },
+ "compiler": {
+ "$ref": "#/definitions/compiler"
+ },
+ "compiler_wrapper": {
+ "type": "string",
+ "description": "This will be added before compiler to the CC variable when building DPDK. Optional."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "arch",
+ "os",
+ "cpu",
+ "compiler"
+ ]
}
},
"type": "object",
@@ -29,13 +91,17 @@
"password": {
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
+ },
+ "os": {
+ "$ref": "#/definitions/OS"
}
},
"additionalProperties": false,
"required": [
"name",
"hostname",
- "user"
+ "user",
+ "os"
]
},
"minimum": 1
@@ -45,12 +111,20 @@
"items": {
"type": "object",
"properties": {
+ "build_targets": {
+ "type": "array",
+ "items": {
+ "$ref": "#/definitions/build_target"
+ },
+ "minimum": 1
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
"required": [
+ "build_targets",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index d23cfc4526..3d4170d10f 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -1,67 +1,157 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2019 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
-import traceback
-from collections.abc import Iterable
-from framework.testbed_model.node import Node
-
-from .config import CONFIGURATION
+from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
+from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .testbed_model import SutNode
from .utils import check_dts_python_version
-dts_logger: DTSLOG | None = None
+dts_logger: DTSLOG = getLogger("DTSRunner")
+errors = []
def run_all() -> None:
"""
- Main process of DTS, it will run all test suites in the config file.
+ The main process of DTS. Runs all build targets in all executions from the main
+ config file.
"""
-
global dts_logger
+ global errors
# check the python version of the server that run dts
check_dts_python_version()
- dts_logger = getLogger("dts")
-
- nodes = {}
- # This try/finally block means "Run the try block, if there is an exception,
- # run the finally block before passing it upward. If there is not an exception,
- # run the finally block after the try block is finished." This helps avoid the
- # problem of python's interpreter exit context, which essentially prevents you
- # from making certain system calls. This makes cleaning up resources difficult,
- # since most of the resources in DTS are network-based, which is restricted.
+ nodes: dict[str, SutNode] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_config = execution.system_under_test
- if sut_config.name not in nodes:
- node = Node(sut_config)
- nodes[sut_config.name] = node
- node.send_command("echo Hello World")
+ sut_node = None
+ if execution.system_under_test.name in nodes:
+ # a Node with the same name already exists
+ sut_node = nodes[execution.system_under_test.name]
+ else:
+ # the SUT has not been initialized yet
+ try:
+ sut_node = SutNode(execution.system_under_test)
+ except Exception as e:
+ dts_logger.exception(
+ f"Connection to node {execution.system_under_test} failed."
+ )
+ errors.append(e)
+ else:
+ nodes[sut_node.name] = sut_node
+
+ if sut_node:
+ _run_execution(sut_node, execution)
+
+ except Exception as e:
+ dts_logger.exception("An unexpected error has occurred.")
+ errors.append(e)
+ raise
+
+ finally:
+ try:
+ for node in nodes.values():
+ node.close()
+ except Exception as e:
+ dts_logger.exception("Final cleanup of nodes failed.")
+ errors.append(e)
+ # we need to put the sys.exit call outside the finally clause to make sure
+ # that unexpected exceptions will propagate
+ # in that case, the error that should be reported is the uncaught exception as
+ # that is a severe error originating from the framework
+ # at that point, we'll only have partial results which could be impacted by the
+ # error causing the uncaught exception, making them uninterpretable
+ _exit_dts()
+
+
+def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+ """
+ Run the given execution. This involves running the execution setup as well as
+ running all build targets in the given execution.
+ """
+ dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+
+ try:
+ sut_node.set_up_execution(execution)
except Exception as e:
- # sys.exit() doesn't produce a stack trace, need to print it explicitly
- traceback.print_exc()
- raise e
+ dts_logger.exception("Execution setup failed.")
+ errors.append(e)
+
+ else:
+ for build_target in execution.build_targets:
+ _run_build_target(sut_node, build_target, execution)
finally:
- quit_execution(nodes.values())
+ try:
+ sut_node.tear_down_execution()
+ except Exception as e:
+ dts_logger.exception("Execution teardown failed.")
+ errors.append(e)
-def quit_execution(sut_nodes: Iterable[Node]) -> None:
+def _run_build_target(
+ sut_node: SutNode,
+ build_target: BuildTargetConfiguration,
+ execution: ExecutionConfiguration,
+) -> None:
"""
- Close session to SUT and TG before quit.
- Return exit status when failure occurred.
+ Run the given build target.
"""
- for sut_node in sut_nodes:
- # close all session
- sut_node.node_exit()
+ dts_logger.info(f"Running build target '{build_target.name}'.")
+
+ try:
+ sut_node.set_up_build_target(build_target)
+ except Exception as e:
+ dts_logger.exception("Build target setup failed.")
+ errors.append(e)
+
+ else:
+ _run_suites(sut_node, execution)
+
+ finally:
+ try:
+ sut_node.tear_down_build_target()
+ except Exception as e:
+ dts_logger.exception("Build target teardown failed.")
+ errors.append(e)
+
+
+def _run_suites(
+ sut_node: SutNode,
+ execution: ExecutionConfiguration,
+) -> None:
+ """
+ Use the given build_target to run execution's test suites
+ with possibly only a subset of test cases.
+ If no subset is specified, run all test cases.
+ """
+
+
+def _exit_dts() -> None:
+ """
+ Process all errors and exit with the proper exit code.
+ """
+ if errors and dts_logger:
+ dts_logger.debug("Summary of errors:")
+ for error in errors:
+ dts_logger.debug(repr(error))
+
+ return_code = ErrorSeverity.NO_ERR
+ for error in errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > return_code:
+ return_code = error_return_code
- if dts_logger is not None:
+ if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(0)
+ sys.exit(return_code)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 8b2f08a8f0..121a0f7296 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -1,20 +1,46 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
User-defined exceptions used across the framework.
"""
+from enum import IntEnum, unique
+from typing import ClassVar
-class SSHTimeoutError(Exception):
+
+@unique
+class ErrorSeverity(IntEnum):
+ """
+ The severity of errors that occur during DTS execution.
+ All exceptions are caught and the most severe error is used as return code.
+ """
+
+ NO_ERR = 0
+ GENERIC_ERR = 1
+ CONFIG_ERR = 2
+ SSH_ERR = 3
+
+
+class DTSError(Exception):
+ """
+ The base exception from which all DTS exceptions are derived.
+ Stores error severity.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
+
+
+class SSHTimeoutError(DTSError):
"""
Command execution timeout.
"""
command: str
output: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, command: str, output: str):
self.command = command
@@ -27,12 +53,13 @@ def get_output(self) -> str:
return self.output
-class SSHConnectionError(Exception):
+class SSHConnectionError(DTSError):
"""
SSH connection error.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
@@ -41,16 +68,25 @@ def __str__(self) -> str:
return f"Error trying to connect with {self.host}"
-class SSHSessionDeadError(Exception):
+class SSHSessionDeadError(DTSError):
"""
SSH session is not alive.
It can no longer be used.
"""
host: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
def __init__(self, host: str):
self.host = host
def __str__(self) -> str:
return f"SSH session with {self.host} has died"
+
+
+class ConfigurationError(DTSError):
+ """
+ Raised when an invalid configuration is encountered.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index a31fcc8242..bb2991e994 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
DTS logger module with several log level. DTS framework and TestSuite logs
@@ -33,17 +33,17 @@ class DTSLOG(logging.LoggerAdapter):
DTS log class for framework and testsuite.
"""
- logger: logging.Logger
+ _logger: logging.Logger
node: str
sh: logging.StreamHandler
fh: logging.FileHandler
verbose_fh: logging.FileHandler
def __init__(self, logger: logging.Logger, node: str = "suite"):
- self.logger = logger
+ self._logger = logger
# 1 means log everything, this will be used by file handlers if their level
# is not set
- self.logger.setLevel(1)
+ self._logger.setLevel(1)
self.node = node
@@ -55,9 +55,13 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
if SETTINGS.verbose is True:
sh.setLevel(logging.DEBUG)
- self.logger.addHandler(sh)
+ self._logger.addHandler(sh)
self.sh = sh
+ # prepare the output folder
+ if not os.path.exists(SETTINGS.output_dir):
+ os.mkdir(SETTINGS.output_dir)
+
logging_path_prefix = os.path.join(SETTINGS.output_dir, node)
fh = logging.FileHandler(f"{logging_path_prefix}.log")
@@ -68,7 +72,7 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(fh)
+ self._logger.addHandler(fh)
self.fh = fh
# This outputs EVERYTHING, intended for post-mortem debugging
@@ -82,10 +86,10 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
)
)
- self.logger.addHandler(verbose_fh)
+ self._logger.addHandler(verbose_fh)
self.verbose_fh = verbose_fh
- super(DTSLOG, self).__init__(self.logger, dict(node=self.node))
+ super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
def logger_exit(self) -> None:
"""
@@ -93,7 +97,7 @@ def logger_exit(self) -> None:
"""
for handler in (self.sh, self.fh, self.verbose_fh):
handler.flush()
- self.logger.removeHandler(handler)
+ self._logger.removeHandler(handler)
def getLogger(name: str, node: str = "suite") -> DTSLOG:
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index a227d8db22..747316c78a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -1,14 +1,30 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
-from framework.config import NodeConfiguration
+"""
+The package provides modules for managing remote connections to a remote host (node),
+differentiated by OS.
+The package provides a factory function, create_session, that returns the appropriate
+remote connection based on the passed configuration. The differences are in the
+underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""
+
+# pylama:ignore=W0611
+
+from framework.config import OS, NodeConfiguration
+from framework.exception import ConfigurationError
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
-from .ssh_session import SSHSession
+from .linux_session import LinuxSession
+from .os_session import OSSession
+from .remote import RemoteSession, SSHSession
-def create_remote_session(
+def create_session(
node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
- return SSHSession(node_config, name, logger)
+) -> OSSession:
+ match node_config.os:
+ case OS.linux:
+ return LinuxSession(node_config, name, logger)
+ case _:
+ raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
new file mode 100644
index 0000000000..9d14166077
--- /dev/null
+++ b/dts/framework/remote_session/linux_session.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .posix_session import PosixSession
+
+
+class LinuxSession(PosixSession):
+ """
+ The implementation of non-Posix compliant parts of Linux remote sessions.
+ """
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
new file mode 100644
index 0000000000..7a4cc5e669
--- /dev/null
+++ b/dts/framework/remote_session/os_session.py
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from abc import ABC
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote import RemoteSession, create_remote_session
+
+
+class OSSession(ABC):
+ """
+ The OS classes create a DTS node remote session and implement OS specific
+    behavior. There are a few control methods implemented by the base class; the rest need
+ to be implemented by derived classes.
+ """
+
+ _config: NodeConfiguration
+ name: str
+ _logger: DTSLOG
+ remote_session: RemoteSession
+
+ def __init__(
+ self,
+ node_config: NodeConfiguration,
+ name: str,
+ logger: DTSLOG,
+ ):
+ self._config = node_config
+ self.name = name
+ self._logger = logger
+ self.remote_session = create_remote_session(node_config, name, logger)
+
+ def close(self, force: bool = False) -> None:
+ """
+ Close the remote session.
+ """
+ self.remote_session.close(force)
+
+ def is_alive(self) -> bool:
+ """
+ Check whether the remote session is still responding.
+ """
+ return self.remote_session.is_alive()
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
new file mode 100644
index 0000000000..110b6a4804
--- /dev/null
+++ b/dts/framework/remote_session/posix_session.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2023 University of New Hampshire
+
+from .os_session import OSSession
+
+
+class PosixSession(OSSession):
+ """
+ An intermediary class implementing the Posix compliant parts of
+ Linux and other OS remote sessions.
+ """
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
new file mode 100644
index 0000000000..f3092f8bbe
--- /dev/null
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from framework.config import NodeConfiguration
+from framework.logger import DTSLOG
+
+from .remote_session import RemoteSession
+from .ssh_session import SSHSession
+
+
+def create_remote_session(
+ node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+ return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
similarity index 61%
rename from dts/framework/remote_session/remote_session.py
rename to dts/framework/remote_session/remote/remote_session.py
index 33047d9d0a..7c7b30225f 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import dataclasses
from abc import ABC, abstractmethod
@@ -19,14 +19,23 @@ class HistoryRecord:
class RemoteSession(ABC):
+ """
+ The base class for defining which methods must be implemented in order to connect
+ to a remote host (node) and maintain a remote session. The derived classes are
+ supposed to implement/use some underlying transport protocol (e.g. SSH) to
+ implement the methods. On top of that, it provides some basic services common to
+ all derived classes, such as keeping history and logging what's being executed
+ on the remote node.
+ """
+
name: str
hostname: str
ip: str
port: int | None
username: str
password: str
- logger: DTSLOG
history: list[HistoryRecord]
+ _logger: DTSLOG
_node_config: NodeConfiguration
def __init__(
@@ -46,31 +55,34 @@ def __init__(
self.port = int(port)
self.username = node_config.user
self.password = node_config.password or ""
- self.logger = logger
self.history = []
- self.logger.info(f"Connecting to {self.username}@{self.hostname}.")
+ self._logger = logger
+ self._logger.info(f"Connecting to {self.username}@{self.hostname}.")
self._connect()
- self.logger.info(f"Connection to {self.username}@{self.hostname} successful.")
+ self._logger.info(f"Connection to {self.username}@{self.hostname} successful.")
@abstractmethod
def _connect(self) -> None:
"""
Create connection to assigned node.
"""
- pass
def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
- self.logger.info(f"Sending: {command}")
+ """
+ Send a command and return the output.
+ """
+ self._logger.info(f"Sending: {command}")
out = self._send_command(command, timeout)
- self.logger.debug(f"Received from {command}: {out}")
+ self._logger.debug(f"Received from {command}: {out}")
self._history_add(command=command, output=out)
return out
@abstractmethod
def _send_command(self, command: str, timeout: float) -> str:
"""
- Send a command and return the output.
+ Use the underlying protocol to execute the command and return the output
+ of the command.
"""
def _history_add(self, command: str, output: str) -> None:
@@ -79,17 +91,20 @@ def _history_add(self, command: str, output: str) -> None:
)
def close(self, force: bool = False) -> None:
- self.logger.logger_exit()
+ """
+ Close the remote session and free all used resources.
+ """
+ self._logger.logger_exit()
self._close(force)
@abstractmethod
def _close(self, force: bool = False) -> None:
"""
- Close the remote session, freeing all used resources.
+ Execute protocol specific steps needed to close the session properly.
"""
@abstractmethod
def is_alive(self) -> bool:
"""
- Check whether the session is still responding.
+ Check whether the remote session is still responding.
"""
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/ssh_session.py
rename to dts/framework/remote_session/remote/ssh_session.py
index 7ec327054d..96175f5284 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import time
@@ -17,7 +17,7 @@
class SSHSession(RemoteSession):
"""
- Module for creating Pexpect SSH sessions to a node.
+ Module for creating Pexpect SSH remote sessions.
"""
session: pxssh.pxssh
@@ -56,9 +56,9 @@ def _connect(self) -> None:
)
break
except Exception as e:
- self.logger.warning(e)
+ self._logger.warning(e)
time.sleep(2)
- self.logger.info(
+ self._logger.info(
f"Retrying connection: retry number {retry_attempt + 1}."
)
else:
@@ -67,13 +67,13 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
except Exception as e:
- self.logger.error(RED(str(e)))
+ self._logger.error(RED(str(e)))
if getattr(self, "port", None):
suggestion = (
f"\nSuggestion: Check if the firewall on {self.hostname} is "
f"stopped.\n"
)
- self.logger.info(GREEN(suggestion))
+ self._logger.info(GREEN(suggestion))
raise SSHConnectionError(self.hostname)
@@ -87,8 +87,8 @@ def send_expect(
try:
retval = int(ret_status)
if retval:
- self.logger.error(f"Command: {command} failure!")
- self.logger.error(ret)
+ self._logger.error(f"Command: {command} failure!")
+ self._logger.error(ret)
return retval
else:
return ret
@@ -97,7 +97,7 @@ def send_expect(
else:
return ret
except Exception as e:
- self.logger.error(
+ self._logger.error(
f"Exception happened in [{command}] and output is "
f"[{self._get_output()}]"
)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 800f2c7b7f..6422b23499 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2021 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
# Copyright(c) 2022 University of New Hampshire
import argparse
@@ -23,7 +23,7 @@ def __init__(
default: str = None,
type: Callable[[str], _T | argparse.FileType | None] = None,
choices: Iterable[_T] | None = None,
- required: bool = True,
+ required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None:
@@ -63,13 +63,17 @@ class _Settings:
def _get_parser() -> argparse.ArgumentParser:
- parser = argparse.ArgumentParser(description="DPDK test framework.")
+ parser = argparse.ArgumentParser(
+ description="Run DPDK test suites. All options may be specified with "
+ "the environment variables provided in brackets. "
+ "Command line arguments have higher priority.",
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+ )
parser.add_argument(
"--config-file",
action=_env_arg("DTS_CFG_FILE"),
default="conf.yaml",
- required=False,
help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
"and targets.",
)
@@ -79,7 +83,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--output",
action=_env_arg("DTS_OUTPUT_DIR"),
default="output",
- required=False,
help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
)
@@ -88,7 +91,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
- required=False,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
)
@@ -98,7 +100,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--verbose",
action=_env_arg("DTS_VERBOSE"),
default="N",
- required=False,
help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
"to the console.",
)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index c5512e5812..8ead9db482 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -1,7 +1,13 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
"""
-This module contains the classes used to model the physical traffic generator,
+This package contains the classes used to model the physical traffic generator,
system under test and any other components that need to be interacted with.
"""
+
+# pylama:ignore=W0611
+
+from .node import Node
+from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 8437975416..e1f06bc389 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -1,62 +1,119 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
"""
A node is a generic host that DTS connects to and manages.
"""
-from framework.config import NodeConfiguration
+from framework.config import (
+ BuildTargetConfiguration,
+ ExecutionConfiguration,
+ NodeConfiguration,
+)
from framework.logger import DTSLOG, getLogger
-from framework.remote_session import RemoteSession, create_remote_session
-from framework.settings import SETTINGS
+from framework.remote_session import OSSession, create_session
class Node(object):
"""
- Basic module for node management. This module implements methods that
+ Basic class for node management. This class implements methods that
manage a node, such as information gathering (of CPU/PCI/NIC) and
environment setup.
"""
+ main_session: OSSession
+ config: NodeConfiguration
name: str
- main_session: RemoteSession
- logger: DTSLOG
- _config: NodeConfiguration
- _other_sessions: list[RemoteSession]
+ _logger: DTSLOG
+ _other_sessions: list[OSSession]
def __init__(self, node_config: NodeConfiguration):
- self._config = node_config
+ self.config = node_config
+ self.name = node_config.name
+ self._logger = getLogger(self.name)
+ self.main_session = create_session(self.config, self.name, self._logger)
+
self._other_sessions = []
- self.name = node_config.name
- self.logger = getLogger(self.name)
- self.logger.info(f"Created node: {self.name}")
- self.main_session = create_remote_session(self._config, self.name, self.logger)
+ self._logger.info(f"Created node: {self.name}")
- def send_command(self, cmds: str, timeout: float = SETTINGS.timeout) -> str:
+ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
- Send commands to node and return string before timeout.
+ Perform the execution setup that will be done for each execution
+ this node is part of.
"""
+ self._set_up_execution(execution_config)
- return self.main_session.send_command(cmds, timeout)
+ def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
- def create_session(self, name: str) -> RemoteSession:
- connection = create_remote_session(
- self._config,
- name,
- getLogger(name, node=self.name),
+ def tear_down_execution(self) -> None:
+ """
+ Perform the execution teardown that will be done after each execution
+ this node is part of concludes.
+ """
+ self._tear_down_execution()
+
+ def _tear_down_execution(self) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Perform the build target setup that will be done for each build target
+ tested on this node.
+ """
+ self._set_up_build_target(build_target_config)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def tear_down_build_target(self) -> None:
+ """
+ Perform the build target teardown that will be done after each build target
+ tested on this node.
+ """
+ self._tear_down_build_target()
+
+ def _tear_down_build_target(self) -> None:
+ """
+        This method exists to be optionally overridden by derived classes and
+ is not decorated so that the derived class doesn't have to use the decorator.
+ """
+
+ def create_session(self, name: str) -> OSSession:
+ """
+ Create and return a new OSSession tailored to the remote OS.
+ """
+ session_name = f"{self.name} {name}"
+ connection = create_session(
+ self.config,
+ session_name,
+ getLogger(session_name, node=self.name),
)
self._other_sessions.append(connection)
return connection
- def node_exit(self) -> None:
+ def close(self) -> None:
"""
- Recover all resource before node exit
+ Close all connections and free other resources.
"""
if self.main_session:
self.main_session.close()
for session in self._other_sessions:
session.close()
- self.logger.logger_exit()
+ self._logger.logger_exit()
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
new file mode 100644
index 0000000000..42acb6f9b2
--- /dev/null
+++ b/dts/framework/testbed_model/sut_node.py
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from .node import Node
+
+
+class SutNode(Node):
+ """
+ A class for managing connections to the System under Test, providing
+ methods that retrieve the necessary information about the node (such as
+ CPU, memory and NIC details) and configuration capabilities.
+ """
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 02/10] dts: add ssh command verification
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 01/10] dts: add node and os abstractions Juraj Linkeš
@ 2023-03-03 10:24 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 03/10] dts: add dpdk build on sut Juraj Linkeš
` (8 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:24 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
This is a basic capability needed to check whether the command execution
was successful or not. If not, raise a RemoteCommandExecutionError. When
a failure is expected, the caller is supposed to catch the exception.
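An illustrative sketch of the intended usage, given a RemoteSession
instance `session` (the commands are made-up examples):

    from framework.exception import RemoteCommandExecutionError

    # With verify=True, a non-zero exit status raises an exception.
    session.send_command("meson setup build", verify=True)

    # When a failure is expected, the caller catches it instead.
    try:
        session.send_command("ls /nonexistent", verify=True)
    except RemoteCommandExecutionError:
        pass  # the expected failure case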
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/exception.py | 23 +++++++-
.../remote_session/remote/remote_session.py | 55 +++++++++++++------
.../remote_session/remote/ssh_session.py | 12 +++-
3 files changed, 69 insertions(+), 21 deletions(-)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 121a0f7296..e776b42bd9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -21,7 +21,8 @@ class ErrorSeverity(IntEnum):
NO_ERR = 0
GENERIC_ERR = 1
CONFIG_ERR = 2
- SSH_ERR = 3
+ REMOTE_CMD_EXEC_ERR = 3
+ SSH_ERR = 4
class DTSError(Exception):
@@ -90,3 +91,23 @@ class ConfigurationError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
+
+
+class RemoteCommandExecutionError(DTSError):
+ """
+ Raised when a command executed on a Node returns a non-zero exit status.
+ """
+
+ command: str
+ command_return_code: int
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+ def __init__(self, command: str, command_return_code: int):
+ self.command = command
+ self.command_return_code = command_return_code
+
+ def __str__(self) -> str:
+ return (
+ f"Command {self.command} returned a non-zero exit code: "
+ f"{self.command_return_code}"
+ )
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 7c7b30225f..5ac395ec79 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -7,15 +7,29 @@
from abc import ABC, abstractmethod
from framework.config import NodeConfiguration
+from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
@dataclasses.dataclass(slots=True, frozen=True)
-class HistoryRecord:
+class CommandResult:
+ """
+ The result of remote execution of a command.
+ """
+
name: str
command: str
- output: str | int
+ stdout: str
+ stderr: str
+ return_code: int
+
+ def __str__(self) -> str:
+ return (
+ f"stdout: '{self.stdout}'\n"
+ f"stderr: '{self.stderr}'\n"
+ f"return_code: '{self.return_code}'"
+ )
class RemoteSession(ABC):
@@ -34,7 +48,7 @@ class RemoteSession(ABC):
port: int | None
username: str
password: str
- history: list[HistoryRecord]
+ history: list[CommandResult]
_logger: DTSLOG
_node_config: NodeConfiguration
@@ -68,28 +82,33 @@ def _connect(self) -> None:
Create connection to assigned node.
"""
- def send_command(self, command: str, timeout: float = SETTINGS.timeout) -> str:
+ def send_command(
+ self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ ) -> CommandResult:
"""
- Send a command and return the output.
+ Send a command to the connected node and return CommandResult.
+ If verify is True, check the return code of the executed command
+ and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: {command}")
- out = self._send_command(command, timeout)
- self._logger.debug(f"Received from {command}: {out}")
- self._history_add(command=command, output=out)
- return out
+ self._logger.info(f"Sending: '{command}'")
+ result = self._send_command(command, timeout)
+ if verify and result.return_code:
+ self._logger.debug(
+ f"Command '{command}' failed with return code '{result.return_code}'"
+ )
+ self._logger.debug(f"stdout: '{result.stdout}'")
+ self._logger.debug(f"stderr: '{result.stderr}'")
+ raise RemoteCommandExecutionError(command, result.return_code)
+ self._logger.debug(f"Received from '{command}':\n{result}")
+ self.history.append(result)
+ return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return the output
- of the command.
+ Use the underlying protocol to execute the command and return CommandResult.
"""
- def _history_add(self, command: str, output: str) -> None:
- self.history.append(
- HistoryRecord(name=self.name, command=command, output=output)
- )
-
def close(self, force: bool = False) -> None:
"""
Close the remote session and free all used resources.
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index 96175f5284..c2362e2fdf 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -12,7 +12,7 @@
from framework.logger import DTSLOG
from framework.utils import GREEN, RED
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
class SSHSession(RemoteSession):
@@ -66,6 +66,7 @@ def _connect(self) -> None:
self.send_expect("stty -echo", "#")
self.send_expect("stty columns 1000", "#")
+ self.send_expect("bind 'set enable-bracketed-paste off'", "#")
except Exception as e:
self._logger.error(RED(str(e)))
if getattr(self, "port", None):
@@ -163,7 +164,14 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> str:
+ def _send_command(self, command: str, timeout: float) -> CommandResult:
+ output = self._send_command_get_output(command, timeout)
+ return_code = int(self._send_command_get_output("echo $?", timeout))
+
+ # we're capturing only stdout
+ return CommandResult(self.name, command, output, "", return_code)
+
+ def _send_command_get_output(self, command: str, timeout: float) -> str:
try:
self._clean_session()
self._send_line(command)
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 03/10] dts: add dpdk build on sut
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 01/10] dts: add node and os abstractions Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 02/10] dts: add ssh command verification Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-20 8:30 ` David Marchand
2023-03-03 10:25 ` [PATCH v6 04/10] dts: add dpdk execution handling Juraj Linkeš
` (7 subsequent siblings)
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Add the ability to build DPDK and apps on the SUT, using a configured
target.
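To illustrate the build helpers this patch introduces, here's a short
sketch using EnvVarsDict and MesonArgs from dts/framework/utils.py
(the values are made up):

    from framework.utils import EnvVarsDict, MesonArgs

    env = EnvVarsDict(CC="gcc")
    args = MesonArgs(default_library="static", enable_kmods=True)

    str(env)   # 'CC=gcc' - prepended to remote commands by send_command()
    str(args)  # '--default-library=static -Denable_kmods=True'

build_dpdk() then runs meson setup (or meson configure when rebuilding)
with these arguments and finishes with ninja -C in the build directory.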
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/config/__init__.py | 2 +
dts/framework/exception.py | 17 ++
dts/framework/remote_session/os_session.py | 89 +++++++++-
dts/framework/remote_session/posix_session.py | 126 ++++++++++++++
.../remote_session/remote/remote_session.py | 38 ++++-
.../remote_session/remote/ssh_session.py | 66 +++++++-
dts/framework/settings.py | 44 ++++-
dts/framework/testbed_model/node.py | 10 ++
dts/framework/testbed_model/sut_node.py | 158 ++++++++++++++++++
dts/framework/utils.py | 36 +++-
10 files changed, 570 insertions(+), 16 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index e3e2d74eac..ca61cb10fe 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -91,6 +91,7 @@ class BuildTargetConfiguration:
os: OS
cpu: CPUType
compiler: Compiler
+ compiler_wrapper: str
name: str
@staticmethod
@@ -100,6 +101,7 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
os=OS(d["os"]),
cpu=CPUType(d["cpu"]),
compiler=Compiler(d["compiler"]),
+ compiler_wrapper=d.get("compiler_wrapper", ""),
name=f"{d['arch']}-{d['os']}-{d['cpu']}-{d['compiler']}",
)
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index e776b42bd9..b4545a5a40 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -23,6 +23,7 @@ class ErrorSeverity(IntEnum):
CONFIG_ERR = 2
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
+ DPDK_BUILD_ERR = 10
class DTSError(Exception):
@@ -111,3 +112,19 @@ def __str__(self) -> str:
f"Command {self.command} returned a non-zero exit code: "
f"{self.command_return_code}"
)
+
+
+class RemoteDirectoryExistsError(DTSError):
+ """
+ Raised when a remote directory to be created already exists.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+
+
+class DPDKBuildError(DTSError):
+ """
+ Raised when DPDK build fails for any reason.
+ """
+
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 7a4cc5e669..47e9f2889b 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -2,10 +2,13 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
-from abc import ABC
+from abc import ABC, abstractmethod
+from pathlib import PurePath
-from framework.config import NodeConfiguration
+from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, MesonArgs
from .remote import RemoteSession, create_remote_session
@@ -44,3 +47,85 @@ def is_alive(self) -> bool:
Check whether the remote session is still responding.
"""
return self.remote_session.is_alive()
+
+ @abstractmethod
+ def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+ """
+ Try to find DPDK remote dir in remote_dir.
+ """
+
+ @abstractmethod
+ def get_remote_tmp_dir(self) -> PurePath:
+ """
+ Get the path of the temporary directory of the remote OS.
+ """
+
+ @abstractmethod
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for the target architecture. Get
+ information from the node if needed.
+ """
+
+ @abstractmethod
+ def join_remote_path(self, *args: str | PurePath) -> PurePath:
+ """
+ Join path parts using the path separator that fits the remote OS.
+ """
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file
+ on the remote Node associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated remote Node to destination_file on local storage.
+ """
+
+ @abstractmethod
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ """
+ Remove remote directory, by default remove recursively and forcefully.
+ """
+
+ @abstractmethod
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ """
+ Extract remote tarball in place. If expected_dir is a non-empty string, check
+ whether the dir exists after extracting the archive.
+ """
+
+ @abstractmethod
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ """
+ Build DPDK in the input dir with specified environment variables and meson
+ arguments.
+ """
+
+ @abstractmethod
+ def get_dpdk_version(self, version_path: str | PurePath) -> str:
+ """
+ Inspect DPDK version on the remote node from version_path.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index 110b6a4804..c2580f6a42 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,13 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from pathlib import PurePath, PurePosixPath
+
+from framework.config import Architecture
+from framework.exception import DPDKBuildError, RemoteCommandExecutionError
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, MesonArgs
+
from .os_session import OSSession
@@ -10,3 +17,122 @@ class PosixSession(OSSession):
An intermediary class implementing the Posix compliant parts of
Linux and other OS remote sessions.
"""
+
+ @staticmethod
+ def combine_short_options(**opts: bool) -> str:
+ ret_opts = ""
+ for opt, include in opts.items():
+ if include:
+ ret_opts = f"{ret_opts}{opt}"
+
+ if ret_opts:
+ ret_opts = f" -{ret_opts}"
+
+ return ret_opts
+
+ def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+ remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
+ result = self.remote_session.send_command(f"ls -d {remote_guess} | tail -1")
+ return PurePosixPath(result.stdout)
+
+ def get_remote_tmp_dir(self) -> PurePosixPath:
+ return PurePosixPath("/tmp")
+
+ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
+ """
+ Create extra environment variables needed for i686 arch build. Get information
+ from the node if needed.
+ """
+ env_vars = {}
+ if arch == Architecture.i686:
+ # find the pkg-config path and store it in PKG_CONFIG_LIBDIR
+ out = self.remote_session.send_command("find /usr -type d -name pkgconfig")
+ pkg_path = ""
+ res_path = out.stdout.split("\r\n")
+ for cur_path in res_path:
+ if "i386" in cur_path:
+ pkg_path = cur_path
+ break
+ assert pkg_path != "", "i386 pkg-config path not found"
+
+ env_vars["CFLAGS"] = "-m32"
+ env_vars["PKG_CONFIG_LIBDIR"] = pkg_path
+
+ return env_vars
+
+ def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+ return PurePosixPath(*args)
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ self.remote_session.copy_file(source_file, destination_file, source_remote)
+
+ def remove_remote_dir(
+ self,
+ remote_dir_path: str | PurePath,
+ recursive: bool = True,
+ force: bool = True,
+ ) -> None:
+ opts = PosixSession.combine_short_options(r=recursive, f=force)
+ self.remote_session.send_command(f"rm{opts} {remote_dir_path}")
+
+ def extract_remote_tarball(
+ self,
+ remote_tarball_path: str | PurePath,
+ expected_dir: str | PurePath | None = None,
+ ) -> None:
+ self.remote_session.send_command(
+ f"tar xfm {remote_tarball_path} "
+ f"-C {PurePosixPath(remote_tarball_path).parent}",
+ 60,
+ )
+ if expected_dir:
+ self.remote_session.send_command(f"ls {expected_dir}", verify=True)
+
+ def build_dpdk(
+ self,
+ env_vars: EnvVarsDict,
+ meson_args: MesonArgs,
+ remote_dpdk_dir: str | PurePath,
+ remote_dpdk_build_dir: str | PurePath,
+ rebuild: bool = False,
+ timeout: float = SETTINGS.compile_timeout,
+ ) -> None:
+ try:
+ if rebuild:
+ # reconfigure, then build
+ self._logger.info("Reconfiguring DPDK build.")
+ self.remote_session.send_command(
+ f"meson configure {meson_args} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+ else:
+ # fresh build - remove target dir first, then build from scratch
+ self._logger.info("Configuring DPDK build from scratch.")
+ self.remove_remote_dir(remote_dpdk_build_dir)
+ self.remote_session.send_command(
+ f"meson setup "
+ f"{meson_args} {remote_dpdk_dir} {remote_dpdk_build_dir}",
+ timeout,
+ verify=True,
+ env=env_vars,
+ )
+
+ self._logger.info("Building DPDK.")
+ self.remote_session.send_command(
+ f"ninja -C {remote_dpdk_build_dir}", timeout, verify=True, env=env_vars
+ )
+ except RemoteCommandExecutionError as e:
+ raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
+
+ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+ out = self.remote_session.send_command(
+ f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+ )
+ return out.stdout
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 5ac395ec79..91dee3cb4f 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -5,11 +5,13 @@
import dataclasses
from abc import ABC, abstractmethod
+from pathlib import PurePath
from framework.config import NodeConfiguration
from framework.exception import RemoteCommandExecutionError
from framework.logger import DTSLOG
from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict
@dataclasses.dataclass(slots=True, frozen=True)
@@ -83,15 +85,22 @@ def _connect(self) -> None:
"""
def send_command(
- self, command: str, timeout: float = SETTINGS.timeout, verify: bool = False
+ self,
+ command: str,
+ timeout: float = SETTINGS.timeout,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
) -> CommandResult:
"""
- Send a command to the connected node and return CommandResult.
+ Send a command to the connected node using optional env vars
+ and return CommandResult.
If verify is True, check the return code of the executed command
and raise a RemoteCommandExecutionError if the command failed.
"""
- self._logger.info(f"Sending: '{command}'")
- result = self._send_command(command, timeout)
+ self._logger.info(
+ f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
+ )
+ result = self._send_command(command, timeout, env)
if verify and result.return_code:
self._logger.debug(
f"Command '{command}' failed with return code '{result.return_code}'"
@@ -104,9 +113,12 @@ def send_command(
return result
@abstractmethod
- def _send_command(self, command: str, timeout: float) -> CommandResult:
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
"""
- Use the underlying protocol to execute the command and return CommandResult.
+ Use the underlying protocol to execute the command using optional env vars
+ and return CommandResult.
"""
def close(self, force: bool = False) -> None:
@@ -127,3 +139,17 @@ def is_alive(self) -> bool:
"""
Check whether the remote session is still responding.
"""
+
+ @abstractmethod
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Copy source_file from local filesystem to destination_file on the remote Node
+ associated with the remote session.
+ If source_remote is True, reverse the direction - copy source_file from the
+ associated Node to destination_file on local filesystem.
+ """
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/remote/ssh_session.py
index c2362e2fdf..42ff9498a2 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/remote/ssh_session.py
@@ -4,13 +4,15 @@
# Copyright(c) 2022-2023 University of New Hampshire
import time
+from pathlib import PurePath
+import pexpect # type: ignore
from pexpect import pxssh # type: ignore
from framework.config import NodeConfiguration
from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
from framework.logger import DTSLOG
-from framework.utils import GREEN, RED
+from framework.utils import GREEN, RED, EnvVarsDict
from .remote_session import CommandResult, RemoteSession
@@ -164,16 +166,22 @@ def _flush(self) -> None:
def is_alive(self) -> bool:
return self.session.isalive()
- def _send_command(self, command: str, timeout: float) -> CommandResult:
- output = self._send_command_get_output(command, timeout)
- return_code = int(self._send_command_get_output("echo $?", timeout))
+ def _send_command(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> CommandResult:
+ output = self._send_command_get_output(command, timeout, env)
+ return_code = int(self._send_command_get_output("echo $?", timeout, None))
# we're capturing only stdout
return CommandResult(self.name, command, output, "", return_code)
- def _send_command_get_output(self, command: str, timeout: float) -> str:
+ def _send_command_get_output(
+ self, command: str, timeout: float, env: EnvVarsDict | None
+ ) -> str:
try:
self._clean_session()
+ if env:
+ command = f"{env} {command}"
self._send_line(command)
except Exception as e:
raise e
@@ -190,3 +198,51 @@ def _close(self, force: bool = False) -> None:
else:
if self.is_alive():
self.session.logout()
+
+ def copy_file(
+ self,
+ source_file: str | PurePath,
+ destination_file: str | PurePath,
+ source_remote: bool = False,
+ ) -> None:
+ """
+ Send a local file to a remote host.
+ """
+ if source_remote:
+ source_file = f"{self.username}@{self.ip}:{source_file}"
+ else:
+ destination_file = f"{self.username}@{self.ip}:{destination_file}"
+
+ port = ""
+ if self.port:
+ port = f" -P {self.port}"
+
+ command = (
+ f"scp -v{port} -o NoHostAuthenticationForLocalhost=yes"
+ f" {source_file} {destination_file}"
+ )
+
+ self._spawn_scp(command)
+
+ def _spawn_scp(self, scp_cmd: str) -> None:
+ """
+ Transfer a file with SCP
+ """
+ self._logger.info(scp_cmd)
+ p: pexpect.spawn = pexpect.spawn(scp_cmd)
+ time.sleep(0.5)
+ ssh_newkey: str = "Are you sure you want to continue connecting"
+ i: int = p.expect(
+ [ssh_newkey, "[pP]assword", "# ", pexpect.EOF, pexpect.TIMEOUT], 120
+ )
+ if i == 0: # add once in trust list
+ p.sendline("yes")
+ i = p.expect([ssh_newkey, "[pP]assword", pexpect.EOF], 2)
+
+ if i == 1:
+ time.sleep(0.5)
+ p.sendline(self.password)
+ p.expect("Exit status 0", 60)
+ if i == 4:
+ self._logger.error("SCP TIMEOUT error %d" % i)
+ p.close()
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 6422b23499..f787187ade 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -7,8 +7,11 @@
import os
from collections.abc import Callable, Iterable, Sequence
from dataclasses import dataclass
+from pathlib import Path
from typing import Any, TypeVar
+from .exception import ConfigurationError
+
_T = TypeVar("_T")
@@ -60,6 +63,9 @@ class _Settings:
output_dir: str
timeout: float
verbose: bool
+ skip_setup: bool
+ dpdk_tarball_path: Path
+ compile_timeout: float
def _get_parser() -> argparse.ArgumentParser:
@@ -91,6 +97,7 @@ def _get_parser() -> argparse.ArgumentParser:
"--timeout",
action=_env_arg("DTS_TIMEOUT"),
default=15,
+ type=float,
help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
"compiling DPDK.",
)
@@ -104,16 +111,51 @@ def _get_parser() -> argparse.ArgumentParser:
"to the console.",
)
+ parser.add_argument(
+ "-s",
+ "--skip-setup",
+ action=_env_arg("DTS_SKIP_SETUP"),
+ default="N",
+ help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+ )
+
+ parser.add_argument(
+ "--tarball",
+ "--snapshot",
+ action=_env_arg("DTS_DPDK_TARBALL"),
+ default="dpdk.tar.xz",
+ type=Path,
+ help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball "
+ "which will be used in testing.",
+ )
+
+ parser.add_argument(
+ "--compile-timeout",
+ action=_env_arg("DTS_COMPILE_TIMEOUT"),
+ default=1200,
+ type=float,
+ help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
+ )
+
return parser
+def _check_tarball_path(parsed_args: argparse.Namespace) -> None:
+ if not os.path.exists(parsed_args.tarball):
+ raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.")
+
+
def _get_settings() -> _Settings:
parsed_args = _get_parser().parse_args()
+ _check_tarball_path(parsed_args)
return _Settings(
config_file_path=parsed_args.config_file,
output_dir=parsed_args.output_dir,
- timeout=float(parsed_args.timeout),
+ timeout=parsed_args.timeout,
verbose=(parsed_args.verbose == "Y"),
+ skip_setup=(parsed_args.skip_setup == "Y"),
+ dpdk_tarball_path=parsed_args.tarball,
+ compile_timeout=parsed_args.compile_timeout,
)
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e1f06bc389..a7059b5856 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -7,6 +7,8 @@
A node is a generic host that DTS connects to and manages.
"""
+from typing import Any, Callable
+
from framework.config import (
BuildTargetConfiguration,
ExecutionConfiguration,
@@ -14,6 +16,7 @@
)
from framework.logger import DTSLOG, getLogger
from framework.remote_session import OSSession, create_session
+from framework.settings import SETTINGS
class Node(object):
@@ -117,3 +120,10 @@ def close(self) -> None:
for session in self._other_sessions:
session.close()
self._logger.logger_exit()
+
+ @staticmethod
+ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+ if SETTINGS.skip_setup:
+ return lambda *args: None
+ else:
+ return func
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 42acb6f9b2..21da33d6b3 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -2,6 +2,14 @@
# Copyright(c) 2010-2014 Intel Corporation
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+import os
+import tarfile
+from pathlib import PurePath
+
+from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.settings import SETTINGS
+from framework.utils import EnvVarsDict, MesonArgs
+
from .node import Node
@@ -10,4 +18,154 @@ class SutNode(Node):
A class for managing connections to the System under Test, providing
methods that retrieve the necessary information about the node (such as
CPU, memory and NIC details) and configuration capabilities.
+ Another key capability is building DPDK according to given build target.
"""
+
+ _build_target_config: BuildTargetConfiguration | None
+ _env_vars: EnvVarsDict
+ _remote_tmp_dir: PurePath
+ __remote_dpdk_dir: PurePath | None
+ _dpdk_version: str | None
+ _app_compile_timeout: float
+
+ def __init__(self, node_config: NodeConfiguration):
+ super(SutNode, self).__init__(node_config)
+ self._build_target_config = None
+ self._env_vars = EnvVarsDict()
+ self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
+ self.__remote_dpdk_dir = None
+ self._dpdk_version = None
+ self._app_compile_timeout = 90
+
+ @property
+ def _remote_dpdk_dir(self) -> PurePath:
+ if self.__remote_dpdk_dir is None:
+ self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
+ return self.__remote_dpdk_dir
+
+ @_remote_dpdk_dir.setter
+ def _remote_dpdk_dir(self, value: PurePath) -> None:
+ self.__remote_dpdk_dir = value
+
+ @property
+ def remote_dpdk_build_dir(self) -> PurePath:
+ if self._build_target_config:
+ return self.main_session.join_remote_path(
+ self._remote_dpdk_dir, self._build_target_config.name
+ )
+ else:
+ return self.main_session.join_remote_path(self._remote_dpdk_dir, "build")
+
+ @property
+ def dpdk_version(self) -> str:
+ if self._dpdk_version is None:
+ self._dpdk_version = self.main_session.get_dpdk_version(
+ self._remote_dpdk_dir
+ )
+ return self._dpdk_version
+
+ def _guess_dpdk_remote_dir(self) -> PurePath:
+ return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
+
+ def _set_up_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Set up DPDK on the SUT node.
+ """
+ self._configure_build_target(build_target_config)
+ self._copy_dpdk_tarball()
+ self._build_dpdk()
+
+ def _configure_build_target(
+ self, build_target_config: BuildTargetConfiguration
+ ) -> None:
+ """
+ Populate common environment variables and set build target config.
+ """
+ self._env_vars = EnvVarsDict()
+ self._build_target_config = build_target_config
+ self._env_vars.update(
+ self.main_session.get_dpdk_build_env_vars(build_target_config.arch)
+ )
+ self._env_vars["CC"] = build_target_config.compiler.name
+ if build_target_config.compiler_wrapper:
+ self._env_vars["CC"] = (
+ f"'{build_target_config.compiler_wrapper} "
+ f"{build_target_config.compiler.name}'"
+ )
+
+ @Node.skip_setup
+ def _copy_dpdk_tarball(self) -> None:
+ """
+ Copy the DPDK tarball to the SUT node and extract it there.
+ """
+ self._logger.info("Copying DPDK tarball to SUT.")
+ self.main_session.copy_file(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
+
+ # construct remote tarball path
+ # the basename is the same on local host and on remote Node
+ remote_tarball_path = self.main_session.join_remote_path(
+ self._remote_tmp_dir, os.path.basename(SETTINGS.dpdk_tarball_path)
+ )
+
+ # construct remote path after extracting
+ with tarfile.open(SETTINGS.dpdk_tarball_path) as dpdk_tar:
+ dpdk_top_dir = dpdk_tar.getnames()[0]
+ self._remote_dpdk_dir = self.main_session.join_remote_path(
+ self._remote_tmp_dir, dpdk_top_dir
+ )
+
+ self._logger.info(
+ f"Extracting DPDK tarball on SUT: "
+ f"'{remote_tarball_path}' into '{self._remote_dpdk_dir}'."
+ )
+ # clean remote path where we're extracting
+ self.main_session.remove_remote_dir(self._remote_dpdk_dir)
+
+ # then extract to remote path
+ self.main_session.extract_remote_tarball(
+ remote_tarball_path, self._remote_dpdk_dir
+ )
+
+ @Node.skip_setup
+ def _build_dpdk(self) -> None:
+ """
+ Build DPDK. Uses the already configured target. Assumes that the tarball has
+ already been copied to and extracted on the SUT node.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ )
+
+ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
+ """
+ Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
+ When app_name is 'all', build all example apps.
+ When app_name is any other string, tries to build that example app.
+ Return the directory path of the built app. If building all apps, return
+ the path to the examples directory (where all apps reside).
+ The meson_dpdk_args are keyword arguments
+ found in meson_option.txt in root DPDK directory. Do not use -D with them,
+ for example: enable_kmods=True.
+ """
+ self.main_session.build_dpdk(
+ self._env_vars,
+ MesonArgs(examples=app_name, **meson_dpdk_args), # type: ignore [arg-type]
+ # ^^ https://github.com/python/mypy/issues/11583
+ self._remote_dpdk_dir,
+ self.remote_dpdk_build_dir,
+ rebuild=True,
+ timeout=self._app_compile_timeout,
+ )
+
+ if app_name == "all":
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples"
+ )
+ return self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
+ )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index c28c8f1082..0ed591ac23 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
-# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 University of New Hampshire
import sys
@@ -28,3 +28,35 @@ def GREEN(text: str) -> str:
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+
+class EnvVarsDict(dict):
+ def __str__(self) -> str:
+ return " ".join(["=".join(item) for item in self.items()])
+
+
+class MesonArgs(object):
+ """
+ Aggregate the arguments needed to build DPDK:
+ default_library: Default library type, Meson allows "shared", "static" and "both".
+ Defaults to None, in which case the argument won't be used.
+ Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
+ Do not use -D with them, for example:
+ meson_args = MesonArgs(enable_kmods=True).
+ """
+
+ _default_library: str
+
+ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+ self._default_library = (
+ f"--default-library={default_library}" if default_library else ""
+ )
+ self._dpdk_args = " ".join(
+ (
+ f"-D{dpdk_arg_name}={dpdk_arg_value}"
+ for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
+ )
+ )
+
+ def __str__(self) -> str:
+ return " ".join(f"{self._default_library} {self._dpdk_args}".split())
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 04/10] dts: add dpdk execution handling
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (2 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 05/10] dts: add node memory setup Juraj Linkeš
` (6 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Add methods for setting up and shutting down DPDK apps and for
constructing EAL parameters.
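As a rough sketch of the resulting API (hypothetical usage; the lcore
ids are made up and the exact output depends on the node config - with
the sample conf.yaml, memory_channels is 4):

    from framework.testbed_model import LogicalCoreList

    eal = sut_node.create_eal_parameters(
        lcore_filter_specifier=LogicalCoreList("0-3"),
        append_prefix_timestamp=False,
    )
    str(eal)  # roughly '-l 0-3 -n 4 --file-prefix=dpdk'

LogicalCoreList("0-3") relies on the new expand_range() helper, which
turns the string "0-3" into [0, 1, 2, 3].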
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 4 +
dts/framework/config/__init__.py | 8 +
dts/framework/config/conf_yaml_schema.json | 25 ++
dts/framework/remote_session/linux_session.py | 18 ++
dts/framework/remote_session/os_session.py | 22 ++
dts/framework/remote_session/posix_session.py | 83 ++++++
dts/framework/testbed_model/__init__.py | 8 +
dts/framework/testbed_model/hw/__init__.py | 27 ++
dts/framework/testbed_model/hw/cpu.py | 274 ++++++++++++++++++
.../testbed_model/hw/virtual_device.py | 16 +
dts/framework/testbed_model/node.py | 43 +++
dts/framework/testbed_model/sut_node.py | 128 ++++++++
dts/framework/utils.py | 20 ++
13 files changed, 676 insertions(+)
create mode 100644 dts/framework/testbed_model/hw/__init__.py
create mode 100644 dts/framework/testbed_model/hw/cpu.py
create mode 100644 dts/framework/testbed_model/hw/virtual_device.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 03696d2bab..1648e5c3c5 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,4 +13,8 @@ nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
user: root
+ arch: x86_64
os: linux
+ lcores: ""
+ use_first_core: false
+ memory_channels: 4
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ca61cb10fe..17b917f3b3 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -72,7 +72,11 @@ class NodeConfiguration:
hostname: str
user: str
password: str | None
+ arch: Architecture
os: OS
+ lcores: str
+ use_first_core: bool
+ memory_channels: int
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
@@ -81,7 +85,11 @@ def from_dict(d: dict) -> "NodeConfiguration":
hostname=d["hostname"],
user=d["user"],
password=d.get("password"),
+ arch=Architecture(d["arch"]),
os=OS(d["os"]),
+ lcores=d.get("lcores", "1"),
+ use_first_core=d.get("use_first_core", False),
+ memory_channels=d.get("memory_channels", 1),
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 9170307fbe..334b4bd8ab 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,14 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "ARCH": {
+ "type": "string",
+ "enum": [
+ "x86_64",
+ "arm64",
+ "ppc64le"
+ ]
+ },
"OS": {
"type": "string",
"enum": [
@@ -92,8 +100,24 @@
"type": "string",
"description": "The password to use on this node. Use only as a last resort. SSH keys are STRONGLY preferred."
},
+ "arch": {
+ "$ref": "#/definitions/ARCH"
+ },
"os": {
"$ref": "#/definitions/OS"
+ },
+ "lcores": {
+ "type": "string",
+ "pattern": "^(([0-9]+|([0-9]+-[0-9]+))(,([0-9]+|([0-9]+-[0-9]+)))*)?$",
+ "description": "Optional comma-separated list of logical cores to use, e.g.: 1,2,3,4,5,18-22. Defaults to 1. An empty string means use all lcores."
+ },
+ "use_first_core": {
+ "type": "boolean",
+ "description": "Indicate whether DPDK should use the first physical core. It won't be used by default."
+ },
+ "memory_channels": {
+ "type": "integer",
+ "description": "How many memory channels to use. Optional, defaults to 1."
}
},
"additionalProperties": false,
@@ -101,6 +125,7 @@
"name",
"hostname",
"user",
+ "arch",
"os"
]
},
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 9d14166077..c49b6bb1d7 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.testbed_model import LogicalCore
+
from .posix_session import PosixSession
@@ -9,3 +11,19 @@ class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
"""
+
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ cpu_info = self.remote_session.send_command(
+ "lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#"
+ ).stdout
+ lcores = []
+ for cpu_line in cpu_info.splitlines():
+ lcore, core, socket, node = map(int, cpu_line.split(","))
+ if core == 0 and socket == 0 and not use_first_core:
+ self._logger.info("Not using the first physical core.")
+ continue
+ lcores.append(LogicalCore(lcore, core, socket, node))
+ return lcores
+
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ return dpdk_prefix
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 47e9f2889b..0a42f40a86 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -3,11 +3,13 @@
# Copyright(c) 2023 University of New Hampshire
from abc import ABC, abstractmethod
+from collections.abc import Iterable
from pathlib import PurePath
from framework.config import Architecture, NodeConfiguration
from framework.logger import DTSLOG
from framework.settings import SETTINGS
+from framework.testbed_model import LogicalCore
from framework.utils import EnvVarsDict, MesonArgs
from .remote import RemoteSession, create_remote_session
@@ -129,3 +131,23 @@ def get_dpdk_version(self, version_path: str | PurePath) -> str:
"""
Inspect DPDK version on the remote node from version_path.
"""
+
+ @abstractmethod
+ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+ """
+ Compose a list of LogicalCores present on the remote node.
+ If use_first_core is False, the first physical core won't be used.
+ """
+
+ @abstractmethod
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ """
+ Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
+ dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+ """
+
+ @abstractmethod
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ """
+ Get the DPDK file prefix that will be used when running DPDK apps.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index c2580f6a42..d38062e8d6 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -2,6 +2,8 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import re
+from collections.abc import Iterable
from pathlib import PurePath, PurePosixPath
from framework.config import Architecture
@@ -136,3 +138,84 @@ def get_dpdk_version(self, build_dir: str | PurePath) -> str:
f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
)
return out.stdout
+
+ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+ self._logger.info("Cleaning up DPDK apps.")
+ dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
+ if dpdk_runtime_dirs:
+ # kill and cleanup only if DPDK is running
+ dpdk_pids = self._get_dpdk_pids(dpdk_runtime_dirs)
+ for dpdk_pid in dpdk_pids:
+ self.remote_session.send_command(f"kill -9 {dpdk_pid}", 20)
+ self._check_dpdk_hugepages(dpdk_runtime_dirs)
+ self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
+
+ def _get_dpdk_runtime_dirs(
+ self, dpdk_prefix_list: Iterable[str]
+ ) -> list[PurePosixPath]:
+ prefix = PurePosixPath("/var", "run", "dpdk")
+ if not dpdk_prefix_list:
+ remote_prefixes = self._list_remote_dirs(prefix)
+ if not remote_prefixes:
+ dpdk_prefix_list = []
+ else:
+ dpdk_prefix_list = remote_prefixes
+
+ return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
+
+ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
+ """
+ Return a list of directories in remote_path.
+ If remote_path doesn't exist, return None.
+ """
+ out = self.remote_session.send_command(
+ f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+ ).stdout
+ if "No such file or directory" in out:
+ return None
+ else:
+ return out.splitlines()
+
+ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+ pids = []
+ pid_regex = r"p(\d+)"
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ dpdk_config_file = PurePosixPath(dpdk_runtime_dir, "config")
+ if self._remote_files_exists(dpdk_config_file):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {dpdk_config_file}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ for out_line in out.splitlines():
+ match = re.match(pid_regex, out_line)
+ if match:
+ pids.append(int(match.group(1)))
+ return pids
+
+ def _remote_files_exists(self, remote_path: PurePath) -> bool:
+ result = self.remote_session.send_command(f"test -e {remote_path}")
+ return not result.return_code
+
+ def _check_dpdk_hugepages(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
+ if self._remote_files_exists(hugepage_info):
+ out = self.remote_session.send_command(
+ f"lsof -Fp {hugepage_info}"
+ ).stdout
+ if out and "No such file or directory" not in out:
+ self._logger.warning("Some DPDK processes did not free hugepages.")
+ self._logger.warning("*******************************************")
+ self._logger.warning(out)
+ self._logger.warning("*******************************************")
+
+ def _remove_dpdk_runtime_dirs(
+ self, dpdk_runtime_dirs: Iterable[str | PurePath]
+ ) -> None:
+ for dpdk_runtime_dir in dpdk_runtime_dirs:
+ self.remove_remote_dir(dpdk_runtime_dir)
+
+ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+ return ""
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ead9db482..5be3e4c48d 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,5 +9,13 @@
# pylama:ignore=W0611
+from .hw import (
+ LogicalCore,
+ LogicalCoreCount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ VirtualDevice,
+ lcore_filter,
+)
from .node import Node
from .sut_node import SutNode
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
new file mode 100644
index 0000000000..88ccac0b0e
--- /dev/null
+++ b/dts/framework/testbed_model/hw/__init__.py
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+# pylama:ignore=W0611
+
+from .cpu import (
+ LogicalCore,
+ LogicalCoreCount,
+ LogicalCoreCountFilter,
+ LogicalCoreFilter,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+)
+from .virtual_device import VirtualDevice
+
+
+def lcore_filter(
+ core_list: list[LogicalCore],
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool,
+) -> LogicalCoreFilter:
+ if isinstance(filter_specifier, LogicalCoreList):
+ return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+ elif isinstance(filter_specifier, LogicalCoreCount):
+ return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+ else:
+ raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/hw/cpu.py
new file mode 100644
index 0000000000..d1918a12dc
--- /dev/null
+++ b/dts/framework/testbed_model/hw/cpu.py
@@ -0,0 +1,274 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+import dataclasses
+from abc import ABC, abstractmethod
+from collections.abc import Iterable, ValuesView
+from dataclasses import dataclass
+
+from framework.utils import expand_range
+
+
+@dataclass(slots=True, frozen=True)
+class LogicalCore(object):
+ """
+ Representation of a CPU core. A physical core is represented in OS
+ by multiple logical cores (lcores) if CPU multithreading is enabled.
+ """
+
+ lcore: int
+ core: int
+ socket: int
+ node: int
+
+ def __int__(self) -> int:
+ return self.lcore
+
+
+class LogicalCoreList(object):
+ """
+ Convert these options into a list of logical core ids.
+ lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
+ lcore_list=[0,1,2,3] - a list of int indices
+ lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
+ lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
+
+ The class creates a unified format used across the framework and allows
+ the user to use either a str representation (using str(instance) or directly
+ in f-strings) or a list representation (by accessing instance.lcore_list).
+ Empty lcore_list is allowed.
+ """
+
+ _lcore_list: list[int]
+ _lcore_str: str
+
+ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+ self._lcore_list = []
+ if isinstance(lcore_list, str):
+ lcore_list = lcore_list.split(",")
+ for lcore in lcore_list:
+ if isinstance(lcore, str):
+ self._lcore_list.extend(expand_range(lcore))
+ else:
+ self._lcore_list.append(int(lcore))
+
+ # the input lcores may not be sorted
+ self._lcore_list.sort()
+ self._lcore_str = (
+ f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+ )
+
+ @property
+ def lcore_list(self) -> list[int]:
+ return self._lcore_list
+
+ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
+ formatted_core_list = []
+ segment = lcore_ids_list[:1]
+ for lcore_id in lcore_ids_list[1:]:
+ if lcore_id - segment[-1] == 1:
+ segment.append(lcore_id)
+ else:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}"
+ if len(segment) > 1
+ else f"{segment[0]}"
+ )
+ current_core_index = lcore_ids_list.index(lcore_id)
+ formatted_core_list.extend(
+ self._get_consecutive_lcores_range(
+ lcore_ids_list[current_core_index:]
+ )
+ )
+ segment.clear()
+ break
+ if len(segment) > 0:
+ formatted_core_list.append(
+ f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+ )
+ return formatted_core_list
+
+ def __str__(self) -> str:
+ return self._lcore_str
+
+
+@dataclasses.dataclass(slots=True, frozen=True)
+class LogicalCoreCount(object):
+ """
+ Define the number of logical cores to use.
+ If sockets is not None, socket_count is ignored.
+ """
+
+ lcores_per_core: int = 1
+ cores_per_socket: int = 2
+ socket_count: int = 1
+ sockets: list[int] | None = None
+
+
+class LogicalCoreFilter(ABC):
+ """
+ Filter according to the input filter specifier. Each filter needs to be
+ implemented in a derived class.
+ This class only implements operations common to all filters, such as sorting
+ the list to be filtered beforehand.
+ """
+
+ _filter_specifier: LogicalCoreCount | LogicalCoreList
+ _lcores_to_filter: list[LogicalCore]
+
+ def __init__(
+ self,
+ lcore_list: list[LogicalCore],
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool = True,
+ ):
+ self._filter_specifier = filter_specifier
+
+ # sorting by core is needed in case hyperthreading is enabled
+ self._lcores_to_filter = sorted(
+ lcore_list, key=lambda x: x.core, reverse=not ascending
+ )
+ self.filter()
+
+ @abstractmethod
+ def filter(self) -> list[LogicalCore]:
+ """
+ Use self._filter_specifier to filter self._lcores_to_filter
+ and return the list of filtered LogicalCores.
+ self._lcores_to_filter is a sorted copy of the original list,
+ so it may be modified.
+ """
+
+
+class LogicalCoreCountFilter(LogicalCoreFilter):
+ """
+ Filter the input list of LogicalCores according to specified rules:
+ Use cores from the specified number of sockets or from the specified socket ids.
+ If sockets is specified, it takes precedence over socket_count.
+ From each of those sockets, use only cores_per_socket cores.
+ And for each core, use lcores_per_core logical cores. Hyperthreading
+ must be enabled for this to take effect.
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+
+ _filter_specifier: LogicalCoreCount
+
+ def filter(self) -> list[LogicalCore]:
+ sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
+ filtered_lcores = []
+ for socket_to_filter in sockets_to_filter:
+ filtered_lcores.extend(self._filter_cores_from_socket(socket_to_filter))
+ return filtered_lcores
+
+ def _filter_sockets(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> ValuesView[list[LogicalCore]]:
+ """
+ Remove all lcores that don't match the specified socket(s).
+ If self._filter_specifier.sockets is not None, keep lcores from those sockets,
+ otherwise keep lcores from the first
+ self._filter_specifier.socket_count sockets.
+ """
+ allowed_sockets: set[int] = set()
+ socket_count = self._filter_specifier.socket_count
+ if self._filter_specifier.sockets:
+ socket_count = len(self._filter_specifier.sockets)
+ allowed_sockets = set(self._filter_specifier.sockets)
+
+ filtered_lcores: dict[int, list[LogicalCore]] = {}
+ for lcore in lcores_to_filter:
+ if not self._filter_specifier.sockets:
+ if len(allowed_sockets) < socket_count:
+ allowed_sockets.add(lcore.socket)
+ if lcore.socket in allowed_sockets:
+ if lcore.socket in filtered_lcores:
+ filtered_lcores[lcore.socket].append(lcore)
+ else:
+ filtered_lcores[lcore.socket] = [lcore]
+
+ if len(allowed_sockets) < socket_count:
+ raise ValueError(
+ f"The actual number of sockets from which to use cores "
+ f"({len(allowed_sockets)}) is lower than required ({socket_count})."
+ )
+
+ return filtered_lcores.values()
+
+ def _filter_cores_from_socket(
+ self, lcores_to_filter: Iterable[LogicalCore]
+ ) -> list[LogicalCore]:
+ """
+ Keep only the first self._filter_specifier.cores_per_socket cores.
+ In multithreaded environments, keep only
+ the first self._filter_specifier.lcores_per_core lcores of those cores.
+ """
+
+ # no need to use an ordered dict: since Python 3.7, dicts preserve
+ # insertion order.
+ lcore_count_per_core_map: dict[int, int] = {}
+ filtered_lcores = []
+ for lcore in lcores_to_filter:
+ if lcore.core in lcore_count_per_core_map:
+ current_core_lcore_count = lcore_count_per_core_map[lcore.core]
+ if self._filter_specifier.lcores_per_core > current_core_lcore_count:
+ # only add lcores of the given core
+ lcore_count_per_core_map[lcore.core] += 1
+ filtered_lcores.append(lcore)
+ else:
+ # we have enough lcores per this core
+ continue
+ elif self._filter_specifier.cores_per_socket > len(
+ lcore_count_per_core_map
+ ):
+ # only add cores if we need more
+ lcore_count_per_core_map[lcore.core] = 1
+ filtered_lcores.append(lcore)
+ else:
+ # we have enough cores
+ break
+
+ cores_per_socket = len(lcore_count_per_core_map)
+ if cores_per_socket < self._filter_specifier.cores_per_socket:
+ raise ValueError(
+ f"The actual number of cores per socket ({cores_per_socket}) "
+ f"is lower than required ({self._filter_specifier.cores_per_socket})."
+ )
+
+ lcores_per_core = lcore_count_per_core_map[filtered_lcores[-1].core]
+ if lcores_per_core < self._filter_specifier.lcores_per_core:
+ raise ValueError(
+ f"The actual number of logical cores per core ({lcores_per_core}) "
+ f"is lower than required ({self._filter_specifier.lcores_per_core})."
+ )
+
+ return filtered_lcores
+
+
+class LogicalCoreListFilter(LogicalCoreFilter):
+ """
+ Filter the input list of Logical Cores according to the input list of
+ lcore indices.
+ An empty LogicalCoreList won't filter anything.
+ """
+
+ _filter_specifier: LogicalCoreList
+
+ def filter(self) -> list[LogicalCore]:
+ if not len(self._filter_specifier.lcore_list):
+ return self._lcores_to_filter
+
+ filtered_lcores = []
+ for core in self._lcores_to_filter:
+ if core.lcore in self._filter_specifier.lcore_list:
+ filtered_lcores.append(core)
+
+ if len(filtered_lcores) != len(self._filter_specifier.lcore_list):
+ raise ValueError(
+ f"Not all logical cores from {self._filter_specifier.lcore_list} "
+ f"were found among {self._lcores_to_filter}"
+ )
+
+ return filtered_lcores
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/hw/virtual_device.py
new file mode 100644
index 0000000000..eb664d9f17
--- /dev/null
+++ b/dts/framework/testbed_model/hw/virtual_device.py
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+
+class VirtualDevice(object):
+ """
+ Base class for virtual devices used by DPDK.
+ """
+
+ name: str
+
+ def __init__(self, name: str):
+ self.name = name
+
+ def __str__(self) -> str:
+ return self.name
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index a7059b5856..f63b755801 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -18,6 +18,14 @@
from framework.remote_session import OSSession, create_session
from framework.settings import SETTINGS
+from .hw import (
+ LogicalCore,
+ LogicalCoreCount,
+ LogicalCoreList,
+ LogicalCoreListFilter,
+ lcore_filter,
+)
+
class Node(object):
"""
@@ -29,6 +37,7 @@ class Node(object):
main_session: OSSession
config: NodeConfiguration
name: str
+ lcores: list[LogicalCore]
_logger: DTSLOG
_other_sessions: list[OSSession]
@@ -38,6 +47,12 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._get_remote_cpus()
+ # filter the node lcores according to user config
+ self.lcores = LogicalCoreListFilter(
+ self.lcores, LogicalCoreList(self.config.lcores)
+ ).filter()
+
self._other_sessions = []
self._logger.info(f"Created node: {self.name}")
@@ -111,6 +126,34 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
+ def filter_lcores(
+ self,
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool = True,
+ ) -> list[LogicalCore]:
+ """
+ Filter the LogicalCores found on the Node according to
+ a LogicalCoreCount or a LogicalCoreList.
+
+ If ascending is True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the highest
+ id and continue in descending order. This ordering affects which
+ sockets to consider first as well.
+ """
+ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
+ return lcore_filter(
+ self.lcores,
+ filter_specifier,
+ ascending,
+ ).filter()
+
+ def _get_remote_cpus(self) -> None:
+ """
+ Scan CPUs in the remote OS and store a list of LogicalCores.
+ """
+ self._logger.info("Getting CPU information.")
+ self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 21da33d6b3..3672f5f6e5 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -4,12 +4,15 @@
import os
import tarfile
+import time
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.remote_session import OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, MesonArgs
+from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
from .node import Node
@@ -21,21 +24,29 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ _dpdk_prefix_list: list[str]
+ _dpdk_timestamp: str
_build_target_config: BuildTargetConfiguration | None
_env_vars: EnvVarsDict
_remote_tmp_dir: PurePath
__remote_dpdk_dir: PurePath | None
_dpdk_version: str | None
_app_compile_timeout: float
+ _dpdk_kill_session: OSSession | None
def __init__(self, node_config: NodeConfiguration):
super(SutNode, self).__init__(node_config)
+ self._dpdk_prefix_list = []
self._build_target_config = None
self._env_vars = EnvVarsDict()
self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
self.__remote_dpdk_dir = None
self._dpdk_version = None
self._app_compile_timeout = 90
+ self._dpdk_kill_session = None
+ self._dpdk_timestamp = (
+ f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
+ )
@property
def _remote_dpdk_dir(self) -> PurePath:
@@ -169,3 +180,120 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
return self.main_session.join_remote_path(
self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
)
+
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """
+ Kill all DPDK applications on the SUT and clean up hugepages.
+ """
+ if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._dpdk_kill_session = self.create_session("dpdk_kill")
+ self._dpdk_prefix_list = []
+
+ def create_eal_parameters(
+ self,
+ lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
+ ascending_cores: bool = True,
+ prefix: str = "dpdk",
+ append_prefix_timestamp: bool = True,
+ no_pci: bool = False,
+ vdevs: list[VirtualDevice] | None = None,
+ other_eal_param: str = "",
+ ) -> "EalParameters":
+ """
+ Generate an EAL parameters string.
+ :param lcore_filter_specifier: a number of lcores/cores/sockets to use
+ or a list of lcore ids to use.
+ The default will select one lcore for each of two cores
+ on one socket, in ascending order of core ids.
+ :param ascending_cores: if True, use cores with the lowest numerical id first
+ and continue in ascending order. If False, start with the
+ highest id and continue in descending order. This ordering
+ affects which sockets to consider first as well.
+ :param prefix: set the file prefix string, e.g.:
+ prefix='vf'
+ :param append_prefix_timestamp: if True, will append a timestamp to
+ DPDK file prefix.
+ :param no_pci: switch to disable the PCI bus, e.g.:
+ no_pci=True
+ :param vdevs: virtual device list, e.g.:
+ vdevs=[
+ VirtualDevice('net_ring0'),
+ VirtualDevice('net_ring1')
+ ]
+ :param other_eal_param: user-defined DPDK EAL parameters, e.g.:
+ other_eal_param='--single-file-segments'
+ :return: the EAL parameter string, e.g.:
+ '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
+ """
+
+ lcore_list = LogicalCoreList(
+ self.filter_lcores(lcore_filter_specifier, ascending_cores)
+ )
+
+ if append_prefix_timestamp:
+ prefix = f"{prefix}_{self._dpdk_timestamp}"
+ prefix = self.main_session.get_dpdk_file_prefix(prefix)
+ if prefix:
+ self._dpdk_prefix_list.append(prefix)
+
+ if vdevs is None:
+ vdevs = []
+
+ return EalParameters(
+ lcore_list=lcore_list,
+ memory_channels=self.config.memory_channels,
+ prefix=prefix,
+ no_pci=no_pci,
+ vdevs=vdevs,
+ other_eal_param=other_eal_param,
+ )
+
+
+class EalParameters(object):
+ def __init__(
+ self,
+ lcore_list: LogicalCoreList,
+ memory_channels: int,
+ prefix: str,
+ no_pci: bool,
+ vdevs: list[VirtualDevice],
+ other_eal_param: str,
+ ):
+ """
+ Generate an EAL parameters string.
+ :param lcore_list: the list of logical cores to use.
+ :param memory_channels: the number of memory channels to use.
+ :param prefix: set the file prefix string, e.g.:
+ prefix='vf'
+ :param no_pci: switch to disable the PCI bus, e.g.:
+ no_pci=True
+ :param vdevs: virtual device list, eg:
+ vdevs=[
+ VirtualDevice('net_ring0'),
+ VirtualDevice('net_ring1')
+ ]
+ :param other_eal_param: user defined DPDK eal parameters, eg:
+ other_eal_param='--single-file-segments'
+ """
+ self._lcore_list = f"-l {lcore_list}"
+ self._memory_channels = f"-n {memory_channels}"
+ self._prefix = prefix
+ if prefix:
+ self._prefix = f"--file-prefix={prefix}"
+ self._no_pci = "--no-pci" if no_pci else ""
+ self._vdevs = " ".join(f"--vdev {vdev}" for vdev in vdevs)
+ self._other_eal_param = other_eal_param
+
+ def __str__(self) -> str:
+ return (
+ f"{self._lcore_list} "
+ f"{self._memory_channels} "
+ f"{self._prefix} "
+ f"{self._no_pci} "
+ f"{self._vdevs} "
+ f"{self._other_eal_param}"
+ )
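For reference, a sketch of the string __str__ renders (editor's example, not part of the patch; the values are invented, and empty optional fields leave extra spaces in the output):

    params = EalParameters(
        lcore_list=lcore_list,  # a LogicalCoreList assumed to render as '0,1'
        memory_channels=4,
        prefix="dpdk_1112_20190809143420",
        no_pci=False,
        vdevs=[],
        other_eal_param="",
    )
    str(params)  # '-l 0,1 -n 4 --file-prefix=dpdk_1112_20190809143420   '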
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 0ed591ac23..55e0b0ef0e 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -22,6 +22,26 @@ def check_dts_python_version() -> None:
print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+def expand_range(range_str: str) -> list[int]:
+ """
+ Process a range string into a list of integers. There are two possible formats:
+ n - a single integer
+ n-m - a range of integers
+
+ The returned range includes both n and m. Empty string returns an empty list.
+ """
+ expanded_range: list[int] = []
+ if range_str:
+ range_boundaries = range_str.split("-")
+ # will throw an exception when items in range_boundaries can't be converted,
+ # serving as type check
+ expanded_range.extend(
+ range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+ )
+
+ return expanded_range
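# Editor's sketch of the expected behavior (not part of the patch):
#     expand_range("3")    -> [3]
#     expand_range("0-2")  -> [0, 1, 2]
#     expand_range("")     -> []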
+
+
def GREEN(text: str) -> str:
return f"\u001B[32;1m{str(text)}\u001B[0m"
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 05/10] dts: add node memory setup
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (3 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 04/10] dts: add dpdk execution handling Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 06/10] dts: add test suite module Juraj Linkeš
` (5 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Set up hugepages on nodes. This is useful not only on SUT nodes, but
also on TG nodes which use TGs that utilize hugepages.
The setup is opt-in, i.e. users need to supply hugepage configuration to
instruct DTS to configure them. If not configured, hugepage
configuration will be skipped. This is helpful if users don't want DTS
to tamper with hugepages on their system.
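To illustrate the opt-in flow against the from_dict code below (editor's sketch; the node dict is abbreviated and its values are hypothetical):

    node_dict = {
        "name": "SUT 1",
        # ... the other required node keys are elided here ...
        "hugepages": {"amount": 256},  # omit this key to skip hugepage setup
    }
    config = NodeConfiguration.from_dict(node_dict)
    # config.hugepages == HugepageConfiguration(amount=256, force_first_numa=False);
    # without the "hugepages" key, config.hugepages is None and setup is skipped.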
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 3 +
dts/framework/config/__init__.py | 14 ++++
dts/framework/config/conf_yaml_schema.json | 21 +++++
dts/framework/remote_session/linux_session.py | 78 +++++++++++++++++++
dts/framework/remote_session/os_session.py | 8 ++
dts/framework/testbed_model/node.py | 12 +++
6 files changed, 136 insertions(+)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 1648e5c3c5..6540a45ef7 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -18,3 +18,6 @@ nodes:
lcores: ""
use_first_core: false
memory_channels: 4
+ hugepages: # optional; if removed, will use system hugepage configuration
+ amount: 256
+ force_first_numa: false
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 17b917f3b3..0e5f493c5d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -66,6 +66,12 @@ class Compiler(StrEnum):
#
# Frozen makes the object immutable. This enables further optimizations,
# and makes it thread safe should we ever want to move in that direction.
+@dataclass(slots=True, frozen=True)
+class HugepageConfiguration:
+ amount: int
+ force_first_numa: bool
+
+
@dataclass(slots=True, frozen=True)
class NodeConfiguration:
name: str
@@ -77,9 +83,16 @@ class NodeConfiguration:
lcores: str
use_first_core: bool
memory_channels: int
+ hugepages: HugepageConfiguration | None
@staticmethod
def from_dict(d: dict) -> "NodeConfiguration":
+ hugepage_config = d.get("hugepages")
+ if hugepage_config:
+ if "force_first_numa" not in hugepage_config:
+ hugepage_config["force_first_numa"] = False
+ hugepage_config = HugepageConfiguration(**hugepage_config)
+
return NodeConfiguration(
name=d["name"],
hostname=d["hostname"],
@@ -90,6 +103,7 @@ def from_dict(d: dict) -> "NodeConfiguration":
lcores=d.get("lcores", "1"),
use_first_core=d.get("use_first_core", False),
memory_channels=d.get("memory_channels", 1),
+ hugepages=hugepage_config,
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 334b4bd8ab..56f93def36 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -75,6 +75,24 @@
"cpu",
"compiler"
]
+ },
+ "hugepages": {
+ "type": "object",
+ "description": "Optional hugepage configuration. If not specified, hugepages won't be configured and DTS will use system configuration.",
+ "properties": {
+ "amount": {
+ "type": "integer",
+ "description": "The amount of hugepages to configure. Hugepage size will be the system default."
+ },
+ "force_first_numa": {
+ "type": "boolean",
+ "description": "Set to True to force configuring hugepages on the first NUMA node. Defaults to False."
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "amount"
+ ]
}
},
"type": "object",
@@ -118,6 +136,9 @@
"memory_channels": {
"type": "integer",
"description": "How many memory channels to use. Optional, defaults to 1."
+ },
+ "hugepages": {
+ "$ref": "#/definitions/hugepages"
}
},
"additionalProperties": false,
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index c49b6bb1d7..a1e3bc3a92 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,7 +2,9 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.utils import expand_range
from .posix_session import PosixSession
@@ -27,3 +29,79 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
return dpdk_prefix
+
+ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+ self._logger.info("Getting Hugepage information.")
+ hugepage_size = self._get_hugepage_size()
+ hugepages_total = self._get_hugepages_total()
+ self._numa_nodes = self._get_numa_nodes()
+
+ if force_first_numa or hugepages_total != hugepage_amount:
+ # when forcing numa, we need to clear existing hugepages regardless
+ # of size, so they can be moved to the first numa node
+ self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+ else:
+ self._logger.info("Hugepages already configured.")
+ self._mount_huge_pages()
+
+ def _get_hugepage_size(self) -> int:
+ hugepage_size = self.remote_session.send_command(
+ "awk '/Hugepagesize/ {print $2}' /proc/meminfo"
+ ).stdout
+ return int(hugepage_size)
+
+ def _get_hugepages_total(self) -> int:
+ hugepages_total = self.remote_session.send_command(
+ "awk '/HugePages_Total/ { print $2 }' /proc/meminfo"
+ ).stdout
+ return int(hugepages_total)
+
+ def _get_numa_nodes(self) -> list[int]:
+ try:
+ numa_count = self.remote_session.send_command(
+ "cat /sys/devices/system/node/online", verify=True
+ ).stdout
+ numa_range = expand_range(numa_count)
+ except RemoteCommandExecutionError:
+ # the file doesn't exist, meaning the node doesn't support numa
+ numa_range = []
+ return numa_range
+
+ def _mount_huge_pages(self) -> None:
+ self._logger.info("Re-mounting Hugepages.")
+ hugepage_fs_cmd = "awk '/hugetlbfs/ { print $2 }' /proc/mounts"
+ self.remote_session.send_command(f"umount $({hugepage_fs_cmd})")
+ result = self.remote_session.send_command(hugepage_fs_cmd)
+ if result.stdout == "":
+ remote_mount_path = "/mnt/huge"
+ self.remote_session.send_command(f"mkdir -p {remote_mount_path}")
+ self.remote_session.send_command(
+ f"mount -t hugetlbfs nodev {remote_mount_path}"
+ )
+
+ def _supports_numa(self) -> bool:
+ # consider the system numa-capable only if there is more than one numa node;
+ # a system with a single numa node may technically support numa, but
+ # there's no reason to do any numa-specific configuration on it
+ return len(self._numa_nodes) > 1
+
+ def _configure_huge_pages(
+ self, amount: int, size: int, force_first_numa: bool
+ ) -> None:
+ self._logger.info("Configuring Hugepages.")
+ hugepage_config_path = (
+ f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+ )
+ if force_first_numa and self._supports_numa():
+ # clear non-numa hugepages
+ self.remote_session.send_command(
+ f"echo 0 | sudo tee {hugepage_config_path}"
+ )
+ hugepage_config_path = (
+ f"/sys/devices/system/node/node{self._numa_nodes[0]}/hugepages"
+ f"/hugepages-{size}kB/nr_hugepages"
+ )
+
+ self.remote_session.send_command(
+ f"echo {amount} | sudo tee {hugepage_config_path}"
+ )
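For concreteness, the method above boils down to sysfs writes of this shape (editor's sketch; the 2048 kB page size and node0 are illustrative):

    # default path:
    #   echo 256 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # with force_first_numa on a NUMA system, after zeroing the global count:
    #   echo 256 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages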
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 0a42f40a86..048bf7178e 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -151,3 +151,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
"""
Get the DPDK file prefix that will be used when running DPDK apps.
"""
+
+ @abstractmethod
+ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+ """
+ Get the node's Hugepage Size, configure the specified amount of hugepages
+ if needed and mount the hugepages if needed.
+ If force_first_numa is True, configure hugepages just on the first socket.
+ """
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index f63b755801..d48fafe65d 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -62,6 +62,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
Perform the execution setup that will be done for each execution
this node is part of.
"""
+ self._setup_hugepages()
self._set_up_execution(execution_config)
def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
@@ -154,6 +155,17 @@ def _get_remote_cpus(self) -> None:
self._logger.info("Getting CPU information.")
self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
+ def _setup_hugepages(self):
+ """
+ Setup hugepages on the Node. Different architectures can supply different
+ amounts of memory for hugepages and numa-based hugepage allocation may need
+ to be considered.
+ """
+ if self.config.hugepages:
+ self.main_session.setup_hugepages(
+ self.config.hugepages.amount, self.config.hugepages.force_first_numa
+ )
+
def close(self) -> None:
"""
Close all connections and free other resources.
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 06/10] dts: add test suite module
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (4 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 05/10] dts: add node memory setup Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 07/10] dts: add hello world testsuite Juraj Linkeš
` (4 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The module implements the base class that all test suites inherit from.
It implements methods common to all test suites.
The derived test suites implement test cases and any particular setup
needed for the suite or tests.
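For orientation, a derived suite might look roughly like this (editor's sketch; the suite and case names are hypothetical, and the real hello world suite follows later in the series):

    from framework.test_suite import TestSuite

    class TestExample(TestSuite):
        def set_up_suite(self) -> None:
            ...  # one-time fixtures shared by all test cases

        def test_basic(self) -> None:
            # a functional test case: the name starts with test_ but not test_perf_
            self.verify(1 + 1 == 2, "arithmetic is broken")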
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 +
dts/framework/config/__init__.py | 4 +
dts/framework/config/conf_yaml_schema.json | 10 +
dts/framework/exception.py | 16 ++
dts/framework/settings.py | 24 +++
dts/framework/test_suite.py | 228 +++++++++++++++++++++
6 files changed, 284 insertions(+)
create mode 100644 dts/framework/test_suite.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 6540a45ef7..75e33e8ccf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -8,6 +8,8 @@ executions:
cpu: native
compiler: gcc
compiler_wrapper: ccache
+ perf: false
+ func: true
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 0e5f493c5d..544fceca6a 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -131,6 +131,8 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
+ perf: bool
+ func: bool
system_under_test: NodeConfiguration
@staticmethod
@@ -143,6 +145,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
return ExecutionConfiguration(
build_targets=build_targets,
+ perf=d["perf"],
+ func=d["func"],
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 56f93def36..878ca3aec2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -164,6 +164,14 @@
},
"minimum": 1
},
+ "perf": {
+ "type": "boolean",
+ "description": "Enable performance testing."
+ },
+ "func": {
+ "type": "boolean",
+ "description": "Enable functional testing."
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -171,6 +179,8 @@
"additionalProperties": false,
"required": [
"build_targets",
+ "perf",
+ "func",
"system_under_test"
]
},
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b4545a5a40..ca353d98fc 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -24,6 +24,7 @@ class ErrorSeverity(IntEnum):
REMOTE_CMD_EXEC_ERR = 3
SSH_ERR = 4
DPDK_BUILD_ERR = 10
+ TESTCASE_VERIFY_ERR = 20
class DTSError(Exception):
@@ -128,3 +129,18 @@ class DPDKBuildError(DTSError):
"""
severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
+
+
+class TestCaseVerifyError(DTSError):
+ """
+ Used in test cases to verify the expected behavior.
+ """
+
+ value: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
+
+ def __init__(self, value: str):
+ self.value = value
+
+ def __str__(self) -> str:
+ return repr(self.value)
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index f787187ade..4ccc98537d 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -66,6 +66,8 @@ class _Settings:
skip_setup: bool
dpdk_tarball_path: Path
compile_timeout: float
+ test_cases: list
+ re_run: int
def _get_parser() -> argparse.ArgumentParser:
@@ -137,6 +139,26 @@ def _get_parser() -> argparse.ArgumentParser:
help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
)
+ parser.add_argument(
+ "--test-cases",
+ action=_env_arg("DTS_TESTCASES"),
+ default="",
+ required=False,
+ help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+ "Unknown test cases will be silently ignored.",
+ )
+
+ parser.add_argument(
+ "--re-run",
+ "--re_run",
+ action=_env_arg("DTS_RERUN"),
+ default=0,
+ type=int,
+ required=False,
+ help="[DTS_RERUN] Re-run each test case the specified amount of times "
+ "if a test failure occurs",
+ )
+
return parser
@@ -156,6 +178,8 @@ def _get_settings() -> _Settings:
skip_setup=(parsed_args.skip_setup == "Y"),
dpdk_tarball_path=parsed_args.tarball,
compile_timeout=parsed_args.compile_timeout,
+ test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+ re_run=parsed_args.re_run,
)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
new file mode 100644
index 0000000000..9002d43297
--- /dev/null
+++ b/dts/framework/test_suite.py
@@ -0,0 +1,228 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Base class for creating DTS test cases.
+"""
+
+import inspect
+import re
+from collections.abc import MutableSequence
+from types import MethodType
+
+from .exception import SSHTimeoutError, TestCaseVerifyError
+from .logger import DTSLOG, getLogger
+from .settings import SETTINGS
+from .testbed_model import SutNode
+
+
+class TestSuite(object):
+ """
+ The base TestSuite class provides methods for handling basic flow of a test suite:
+ * test case filtering and collection
+ * test suite setup/cleanup
+ * test setup/cleanup
+ * test case execution
+ * error handling and results storage
+ Test cases are implemented by derived classes. Test cases are all methods
+ starting with test_, further divided into performance test cases
+ (starting with test_perf_) and functional test cases (all other test cases).
+ By default, all test cases will be executed. A list of test case names
+ may be specified in conf.yaml or on the command line
+ to filter which test cases to run.
+ The methods named [set_up|tear_down]_[suite|test_case] should be overridden
+ in derived classes if the appropriate suite/test case fixtures are needed.
+ """
+
+ sut_node: SutNode
+ _logger: DTSLOG
+ _test_cases_to_run: list[str]
+ _func: bool
+ _errors: MutableSequence[Exception]
+
+ def __init__(
+ self,
+ sut_node: SutNode,
+ test_cases: list[str],
+ func: bool,
+ errors: MutableSequence[Exception],
+ ):
+ self.sut_node = sut_node
+ self._logger = getLogger(self.__class__.__name__)
+ self._test_cases_to_run = test_cases
+ self._test_cases_to_run.extend(SETTINGS.test_cases)
+ self._func = func
+ self._errors = errors
+
+ def set_up_suite(self) -> None:
+ """
+ Set up test fixtures common to all test cases; this is done before
+ any test case is run.
+ """
+
+ def tear_down_suite(self) -> None:
+ """
+ Tear down the previously created test fixtures common to all test cases.
+ """
+
+ def set_up_test_case(self) -> None:
+ """
+ Set up test fixtures before each test case.
+ """
+
+ def tear_down_test_case(self) -> None:
+ """
+ Tear down the previously created test fixtures after each test case.
+ """
+
+ def verify(self, condition: bool, failure_description: str) -> None:
+ if not condition:
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on SUT:"
+ )
+ for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ raise TestCaseVerifyError(failure_description)
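# Editor's usage sketch (not part of the patch): a test case calls, e.g.,
#     self.verify("hello from core 0" in result.stdout, "expected output missing")
# and a False condition logs the last 10 commands run on the SUT, then raises
# TestCaseVerifyError, which the runner records as a test failure.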
+
+ def run(self) -> None:
+ """
+ Setup, execute and teardown the whole suite.
+ Suite execution consists of running all test cases scheduled to be executed.
+ A test case run consists of the setup, execution and teardown of said test case.
+ """
+ test_suite_name = self.__class__.__name__
+
+ try:
+ self._logger.info(f"Starting test suite setup: {test_suite_name}")
+ self.set_up_suite()
+ self._logger.info(f"Test suite setup successful: {test_suite_name}")
+ except Exception as e:
+ self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
+ self._errors.append(e)
+
+ else:
+ self._execute_test_suite()
+
+ finally:
+ try:
+ self.tear_down_suite()
+ self.sut_node.kill_cleanup_dpdk_apps()
+ except Exception as e:
+ self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
+ self._logger.warning(
+ f"Test suite '{test_suite_name}' teardown failed, "
+ f"the next test suite may be affected."
+ )
+ self._errors.append(e)
+
+ def _execute_test_suite(self) -> None:
+ """
+ Execute all test cases scheduled to be executed in this suite.
+ """
+ if self._func:
+ for test_case_method in self._get_functional_test_cases():
+ all_attempts = SETTINGS.re_run + 1
+ attempt_nr = 1
+ while (
+ not self._run_test_case(test_case_method)
+ and attempt_nr < all_attempts
+ ):
+ attempt_nr += 1
+ self._logger.info(
+ f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Attempt number {attempt_nr} out of {all_attempts}."
+ )
+
+ def _get_functional_test_cases(self) -> list[MethodType]:
+ """
+ Get all functional test cases.
+ """
+ return self._get_test_cases(r"test_(?!perf_)")
+
+ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
+ """
+ Return a list of test cases matching test_case_regex.
+ """
+ self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
+ filtered_test_cases = []
+ for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod):
+ if self._should_be_executed(test_case_name, test_case_regex):
+ filtered_test_cases.append(test_case)
+ cases_str = ", ".join((x.__name__ for x in filtered_test_cases))
+ self._logger.debug(
+ f"Found test cases '{cases_str}' in {self.__class__.__name__}."
+ )
+ return filtered_test_cases
+
+ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
+ """
+ Check whether the test case should be executed.
+ """
+ match = bool(re.match(test_case_regex, test_case_name))
+ if self._test_cases_to_run:
+ return match and test_case_name in self._test_cases_to_run
+
+ return match
+
+ def _run_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Setup, execute and teardown a test case in this suite.
+ Exceptions are caught and recorded in logs.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+
+ try:
+ # run set_up function for each case
+ self.set_up_test_case()
+ except SSHTimeoutError as e:
+ self._logger.exception(f"Test case setup FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case setup ERROR: {test_case_name}")
+ self._errors.append(e)
+
+ else:
+ # run test case if setup was successful
+ result = self._execute_test_case(test_case_method)
+
+ finally:
+ try:
+ self.tear_down_test_case()
+ except Exception as e:
+ self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
+ self._logger.warning(
+ f"Test case '{test_case_name}' teardown failed, "
+ f"the next test case may be affected."
+ )
+ self._errors.append(e)
+ result = False
+
+ return result
+
+ def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ """
+ Execute one test case and handle failures.
+ """
+ test_case_name = test_case_method.__name__
+ result = False
+ try:
+ self._logger.info(f"Starting test case execution: {test_case_name}")
+ test_case_method()
+ result = True
+ self._logger.info(f"Test case execution PASSED: {test_case_name}")
+
+ except TestCaseVerifyError as e:
+ self._logger.exception(f"Test case execution FAILED: {test_case_name}")
+ self._errors.append(e)
+ except Exception as e:
+ self._logger.exception(f"Test case execution ERROR: {test_case_name}")
+ self._errors.append(e)
+ except KeyboardInterrupt:
+ self._logger.error(
+ f"Test case execution INTERRUPTED by user: {test_case_name}"
+ )
+ raise KeyboardInterrupt("Stop DTS")
+
+ return result
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 07/10] dts: add hello world testsuite
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (5 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 06/10] dts: add test suite module Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 08/10] dts: add test suite config and runner Juraj Linkeš
` (3 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The test suite implements test cases defined in the corresponding test
plan.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 2 +-
dts/framework/remote_session/os_session.py | 16 ++++-
.../remote_session/remote/__init__.py | 2 +-
dts/framework/testbed_model/__init__.py | 1 +
dts/framework/testbed_model/sut_node.py | 12 +++-
dts/tests/TestSuite_hello_world.py | 64 +++++++++++++++++++
6 files changed, 93 insertions(+), 4 deletions(-)
create mode 100644 dts/tests/TestSuite_hello_world.py
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 747316c78a..ee221503df 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,7 @@
from .linux_session import LinuxSession
from .os_session import OSSession
-from .remote import RemoteSession, SSHSession
+from .remote import CommandResult, RemoteSession, SSHSession
def create_session(
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 048bf7178e..4c48ae2567 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,7 +12,7 @@
from framework.testbed_model import LogicalCore
from framework.utils import EnvVarsDict, MesonArgs
-from .remote import RemoteSession, create_remote_session
+from .remote import CommandResult, RemoteSession, create_remote_session
class OSSession(ABC):
@@ -50,6 +50,20 @@ def is_alive(self) -> bool:
"""
return self.remote_session.is_alive()
+ def send_command(
+ self,
+ command: str,
+ timeout: float,
+ verify: bool = False,
+ env: EnvVarsDict | None = None,
+ ) -> CommandResult:
+ """
+ An all-purpose API in case the command to be executed is already
+ OS-agnostic, such as when the path to the executed command has been
+ constructed beforehand.
+ """
+ return self.remote_session.send_command(command, timeout, verify, env)
+
@abstractmethod
def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
"""
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index f3092f8bbe..8a1512210a 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -6,7 +6,7 @@
from framework.config import NodeConfiguration
from framework.logger import DTSLOG
-from .remote_session import RemoteSession
+from .remote_session import CommandResult, RemoteSession
from .ssh_session import SSHSession
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5be3e4c48d..f54a947051 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -12,6 +12,7 @@
from .hw import (
LogicalCore,
LogicalCoreCount,
+ LogicalCoreCountFilter,
LogicalCoreList,
LogicalCoreListFilter,
VirtualDevice,
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 3672f5f6e5..2b2b50d982 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -8,7 +8,7 @@
from pathlib import PurePath
from framework.config import BuildTargetConfiguration, NodeConfiguration
-from framework.remote_session import OSSession
+from framework.remote_session import CommandResult, OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, MesonArgs
@@ -252,6 +252,16 @@ def create_eal_parameters(
other_eal_param=other_eal_param,
)
+ def run_dpdk_app(
+ self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
+ ) -> CommandResult:
+ """
+ Run DPDK application on the remote node.
+ """
+ return self.main_session.send_command(
+ f"{app_path} {eal_args}", timeout, verify=True
+ )
+
class EalParameters(object):
def __init__(
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
new file mode 100644
index 0000000000..7e3d95c0cf
--- /dev/null
+++ b/dts/tests/TestSuite_hello_world.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+
+"""
+Run the helloworld example app and verify it prints a message for each used core.
+No other EAL parameters apart from cores are used.
+"""
+
+from framework.test_suite import TestSuite
+from framework.testbed_model import (
+ LogicalCoreCount,
+ LogicalCoreCountFilter,
+ LogicalCoreList,
+)
+
+
+class TestHelloWorld(TestSuite):
+ def set_up_suite(self) -> None:
+ """
+ Setup:
+ Build the app we're about to test - helloworld.
+ """
+ self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
+
+ def test_hello_world_single_core(self) -> None:
+ """
+ Steps:
+ Run the helloworld app on the first usable logical core.
+ Verify:
+ The app prints a message from the used core:
+ "hello from core <core_id>"
+ """
+
+ # get the first usable core
+ lcore_amount = LogicalCoreCount(1, 1, 1)
+ lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=lcore_amount
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para)
+ self.verify(
+ f"hello from core {int(lcores[0])}" in result.stdout,
+ f"helloworld didn't start on lcore{lcores[0]}",
+ )
+
+ def test_hello_world_all_cores(self) -> None:
+ """
+ Steps:
+ Run the helloworld app on all usable logical cores.
+ Verify:
+ The app prints a message from all used cores:
+ "hello from core <core_id>"
+ """
+
+ # run the app on all usable logical cores
+ eal_para = self.sut_node.create_eal_parameters(
+ lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
+ )
+ result = self.sut_node.run_dpdk_app(self.app_helloworld_path, eal_para, 50)
+ for lcore in self.sut_node.lcores:
+ self.verify(
+ f"hello from core {int(lcore)}" in result.stdout,
+ f"helloworld didn't start on lcore{lcore}",
+ )
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 08/10] dts: add test suite config and runner
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (6 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 07/10] dts: add hello world testsuite Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 09/10] dts: add test results module Juraj Linkeš
` (2 subsequent siblings)
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The config allows users to specify which test suites and test cases
within test suites to run.
Also add test suite running capabilities to dts runner.
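The two per-suite forms accepted in conf.yaml map onto TestSuiteConfig.from_dict below; roughly (editor's sketch):

    TestSuiteConfig.from_dict("hello_world")
    # -> TestSuiteConfig(test_suite="hello_world", test_cases=[])

    TestSuiteConfig.from_dict(
        {"suite": "hello_world", "cases": ["test_hello_world_single_core"]}
    )
    # -> TestSuiteConfig(test_suite="hello_world",
    #                    test_cases=["test_hello_world_single_core"])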
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 2 ++
dts/framework/config/__init__.py | 29 +++++++++++++++-
dts/framework/config/conf_yaml_schema.json | 40 ++++++++++++++++++++++
dts/framework/dts.py | 19 ++++++++++
dts/framework/test_suite.py | 24 ++++++++++++-
5 files changed, 112 insertions(+), 2 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 75e33e8ccf..a9bd8a3ecf 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -10,6 +10,8 @@ executions:
compiler_wrapper: ccache
perf: false
func: true
+ test_suites:
+ - hello_world
system_under_test: "SUT 1"
nodes:
- name: "SUT 1"
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 544fceca6a..ebb0823ff5 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any
+from typing import Any, TypedDict
import warlock # type: ignore
import yaml
@@ -128,11 +128,34 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
)
+class TestSuiteConfigDict(TypedDict):
+ suite: str
+ cases: list[str]
+
+
+@dataclass(slots=True, frozen=True)
+class TestSuiteConfig:
+ test_suite: str
+ test_cases: list[str]
+
+ @staticmethod
+ def from_dict(
+ entry: str | TestSuiteConfigDict,
+ ) -> "TestSuiteConfig":
+ if isinstance(entry, str):
+ return TestSuiteConfig(test_suite=entry, test_cases=[])
+ elif isinstance(entry, dict):
+ return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])
+ else:
+ raise TypeError(f"{type(entry)} is not valid for a test suite config.")
+
+
@dataclass(slots=True, frozen=True)
class ExecutionConfiguration:
build_targets: list[BuildTargetConfiguration]
perf: bool
func: bool
+ test_suites: list[TestSuiteConfig]
system_under_test: NodeConfiguration
@staticmethod
@@ -140,6 +163,9 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
+ test_suites: list[TestSuiteConfig] = list(
+ map(TestSuiteConfig.from_dict, d["test_suites"])
+ )
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
@@ -147,6 +173,7 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
build_targets=build_targets,
perf=d["perf"],
func=d["func"],
+ test_suites=test_suites,
system_under_test=node_map[sut_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 878ca3aec2..ca2d4a1ef2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -93,6 +93,32 @@
"required": [
"amount"
]
+ },
+ "test_suite": {
+ "type": "string",
+ "enum": [
+ "hello_world"
+ ]
+ },
+ "test_target": {
+ "type": "object",
+ "properties": {
+ "suite": {
+ "$ref": "#/definitions/test_suite"
+ },
+ "cases": {
+ "type": "array",
+ "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.",
+ "items": {
+ "type": "string"
+ },
+ "minimum": 1
+ }
+ },
+ "required": [
+ "suite"
+ ],
+ "additionalProperties": false
}
},
"type": "object",
@@ -172,6 +198,19 @@
"type": "boolean",
"description": "Enable functional testing."
},
+ "test_suites": {
+ "type": "array",
+ "items": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/test_suite"
+ },
+ {
+ "$ref": "#/definitions/test_target"
+ }
+ ]
+ }
+ },
"system_under_test": {
"$ref": "#/definitions/node_name"
}
@@ -181,6 +220,7 @@
"build_targets",
"perf",
"func",
+ "test_suites",
"system_under_test"
]
},
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 3d4170d10f..9012a499a3 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -8,6 +8,7 @@
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
@@ -132,6 +133,24 @@ def _run_suites(
with possibly only a subset of test cases.
If no subset is specified, run all test cases.
"""
+ for test_suite_config in execution.test_suites:
+ try:
+ full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
+ test_suite_classes = get_test_suites(full_suite_path)
+ suites_str = ", ".join((x.__name__ for x in test_suite_classes))
+ dts_logger.debug(
+ f"Found test suites '{suites_str}' in '{full_suite_path}'."
+ )
+ except Exception as e:
+ dts_logger.exception("An error occurred when searching for test suites.")
+ errors.append(e)
+
+ else:
+ for test_suite_class in test_suite_classes:
+ test_suite = test_suite_class(
+ sut_node, test_suite_config.test_cases, execution.func, errors
+ )
+ test_suite.run()
def _exit_dts() -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 9002d43297..12bf3b6420 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -6,12 +6,13 @@
Base class for creating DTS test cases.
"""
+import importlib
import inspect
import re
from collections.abc import MutableSequence
from types import MethodType
-from .exception import SSHTimeoutError, TestCaseVerifyError
+from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .testbed_model import SutNode
@@ -226,3 +227,24 @@ def _execute_test_case(self, test_case_method: MethodType) -> bool:
raise KeyboardInterrupt("Stop DTS")
return result
+
+
+def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+ def is_test_suite(object) -> bool:
+ try:
+ if issubclass(object, TestSuite) and object is not TestSuite:
+ return True
+ except TypeError:
+ return False
+ return False
+
+ try:
+ testcase_module = importlib.import_module(testsuite_module_path)
+ except ModuleNotFoundError as e:
+ raise ConfigurationError(
+ f"Test suite '{testsuite_module_path}' not found."
+ ) from e
+ return [
+ test_suite_class
+ for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite)
+ ]
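Discovery ties the configuration to modules by naming convention; roughly (editor's sketch, assuming the hello world module from the previous patch is importable):

    # "hello_world" in conf.yaml -> module "tests.TestSuite_hello_world"
    suites = get_test_suites("tests.TestSuite_hello_world")
    # -> [TestHelloWorld]; the runner instantiates and runs each returned class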
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* [PATCH v6 09/10] dts: add test results module
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (7 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 08/10] dts: add test suite config and runner Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 10/10] doc: update dts setup and test suite cookbook Juraj Linkeš
2023-03-19 15:26 ` [PATCH v6 00/10] dts: add hello world test case Thomas Monjalon
10 siblings, 0 replies; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
The module stores the results and errors from all executions, build
targets, test suites and test cases.
Each result consists of the results of the setup and the teardown of the
corresponding testing stage (listed above) and the results of the inner stages.
The innermost stage is the test case, which also contains the result of the
test case itself.
The module also produces a brief overview of the results and the
number of executed tests.
It also finds the proper return code to exit with from among the stored
errors.
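The nesting of the stored objects mirrors the testing stages; roughly (editor's sketch; the configuration arguments are placeholders):

    result = DTSResult(logger)                             # the whole DTS run
    execution = result.add_execution(sut_node_config)      # one execution
    build_target = execution.add_build_target(bt_config)   # one build target
    suite = build_target.add_test_suite("TestHelloWorld")  # one test suite
    case = suite.add_test_case("test_hello_world_single_core")
    case.update(Result.PASS)  # the innermost stage: the test case result itself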
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 64 +++----
dts/framework/settings.py | 2 -
dts/framework/test_result.py | 316 +++++++++++++++++++++++++++++++++++
dts/framework/test_suite.py | 60 +++----
4 files changed, 382 insertions(+), 60 deletions(-)
create mode 100644 dts/framework/test_result.py
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 9012a499a3..0502284580 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,14 +6,14 @@
import sys
from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
-from .exception import DTSError, ErrorSeverity
from .logger import DTSLOG, getLogger
+from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
from .testbed_model import SutNode
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("DTSRunner")
-errors = []
+result: DTSResult = DTSResult(dts_logger)
def run_all() -> None:
@@ -22,7 +22,7 @@ def run_all() -> None:
config file.
"""
global dts_logger
- global errors
+ global result
# check the python version of the server that run dts
check_dts_python_version()
@@ -39,29 +39,31 @@ def run_all() -> None:
# the SUT has not been initialized yet
try:
sut_node = SutNode(execution.system_under_test)
+ result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception(
f"Connection to node {execution.system_under_test} failed."
)
- errors.append(e)
+ result.update_setup(Result.FAIL, e)
else:
nodes[sut_node.name] = sut_node
if sut_node:
- _run_execution(sut_node, execution)
+ _run_execution(sut_node, execution, result)
except Exception as e:
dts_logger.exception("An unexpected error has occurred.")
- errors.append(e)
+ result.add_error(e)
raise
finally:
try:
for node in nodes.values():
node.close()
+ result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Final cleanup of nodes failed.")
- errors.append(e)
+ result.update_teardown(Result.ERROR, e)
# we need to put the sys.exit call outside the finally clause to make sure
# that unexpected exceptions will propagate
@@ -72,61 +74,72 @@ def run_all() -> None:
_exit_dts()
-def _run_execution(sut_node: SutNode, execution: ExecutionConfiguration) -> None:
+def _run_execution(
+ sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+) -> None:
"""
Run the given execution. This involves running the execution setup as well as
running all build targets in the given execution.
"""
dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ execution_result = result.add_execution(sut_node.config)
try:
sut_node.set_up_execution(execution)
+ execution_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Execution setup failed.")
- errors.append(e)
+ execution_result.update_setup(Result.FAIL, e)
else:
for build_target in execution.build_targets:
- _run_build_target(sut_node, build_target, execution)
+ _run_build_target(sut_node, build_target, execution, execution_result)
finally:
try:
sut_node.tear_down_execution()
+ execution_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Execution teardown failed.")
- errors.append(e)
+ execution_result.update_teardown(Result.FAIL, e)
def _run_build_target(
sut_node: SutNode,
build_target: BuildTargetConfiguration,
execution: ExecutionConfiguration,
+ execution_result: ExecutionResult,
) -> None:
"""
Run the given build target.
"""
dts_logger.info(f"Running build target '{build_target.name}'.")
+ build_target_result = execution_result.add_build_target(build_target)
try:
sut_node.set_up_build_target(build_target)
+ result.dpdk_version = sut_node.dpdk_version
+ build_target_result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception("Build target setup failed.")
- errors.append(e)
+ build_target_result.update_setup(Result.FAIL, e)
else:
- _run_suites(sut_node, execution)
+ _run_suites(sut_node, execution, build_target_result)
finally:
try:
sut_node.tear_down_build_target()
+ build_target_result.update_teardown(Result.PASS)
except Exception as e:
dts_logger.exception("Build target teardown failed.")
- errors.append(e)
+ build_target_result.update_teardown(Result.FAIL, e)
def _run_suites(
sut_node: SutNode,
execution: ExecutionConfiguration,
+ build_target_result: BuildTargetResult,
) -> None:
"""
Use the given build_target to run execution's test suites
@@ -143,12 +156,15 @@ def _run_suites(
)
except Exception as e:
dts_logger.exception("An error occurred when searching for test suites.")
- errors.append(e)
+ result.update_setup(Result.ERROR, e)
else:
for test_suite_class in test_suite_classes:
test_suite = test_suite_class(
- sut_node, test_suite_config.test_cases, execution.func, errors
+ sut_node,
+ test_suite_config.test_cases,
+ execution.func,
+ build_target_result,
)
test_suite.run()
@@ -157,20 +173,8 @@ def _exit_dts() -> None:
"""
Process all errors and exit with the proper exit code.
"""
- if errors and dts_logger:
- dts_logger.debug("Summary of errors:")
- for error in errors:
- dts_logger.debug(repr(error))
-
- return_code = ErrorSeverity.NO_ERR
- for error in errors:
- error_return_code = ErrorSeverity.GENERIC_ERR
- if isinstance(error, DTSError):
- error_return_code = error.severity
-
- if error_return_code > return_code:
- return_code = error_return_code
+ result.process()
if dts_logger:
dts_logger.info("DTS execution has ended.")
- sys.exit(return_code)
+ sys.exit(result.get_return_code())
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 4ccc98537d..71955f4581 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -143,7 +143,6 @@ def _get_parser() -> argparse.ArgumentParser:
"--test-cases",
action=_env_arg("DTS_TESTCASES"),
default="",
- required=False,
help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
"Unknown test cases will be silently ignored.",
)
@@ -154,7 +153,6 @@ def _get_parser() -> argparse.ArgumentParser:
action=_env_arg("DTS_RERUN"),
default=0,
type=int,
- required=False,
help="[DTS_RERUN] Re-run each test case the specified amount of times "
"if a test failure occurs",
)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
new file mode 100644
index 0000000000..743919820c
--- /dev/null
+++ b/dts/framework/test_result.py
@@ -0,0 +1,316 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Generic result container and reporters
+"""
+
+import os.path
+from collections.abc import MutableSequence
+from enum import Enum, auto
+
+from .config import (
+ OS,
+ Architecture,
+ BuildTargetConfiguration,
+ Compiler,
+ CPUType,
+ NodeConfiguration,
+)
+from .exception import DTSError, ErrorSeverity
+from .logger import DTSLOG
+from .settings import SETTINGS
+
+
+class Result(Enum):
+ """
+ An Enum defining the possible states that
+ a setup, a teardown or a test case may end up in.
+ """
+
+ PASS = auto()
+ FAIL = auto()
+ ERROR = auto()
+ SKIP = auto()
+
+ def __bool__(self) -> bool:
+ return self is self.PASS
+
+
+class FixtureResult(object):
+ """
+ A record that stores the result of a setup or a teardown.
+ The default is FAIL because immediately after creating the object
+ the setup of the corresponding stage will be executed, which also guarantees
+ the execution of the teardown.
+ """
+
+ result: Result
+ error: Exception | None = None
+
+ def __init__(
+ self,
+ result: Result = Result.FAIL,
+ error: Exception | None = None,
+ ):
+ self.result = result
+ self.error = error
+
+ def __bool__(self) -> bool:
+ return bool(self.result)
+
+
+class Statistics(dict):
+ """
+ A helper class used to store the number of test cases by their result,
+ along with a few other basic pieces of information.
+ Using a dict provides a convenient way to format the data.
+ """
+
+ def __init__(self, dpdk_version):
+ super(Statistics, self).__init__()
+ for result in Result:
+ self[result.name] = 0
+ self["PASS RATE"] = 0.0
+ self["DPDK VERSION"] = dpdk_version
+
+ def __iadd__(self, other: Result) -> "Statistics":
+ """
+ Add a Result to the final count.
+ """
+ self[other.name] += 1
+ self["PASS RATE"] = (
+ float(self[Result.PASS.name])
+ * 100
+ / sum(self[result.name] for result in Result)
+ )
+ return self
+
+ def __str__(self) -> str:
+ """
+ Provide a string representation of the data.
+ """
+ stats_str = ""
+ for key, value in self.items():
+ stats_str += f"{key:<12} = {value}\n"
+ # according to docs, we should use \n when writing to text files
+ # on all platforms
+ return stats_str
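# Editor's sketch of the rendered statistics (values are illustrative):
#     PASS         = 2
#     FAIL         = 0
#     ERROR        = 0
#     SKIP         = 0
#     PASS RATE    = 100.0
#     DPDK VERSION = 23.03.0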
+
+
+class BaseResult(object):
+ """
+ The Base class for all results. Stores the results of
+ the setup and teardown portions of the corresponding stage
+ and a list of results from each inner stage in _inner_results.
+ """
+
+ setup_result: FixtureResult
+ teardown_result: FixtureResult
+ _inner_results: MutableSequence["BaseResult"]
+
+ def __init__(self):
+ self.setup_result = FixtureResult()
+ self.teardown_result = FixtureResult()
+ self._inner_results = []
+
+ def update_setup(self, result: Result, error: Exception | None = None) -> None:
+ self.setup_result.result = result
+ self.setup_result.error = error
+
+ def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+ self.teardown_result.result = result
+ self.teardown_result.error = error
+
+ def _get_setup_teardown_errors(self) -> list[Exception]:
+ errors = []
+ if self.setup_result.error:
+ errors.append(self.setup_result.error)
+ if self.teardown_result.error:
+ errors.append(self.teardown_result.error)
+ return errors
+
+ def _get_inner_errors(self) -> list[Exception]:
+ return [
+ error
+ for inner_result in self._inner_results
+ for error in inner_result.get_errors()
+ ]
+
+ def get_errors(self) -> list[Exception]:
+ return self._get_setup_teardown_errors() + self._get_inner_errors()
+
+ def add_stats(self, statistics: Statistics) -> None:
+ for inner_result in self._inner_results:
+ inner_result.add_stats(statistics)
+
+
+class TestCaseResult(BaseResult, FixtureResult):
+ """
+ The test case specific result.
+ Stores the result of the actual test case.
+ Also stores the test case name.
+ """
+
+ test_case_name: str
+
+ def __init__(self, test_case_name: str):
+ super(TestCaseResult, self).__init__()
+ self.test_case_name = test_case_name
+
+ def update(self, result: Result, error: Exception | None = None) -> None:
+ self.result = result
+ self.error = error
+
+ def _get_inner_errors(self) -> list[Exception]:
+ if self.error:
+ return [self.error]
+ return []
+
+ def add_stats(self, statistics: Statistics) -> None:
+ statistics += self.result
+
+ def __bool__(self) -> bool:
+ return (
+ bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
+ )
+
+
+class TestSuiteResult(BaseResult):
+ """
+ The test suite specific result.
+ The _inner_results list stores results of test cases in a given test suite.
+ Also stores the test suite name.
+ """
+
+ suite_name: str
+
+ def __init__(self, suite_name: str):
+ super(TestSuiteResult, self).__init__()
+ self.suite_name = suite_name
+
+ def add_test_case(self, test_case_name: str) -> TestCaseResult:
+ test_case_result = TestCaseResult(test_case_name)
+ self._inner_results.append(test_case_result)
+ return test_case_result
+
+
+class BuildTargetResult(BaseResult):
+ """
+ The build target specific result.
+ The _inner_results list stores results of test suites in a given build target.
+ Also stores build target specifics, such as compiler used to build DPDK.
+ """
+
+ arch: Architecture
+ os: OS
+ cpu: CPUType
+ compiler: Compiler
+
+ def __init__(self, build_target: BuildTargetConfiguration):
+ super(BuildTargetResult, self).__init__()
+ self.arch = build_target.arch
+ self.os = build_target.os
+ self.cpu = build_target.cpu
+ self.compiler = build_target.compiler
+
+ def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+ test_suite_result = TestSuiteResult(test_suite_name)
+ self._inner_results.append(test_suite_result)
+ return test_suite_result
+
+
+class ExecutionResult(BaseResult):
+ """
+ The execution specific result.
+ The _inner_results list stores results of build targets in a given execution.
+ Also stores the SUT node configuration.
+ """
+
+ sut_node: NodeConfiguration
+
+ def __init__(self, sut_node: NodeConfiguration):
+ super(ExecutionResult, self).__init__()
+ self.sut_node = sut_node
+
+ def add_build_target(
+ self, build_target: BuildTargetConfiguration
+ ) -> BuildTargetResult:
+ build_target_result = BuildTargetResult(build_target)
+ self._inner_results.append(build_target_result)
+ return build_target_result
+
+
+class DTSResult(BaseResult):
+ """
+ Stores environment information and test results from a DTS run, which are:
+ * Execution level information, such as SUT and TG hardware.
+ * Build target level information, such as compiler, target OS and cpu.
+ * Test suite results.
+ * All errors that are caught and recorded during DTS execution.
+
+ The information is stored in nested objects.
+
+ The class is capable of computing the return code used to exit DTS with
+ from the stored errors.
+
+ It also provides a brief statistical summary of passed/failed test cases.
+ """
+
+ dpdk_version: str | None
+ _logger: DTSLOG
+ _errors: list[Exception]
+ _return_code: ErrorSeverity
+ _stats_result: Statistics | None
+ _stats_filename: str
+
+ def __init__(self, logger: DTSLOG):
+ super(DTSResult, self).__init__()
+ self.dpdk_version = None
+ self._logger = logger
+ self._errors = []
+ self._return_code = ErrorSeverity.NO_ERR
+ self._stats_result = None
+ self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
+
+ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+ execution_result = ExecutionResult(sut_node)
+ self._inner_results.append(execution_result)
+ return execution_result
+
+ def add_error(self, error) -> None:
+ self._errors.append(error)
+
+ def process(self) -> None:
+ """
+ Process the data after a DTS run.
+ The data is added to nested objects during runtime and this parent object
+ is not updated at that time. This requires us to process the nested data
+ after it's all been gathered.
+
+ The processing gathers all errors and the result statistics of test cases.
+ """
+ self._errors += self.get_errors()
+ if self._errors and self._logger:
+ self._logger.debug("Summary of errors:")
+ for error in self._errors:
+ self._logger.debug(repr(error))
+
+ self._stats_result = Statistics(self.dpdk_version)
+ self.add_stats(self._stats_result)
+ with open(self._stats_filename, "w+") as stats_file:
+ stats_file.write(str(self._stats_result))
+
+ def get_return_code(self) -> int:
+ """
+ Go through all stored Exceptions and return the highest error code found.
+ """
+ for error in self._errors:
+ error_return_code = ErrorSeverity.GENERIC_ERR
+ if isinstance(error, DTSError):
+ error_return_code = error.severity
+
+ if error_return_code > self._return_code:
+ self._return_code = error_return_code
+
+ return int(self._return_code)
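Since the highest severity wins, one severe error determines the exit code; roughly (editor's sketch; the severity values come from the exception module earlier in the series):

    # with [DPDKBuildError(...), TestCaseVerifyError("...")] stored,
    # get_return_code() returns int(ErrorSeverity.TESTCASE_VERIFY_ERR), i.e. 20,
    # since TESTCASE_VERIFY_ERR (20) outranks DPDK_BUILD_ERR (10)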
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 12bf3b6420..0705f38f98 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -9,12 +9,12 @@
import importlib
import inspect
import re
-from collections.abc import MutableSequence
from types import MethodType
from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
+from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
from .testbed_model import SutNode
@@ -40,21 +40,21 @@ class TestSuite(object):
_logger: DTSLOG
_test_cases_to_run: list[str]
_func: bool
- _errors: MutableSequence[Exception]
+ _result: TestSuiteResult
def __init__(
self,
sut_node: SutNode,
test_cases: list[str],
func: bool,
- errors: MutableSequence[Exception],
+ build_target_result: BuildTargetResult,
):
self.sut_node = sut_node
self._logger = getLogger(self.__class__.__name__)
self._test_cases_to_run = test_cases
self._test_cases_to_run.extend(SETTINGS.test_cases)
self._func = func
- self._errors = errors
+ self._result = build_target_result.add_test_suite(self.__class__.__name__)
def set_up_suite(self) -> None:
"""
@@ -97,10 +97,11 @@ def run(self) -> None:
try:
self._logger.info(f"Starting test suite setup: {test_suite_name}")
self.set_up_suite()
+ self._result.update_setup(Result.PASS)
self._logger.info(f"Test suite setup successful: {test_suite_name}")
except Exception as e:
self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- self._errors.append(e)
+ self._result.update_setup(Result.ERROR, e)
else:
self._execute_test_suite()
@@ -109,13 +110,14 @@ def run(self) -> None:
try:
self.tear_down_suite()
self.sut_node.kill_cleanup_dpdk_apps()
+ self._result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
self._logger.warning(
f"Test suite '{test_suite_name}' teardown failed, "
f"the next test suite may be affected."
)
- self._errors.append(e)
+ self._result.update_teardown(Result.ERROR, e)
def _execute_test_suite(self) -> None:
"""
@@ -123,17 +125,18 @@ def _execute_test_suite(self) -> None:
"""
if self._func:
for test_case_method in self._get_functional_test_cases():
+ test_case_name = test_case_method.__name__
+ test_case_result = self._result.add_test_case(test_case_name)
all_attempts = SETTINGS.re_run + 1
attempt_nr = 1
- while (
- not self._run_test_case(test_case_method)
- and attempt_nr < all_attempts
- ):
+ self._run_test_case(test_case_method, test_case_result)
+ while not test_case_result and attempt_nr < all_attempts:
attempt_nr += 1
self._logger.info(
- f"Re-running FAILED test case '{test_case_method.__name__}'. "
+ f"Re-running FAILED test case '{test_case_name}'. "
f"Attempt number {attempt_nr} out of {all_attempts}."
)
+ self._run_test_case(test_case_method, test_case_result)
def _get_functional_test_cases(self) -> list[MethodType]:
"""
@@ -166,68 +169,69 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
return match
- def _run_test_case(self, test_case_method: MethodType) -> bool:
+ def _run_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Setup, execute and teardown a test case in this suite.
- Exceptions are caught and recorded in logs.
+ Exceptions are caught and recorded in logs and results.
"""
test_case_name = test_case_method.__name__
- result = False
try:
# run set_up function for each case
self.set_up_test_case()
+ test_case_result.update_setup(Result.PASS)
except SSHTimeoutError as e:
self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update_setup(Result.ERROR, e)
else:
# run test case if setup was successful
- result = self._execute_test_case(test_case_method)
+ self._execute_test_case(test_case_method, test_case_result)
finally:
try:
self.tear_down_test_case()
+ test_case_result.update_teardown(Result.PASS)
except Exception as e:
self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
self._logger.warning(
f"Test case '{test_case_name}' teardown failed, "
f"the next test case may be affected."
)
- self._errors.append(e)
- result = False
+ test_case_result.update_teardown(Result.ERROR, e)
+ test_case_result.update(Result.ERROR)
- return result
-
- def _execute_test_case(self, test_case_method: MethodType) -> bool:
+ def _execute_test_case(
+ self, test_case_method: MethodType, test_case_result: TestCaseResult
+ ) -> None:
"""
Execute one test case and handle failures.
"""
test_case_name = test_case_method.__name__
- result = False
try:
self._logger.info(f"Starting test case execution: {test_case_name}")
test_case_method()
- result = True
+ test_case_result.update(Result.PASS)
self._logger.info(f"Test case execution PASSED: {test_case_name}")
except TestCaseVerifyError as e:
self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.FAIL, e)
except Exception as e:
self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- self._errors.append(e)
+ test_case_result.update(Result.ERROR, e)
except KeyboardInterrupt:
self._logger.error(
f"Test case execution INTERRUPTED by user: {test_case_name}"
)
+ test_case_result.update(Result.SKIP)
raise KeyboardInterrupt("Stop DTS")
- return result
-
def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
def is_test_suite(object) -> bool:
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
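A note on the re-run loop in _execute_test_suite above: the condition
"while not test_case_result" only works if a TestCaseResult is truthy
exactly when the test case passed. A minimal sketch of that contract
(an assumption about the test_result module, not code from this patch):

    from enum import Enum, auto

    class Result(Enum):
        PASS = auto()
        FAIL = auto()
        ERROR = auto()
        SKIP = auto()

    class TestCaseResult:
        def __init__(self) -> None:
            self.result: Result | None = None

        def update(self, result: Result, error: Exception | None = None) -> None:
            # Record the overall test case result (and the error, if any).
            self.result = result

        def __bool__(self) -> bool:
            # The re-run loop treats a falsy result as "failed, run again".
            return self.result == Result.PASS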
* [PATCH v6 10/10] doc: update dts setup and test suite cookbook
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (8 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 09/10] dts: add test results module Juraj Linkeš
@ 2023-03-03 10:25 ` Juraj Linkeš
2023-03-09 21:47 ` Patrick Robb
2023-03-19 15:26 ` [PATCH v6 00/10] dts: add hello world test case Thomas Monjalon
10 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-03 10:25 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb
Cc: dev, Juraj Linkeš
Document how to configure and run DTS.
Also add documentation related to new features: SUT setup and a brief
test suite implementation cookbook.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
doc/guides/tools/dts.rst | 165 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 163 insertions(+), 2 deletions(-)
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index daf54359ed..ebd6dceb6a 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2022 PANTHEON.tech s.r.o.
+ Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
DPDK Test Suite
===============
@@ -56,7 +56,7 @@ DTS runtime environment or just plain DTS environment are used interchangeably.
Setting up DTS environment
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
#. **Python Version**
@@ -93,6 +93,167 @@ Setting up DTS environment
poetry install
poetry shell
+#. **SSH Connection**
+
+ DTS uses Python pexpect for SSH connections between the DTS environment and the other hosts.
+ The pexpect implementation is a wrapper around the ssh command in the DTS environment,
+ which means it will use that ssh command and any keys the local SSH agent provides.
+
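+   For illustration, the mechanism is close to this minimal pexpect
+   snippet (the host name and prompt below are assumptions, not the
+   actual DTS code)::
+
+      import pexpect
+
+      # Spawn the local ssh client; authentication is handled by
+      # whatever keys the local SSH agent provides.
+      session = pexpect.spawn("ssh root@sut-host")
+      session.expect("#")  # wait for a root prompt
+      session.sendline("uname -a")
+      session.expect("#")
+      print(session.before.decode())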
+
+Setting up System Under Test
+----------------------------
+
+There are three areas that need to be set up on a System Under Test:
+
+#. **DPDK dependencies**
+
+ DPDK will be built and run on the SUT.
+ Consult the Getting Started guides for the list of dependencies for each distribution.
+
+#. **Hardware dependencies**
+
+ Any hardware DPDK uses needs a proper driver
+ and most OS distributions provide those, but the version may not be satisfactory.
+ It's up to each user to install the driver they're interested in testing.
+ The hardware may also need firmware upgrades, which is likewise left to the user's discretion.
+
+#. **Hugepages**
+
+ There are two ways to configure hugepages:
+
+ * DTS configuration
+
+ You may specify the optional hugepage configuration in the DTS config file.
+ If you do, DTS will take care of configuring hugepages,
+ overwriting your current SUT hugepage configuration.
+
+ * System under test configuration
+
+ It's possible to use the hugepage configuration already present on the SUT.
+ If you wish to do so, don't specify the hugepage configuration in the DTS config file.
+
+
+Running DTS
+-----------
+
+DTS needs to know which nodes to connect to and what hardware to use on those nodes.
+Once that's configured, DTS needs a DPDK tarball and it's ready to run.
+
+Configuring DTS
+~~~~~~~~~~~~~~~
+
+DTS configuration is split into nodes and executions, with build targets within executions.
+By default, DTS will try to use the ``dts/conf.yaml`` config file,
+which is a template that illustrates what can be configured in DTS:
+
+ .. literalinclude:: ../../../dts/conf.yaml
+ :language: yaml
+ :start-at: executions:
+
+
+The user must be root or another user whose prompt starts with ``#``.
+The other fields are mostly self-explanatory
+and documented in more detail in ``dts/framework/config/conf_yaml_schema.json``.
+
+DTS Execution
+~~~~~~~~~~~~~
+
+DTS is run with ``main.py`` located in the ``dts`` directory after entering Poetry shell::
+
+ usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT]
+ [-v VERBOSE] [-s SKIP_SETUP] [--tarball TARBALL]
+ [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES]
+ [--re-run RE_RUN]
+
+ Run DPDK test suites. All options may be specified with the environment variables provided in
+ brackets. Command line arguments have higher priority.
+
+ options:
+ -h, --help show this help message and exit
+ --config-file CONFIG_FILE
+ [DTS_CFG_FILE] configuration file that describes the test cases, SUTs
+ and targets. (default: conf.yaml)
+ --output-dir OUTPUT_DIR, --output OUTPUT_DIR
+ [DTS_OUTPUT_DIR] Output directory where dts logs and results are
+ saved. (default: output)
+ -t TIMEOUT, --timeout TIMEOUT
+ [DTS_TIMEOUT] The default timeout for all DTS operations except for
+ compiling DPDK. (default: 15)
+ -v VERBOSE, --verbose VERBOSE
+ [DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all
+ messages to the console. (default: N)
+ -s SKIP_SETUP, --skip-setup SKIP_SETUP
+ [DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG
+ nodes. (default: N)
+ --tarball TARBALL, --snapshot TARBALL
+ [DTS_DPDK_TARBALL] Path to DPDK source code tarball which will be
+ used in testing. (default: dpdk.tar.xz)
+ --compile-timeout COMPILE_TIMEOUT
+ [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. (default: 1200)
+ --test-cases TEST_CASES
+ [DTS_TESTCASES] Comma-separated list of test cases to execute.
+ Unknown test cases will be silently ignored. (default: )
+ --re-run RE_RUN, --re_run RE_RUN
+ [DTS_RERUN] Re-run each test case the specified amount of times if a
+ test failure occurs (default: 0)
+
+
+The brackets contain the names of environment variables that set the same thing.
+The minimum DTS needs is a config file and a DPDK tarball.
+You may pass those to DTS using the command line arguments or use the default paths.
+
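+For example, assuming the default config file and the default tarball name,
+the following would run only one test case (the test case name is
+illustrative)::
+
+   python3 main.py --tarball dpdk.tar.xz --test-cases hello_world_single_core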
+
+DTS Results
+~~~~~~~~~~~
+
+Results are stored in the output dir by default,
+which can be changed with the ``--output-dir`` command line argument.
+The results contain basic statistics of passed/failed test cases and DPDK version.
+
+
+How To Write a Test Suite
+-------------------------
+
+All test suites inherit from ``TestSuite`` defined in ``dts/framework/test_suite.py``.
+There are four types of methods that comprise a test suite (a minimal example follows the list):
+
+#. **Test cases**
+
+ | Test cases are methods that start with a particular prefix.
+ | Functional test cases start with ``test_``, e.g. ``test_hello_world_single_core``.
+ | Performance test cases start with ``test_perf_``, e.g. ``test_perf_nic_single_core``.
+ | A test suite may have any number of functional and/or performance test cases.
+ However, these test cases must test the same feature,
+ following the rule of one feature = one test suite.
+ Test cases for one feature don't need to be grouped in just one test suite, though.
+ If the feature requires many testing scenarios to cover,
+ the test cases would be better off spread over multiple test suites
+ so that each test suite doesn't take too long to execute.
+
+#. **Setup and Teardown methods**
+
+ | There are setup and teardown methods for the whole test suite and each individual test case.
+ | Methods ``set_up_suite`` and ``tear_down_suite`` will be executed
+ before any and after all test cases have been executed, respectively.
+ | Methods ``set_up_test_case`` and ``tear_down_test_case`` will be executed
+ before and after each test case, respectively.
+ | These methods don't need to be implemented if there's no need for them in a test suite.
+ In that case, nothing will happen when they're executed.
+
+#. **Test case verification**
+
+ Test case verification should be done with the ``verify`` method, which records the result.
+ The method should be called at the end of each test case.
+
+#. **Other methods**
+
+ Of course, all test suite code should adhere to coding standards.
+ Only the above methods will be treated specially and any other methods may be defined
+ (which should be mostly private methods needed by each particular test suite).
+ Any specific features (such as NIC configuration) required by a test suite
+ should be implemented in the ``SutNode`` class (and the underlying classes that ``SutNode`` uses)
+ and used by the test suite via the ``sut_node`` field.
+
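+For illustration, a minimal test suite along these lines might look as
+follows. The ``build_dpdk_app`` and ``run_dpdk_app`` helpers and the exact
+``verify`` signature are assumptions made for the sketch; the ``TestSuite``
+base class, the ``test_`` prefix and the setup method follow the rules above::
+
+   from framework.test_suite import TestSuite
+
+   class TestSuiteHelloWorld(TestSuite):
+       def set_up_suite(self) -> None:
+           # Hypothetical helper: build the DPDK helloworld example app
+           # once for the whole test suite.
+           self.app_path = self.sut_node.build_dpdk_app("helloworld")
+
+       def test_hello_world_single_core(self) -> None:
+           # Hypothetical helper: run the app and capture its output.
+           output = self.sut_node.run_dpdk_app(self.app_path)
+           self.verify(
+               "hello from core" in output,
+               "The helloworld app did not greet from any core.",
+           )
+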
DTS Developer Tools
-------------------
--
2.30.2
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v6 10/10] doc: update dts setup and test suite cookbook
2023-03-03 10:25 ` [PATCH v6 10/10] doc: update dts setup and test suite cookbook Juraj Linkeš
@ 2023-03-09 21:47 ` Patrick Robb
0 siblings, 0 replies; 97+ messages in thread
From: Patrick Robb @ 2023-03-09 21:47 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, dev
Tested-by: Patrick Robb <probb@iol.unh.edu>
On Fri, Mar 3, 2023 at 5:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Document how to configure and run DTS.
> Also add documentation related to new features: SUT setup and a brief
> test suite implementation cookbook.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
--
Patrick Robb
Technical Service Manager
UNH InterOperability Laboratory
21 Madbury Rd, Suite 100, Durham, NH 03824
www.iol.unh.edu
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v6 00/10] dts: add hello world test case
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
` (9 preceding siblings ...)
2023-03-03 10:25 ` [PATCH v6 10/10] doc: update dts setup and test suite cookbook Juraj Linkeš
@ 2023-03-19 15:26 ` Thomas Monjalon
10 siblings, 0 replies; 97+ messages in thread
From: Thomas Monjalon @ 2023-03-19 15:26 UTC (permalink / raw)
To: Juraj Linkeš
Cc: Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb, dev
> Juraj Linkeš (10):
> dts: add node and os abstractions
> dts: add ssh command verification
> dts: add dpdk build on sut
> dts: add dpdk execution handling
> dts: add node memory setup
> dts: add test suite module
> dts: add hello world testsuite
> dts: add test suite config and runner
> dts: add test results module
> doc: update dts setup and test suite cookbook
Applied, thanks.
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v6 03/10] dts: add dpdk build on sut
2023-03-03 10:25 ` [PATCH v6 03/10] dts: add dpdk build on sut Juraj Linkeš
@ 2023-03-20 8:30 ` David Marchand
2023-03-20 13:12 ` Juraj Linkeš
0 siblings, 1 reply; 97+ messages in thread
From: David Marchand @ 2023-03-20 8:30 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb, dev
Hi Juraj,
On Fri, Mar 3, 2023 at 11:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
> +class MesonArgs(object):
> + """
> + Aggregate the arguments needed to build DPDK:
> + default_library: Default library type, Meson allows "shared", "static" and "both".
> + Defaults to None, in which case the argument won't be used.
> + Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> + Do not use -D with them, for example:
> + meson_args = MesonArgs(enable_kmods=True).
> + """
> +
> + _default_library: str
> +
> + def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> + self._default_library = (
> + f"--default-library={default_library}" if default_library else ""
> + )
> + self._dpdk_args = " ".join(
> + (
> + f"-D{dpdk_arg_name}={dpdk_arg_value}"
> + for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
> + )
> + )
I am missing something here.
Afair, meson accepts the -Ddefault_library form.
Why do we need this special case?
> +
> + def __str__(self) -> str:
> + return " ".join(f"{self._default_library} {self._dpdk_args}".split())
> --
> 2.30.2
>
--
David Marchand
^ permalink raw reply [flat|nested] 97+ messages in thread
* Re: [PATCH v6 03/10] dts: add dpdk build on sut
2023-03-20 8:30 ` David Marchand
@ 2023-03-20 13:12 ` Juraj Linkeš
2023-03-20 13:22 ` David Marchand
0 siblings, 1 reply; 97+ messages in thread
From: Juraj Linkeš @ 2023-03-20 13:12 UTC (permalink / raw)
To: David Marchand
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb, dev
On Mon, Mar 20, 2023 at 9:30 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> Hi Juraj,
>
> On Fri, Mar 3, 2023 at 11:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
> > +class MesonArgs(object):
> > + """
> > + Aggregate the arguments needed to build DPDK:
> > + default_library: Default library type, Meson allows "shared", "static" and "both".
> > + Defaults to None, in which case the argument won't be used.
> > + Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> > + Do not use -D with them, for example:
> > + meson_args = MesonArgs(enable_kmods=True).
> > + """
> > +
> > + _default_library: str
> > +
> > + def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> > + self._default_library = (
> > + f"--default-library={default_library}" if default_library else ""
> > + )
> > + self._dpdk_args = " ".join(
> > + (
> > + f"-D{dpdk_arg_name}={dpdk_arg_value}"
> > + for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
> > + )
> > + )
>
> I am missing something here.
> Afair, meson accepts the -Ddefault_library form.
>
> Why do we need this special case?
I did not know that specifying default_library with -D is possible.
Should I submit a fix?
>
>
> > +
> > + def __str__(self) -> str:
> > + return " ".join(f"{self._default_library} {self._dpdk_args}".split())
> > --
> > 2.30.2
> >
>
>
> --
> David Marchand
>
^ permalink raw reply [flat|nested] 97+ messages in thread
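For reference, a simplified MesonArgs that passes default_library through
-D like any other option (a sketch of the prospective fix discussed here,
not an applied patch) could look like this:

    class MesonArgs(object):
        def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
            # Treat default_library as just another meson option,
            # since meson also accepts the -Ddefault_library=<value> form.
            if default_library is not None:
                dpdk_args["default_library"] = default_library
            self._dpdk_args = " ".join(
                f"-D{name}={value}" for name, value in dpdk_args.items()
            )

        def __str__(self) -> str:
            return self._dpdk_args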
* Re: [PATCH v6 03/10] dts: add dpdk build on sut
2023-03-20 13:12 ` Juraj Linkeš
@ 2023-03-20 13:22 ` David Marchand
0 siblings, 0 replies; 97+ messages in thread
From: David Marchand @ 2023-03-20 13:22 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb, dev
On Mon, Mar 20, 2023 at 2:12 PM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
> On Mon, Mar 20, 2023 at 9:30 AM David Marchand
> <david.marchand@redhat.com> wrote:
> > On Fri, Mar 3, 2023 at 11:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
> > > +class MesonArgs(object):
> > > + """
> > > + Aggregate the arguments needed to build DPDK:
> > > + default_library: Default library type, Meson allows "shared", "static" and "both".
> > > + Defaults to None, in which case the argument won't be used.
> > > + Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> > > + Do not use -D with them, for example:
> > > + meson_args = MesonArgs(enable_kmods=True).
> > > + """
> > > +
> > > + _default_library: str
> > > +
> > > + def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> > > + self._default_library = (
> > > + f"--default-library={default_library}" if default_library else ""
> > > + )
> > > + self._dpdk_args = " ".join(
> > > + (
> > > + f"-D{dpdk_arg_name}={dpdk_arg_value}"
> > > + for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
> > > + )
> > > + )
> >
> > I am missing something here.
> > Afair, meson accepts the -Ddefault_library form.
> >
> > Why do we need this special case?
>
> I did not know that specifying default_library with -D is possible.
> Should I submit a fix?
This is not a big/pressing issue.
This can go in the next release.
Thanks.
--
David Marchand
^ permalink raw reply [flat|nested] 97+ messages in thread
end of thread
Thread overview: 97+ messages
2022-08-24 16:24 [RFC PATCH v1 00/10] dts: add hello world testcase Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 01/10] dts: hello world config options Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 02/10] dts: hello world cli parameters and env vars Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 03/10] dts: ssh connection additions for hello world Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 04/10] dts: add basic node management methods Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 05/10] dts: add system under test node Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 06/10] dts: add traffic generator node Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 07/10] dts: add testcase and basic test results Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 08/10] dts: add test runner and statistics collector Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 09/10] dts: add hello world testplan Juraj Linkeš
2022-08-24 16:24 ` [RFC PATCH v1 10/10] dts: add hello world testsuite Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 00/10] dts: add hello world testcase Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 01/10] dts: add node and os abstractions Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 02/10] dts: add ssh command verification Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 03/10] dts: add dpdk build on sut Juraj Linkeš
2022-11-16 13:15 ` Owen Hilyard
[not found] ` <30ad4f7d087d4932845b6ca13934b1d2@pantheon.tech>
[not found] ` <CAHx6DYDOFMuEm4xc65OTrtUmGBtk8Z6UtSgS2grnR_RBY5HcjQ@mail.gmail.com>
2022-11-23 12:37 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 04/10] dts: add dpdk execution handling Juraj Linkeš
2022-11-16 13:28 ` Owen Hilyard
[not found] ` <df13ee41efb64e7bb37791f21ae5bac1@pantheon.tech>
[not found] ` <CAHx6DYCEYxZ0Osm6fKhp3Jx8n7s=r7qVh8R41c6nCan8Or-dpA@mail.gmail.com>
2022-11-23 13:03 ` Juraj Linkeš
2022-11-28 13:05 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 05/10] dts: add node memory setup Juraj Linkeš
2022-11-16 13:47 ` Owen Hilyard
2022-11-23 13:58 ` Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 06/10] dts: add test results module Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 07/10] dts: add simple stats report Juraj Linkeš
2022-11-16 13:57 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 08/10] dts: add testsuite class Juraj Linkeš
2022-11-16 15:15 ` Owen Hilyard
2022-11-14 16:54 ` [RFC PATCH v2 09/10] dts: add hello world testplan Juraj Linkeš
2022-11-14 16:54 ` [RFC PATCH v2 10/10] dts: add hello world testsuite Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 00/10] dts: add hello world testcase Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 01/10] dts: add node and os abstractions Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 02/10] dts: add ssh command verification Juraj Linkeš
2023-01-17 15:48 ` [PATCH v3 03/10] dts: add dpdk build on sut Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 04/10] dts: add dpdk execution handling Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 05/10] dts: add node memory setup Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 06/10] dts: add test suite module Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 07/10] dts: add hello world testplan Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 08/10] dts: add hello world testsuite Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 09/10] dts: add test suite config and runner Juraj Linkeš
2023-01-17 15:49 ` [PATCH v3 10/10] dts: add test results module Juraj Linkeš
2023-01-19 16:16 ` [PATCH v3 00/10] dts: add hello world testcase Owen Hilyard
2023-02-09 16:47 ` Patrick Robb
2023-02-13 15:28 ` [PATCH v4 " Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 01/10] dts: add node and os abstractions Juraj Linkeš
2023-02-17 17:44 ` Bruce Richardson
2023-02-20 13:24 ` Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 02/10] dts: add ssh command verification Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 03/10] dts: add dpdk build on sut Juraj Linkeš
2023-02-22 16:44 ` Bruce Richardson
2023-02-13 15:28 ` [PATCH v4 04/10] dts: add dpdk execution handling Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 05/10] dts: add node memory setup Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 06/10] dts: add test suite module Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 07/10] dts: add hello world testsuite Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 08/10] dts: add test suite config and runner Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 09/10] dts: add test results module Juraj Linkeš
2023-02-13 15:28 ` [PATCH v4 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
2023-02-17 17:26 ` [PATCH v4 00/10] dts: add hello world testcase Bruce Richardson
2023-02-20 10:13 ` Juraj Linkeš
2023-02-20 11:56 ` Bruce Richardson
2023-02-22 16:39 ` Bruce Richardson
2023-02-23 8:27 ` Juraj Linkeš
2023-02-23 9:17 ` Bruce Richardson
2023-02-23 15:28 ` [PATCH v5 " Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 01/10] dts: add node and os abstractions Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 02/10] dts: add ssh command verification Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 03/10] dts: add dpdk build on sut Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 04/10] dts: add dpdk execution handling Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 05/10] dts: add node memory setup Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 06/10] dts: add test suite module Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 07/10] dts: add hello world testsuite Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 08/10] dts: add test suite config and runner Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 09/10] dts: add test results module Juraj Linkeš
2023-02-23 15:28 ` [PATCH v5 10/10] doc: update DTS setup and test suite cookbook Juraj Linkeš
2023-03-03 8:31 ` Huang, ChenyuX
2023-02-23 16:13 ` [PATCH v5 00/10] dts: add hello world testcase Bruce Richardson
2023-02-26 19:11 ` Wathsala Wathawana Vithanage
2023-02-27 8:28 ` Juraj Linkeš
2023-02-28 15:27 ` Wathsala Wathawana Vithanage
2023-03-01 8:35 ` Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 00/10] dts: add hello world test case Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 01/10] dts: add node and os abstractions Juraj Linkeš
2023-03-03 10:24 ` [PATCH v6 02/10] dts: add ssh command verification Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 03/10] dts: add dpdk build on sut Juraj Linkeš
2023-03-20 8:30 ` David Marchand
2023-03-20 13:12 ` Juraj Linkeš
2023-03-20 13:22 ` David Marchand
2023-03-03 10:25 ` [PATCH v6 04/10] dts: add dpdk execution handling Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 05/10] dts: add node memory setup Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 06/10] dts: add test suite module Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 07/10] dts: add hello world testsuite Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 08/10] dts: add test suite config and runner Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 09/10] dts: add test results module Juraj Linkeš
2023-03-03 10:25 ` [PATCH v6 10/10] doc: update dts setup and test suite cookbook Juraj Linkeš
2023-03-09 21:47 ` Patrick Robb
2023-03-19 15:26 ` [PATCH v6 00/10] dts: add hello world test case Thomas Monjalon