* [RFC PATCH 0/7] dts: revamp framework
@ 2025-02-03 15:16 Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
` (7 more replies)
0 siblings, 8 replies; 9+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Hi there,
This series enables topology configuration and implements it in the
framework. Moreover, it performs quite a few refactors and changes how
the test suites operate through the use of a Context. Finally, the
runtime internals are now isolated under the new TestRun and reworked
into a finite state machine for ease of handling.
Please note that, unfortunately, intermediate commits in this series may
be broken. Given the amount of work done in one go, it was rather
difficult to keep every commit individually working.
I am requesting comments on how we can improve this further. In the
meantime, I am going to remove the node discrimination (TG vs SUT) from
the nodes configuration and allow the test run to define the TG and SUT
configurations.
I understand this is a lot of changes, so please bear with me: I may
have missed some documentation changes, or failed to add proper
documentation in some instances. Please do point everything out.
Best,
Luca
Luca Vizzarro (7):
dts: add port topology configuration
dts: isolate test specification to config
dts: revamp Topology model
dts: improve Port model
dts: add runtime status
dts: add global runtime context
dts: revamp runtime internals
doc/api/dts/framework.context.rst | 8 +
doc/api/dts/framework.status.rst | 8 +
doc/api/dts/framework.test_run.rst | 8 +
doc/api/dts/index.rst | 3 +
doc/guides/conf.py | 3 +-
dts/framework/config/__init__.py | 138 +++--
dts/framework/config/node.py | 25 +-
dts/framework/config/test_run.py | 144 +++++-
dts/framework/context.py | 107 ++++
dts/framework/exception.py | 33 +-
dts/framework/logger.py | 36 +-
dts/framework/remote_session/dpdk_shell.py | 53 +-
.../single_active_interactive_shell.py | 14 +-
dts/framework/remote_session/testpmd_shell.py | 27 +-
dts/framework/runner.py | 485 +-----------------
dts/framework/status.py | 64 +++
dts/framework/test_result.py | 136 +----
dts/framework/test_run.py | 443 ++++++++++++++++
dts/framework/test_suite.py | 73 ++-
dts/framework/testbed_model/capability.py | 42 +-
dts/framework/testbed_model/linux_session.py | 47 +-
dts/framework/testbed_model/node.py | 29 +-
dts/framework/testbed_model/os_session.py | 16 +-
dts/framework/testbed_model/port.py | 97 ++--
dts/framework/testbed_model/sut_node.py | 50 +-
dts/framework/testbed_model/tg_node.py | 30 +-
dts/framework/testbed_model/topology.py | 165 +++---
dts/framework/utils.py | 8 +-
dts/nodes.example.yaml | 24 +-
dts/test_runs.example.yaml | 3 +
dts/tests/TestSuite_blocklist.py | 6 +-
dts/tests/TestSuite_checksum_offload.py | 14 +-
dts/tests/TestSuite_dual_vlan.py | 6 +-
dts/tests/TestSuite_dynamic_config.py | 8 +-
dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
dts/tests/TestSuite_hello_world.py | 2 +-
dts/tests/TestSuite_l2fwd.py | 9 +-
dts/tests/TestSuite_mac_filter.py | 10 +-
dts/tests/TestSuite_mtu.py | 17 +-
dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
...stSuite_port_restart_config_persistency.py | 8 +-
dts/tests/TestSuite_promisc_support.py | 8 +-
dts/tests/TestSuite_smoke_tests.py | 3 +-
dts/tests/TestSuite_softnic.py | 4 +-
dts/tests/TestSuite_uni_pkt.py | 14 +-
dts/tests/TestSuite_vlan.py | 8 +-
46 files changed, 1287 insertions(+), 1159 deletions(-)
create mode 100644 doc/api/dts/framework.context.rst
create mode 100644 doc/api/dts/framework.status.rst
create mode 100644 doc/api/dts/framework.test_run.rst
create mode 100644 dts/framework/context.py
create mode 100644 dts/framework/status.py
create mode 100644 dts/framework/test_run.py
--
2.43.0
* [RFC PATCH 1/7] dts: add port topology configuration
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
The current configuration makes the user re-specify the port links for
each port in an unintuitive and repetitive way. Moreover, this design
does not give the user the opportunity to map the port topology as
desired.
This change adds a port_topology field to the test run configuration, so
that the user can map topologies for each run as required. Moreover, it
simplifies the process of linking ports by defining a user-friendly
notation.
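For reference, the resulting notation in the test run configuration
looks as follows (mirroring the example file updated by this patch):

```yaml
port_topology:
  - sut.Port 0 <-> tg.Port 0  # explicit node nomination
  - Port 1 <-> Port 1         # implicit: left is always SUT, right is always TG
```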
Bugzilla ID: 1478
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/config/__init__.py | 54 ++++++++----
dts/framework/config/node.py | 25 ++++--
dts/framework/config/test_run.py | 85 +++++++++++++++++-
dts/framework/runner.py | 13 ++-
dts/framework/test_result.py | 9 +-
dts/framework/testbed_model/capability.py | 18 ++--
dts/framework/testbed_model/node.py | 6 ++
dts/framework/testbed_model/port.py | 58 +++----------
dts/framework/testbed_model/sut_node.py | 2 +-
dts/framework/testbed_model/topology.py | 100 ++++++++--------------
dts/framework/utils.py | 8 +-
dts/nodes.example.yaml | 24 ++----
dts/test_runs.example.yaml | 3 +
13 files changed, 235 insertions(+), 170 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index adbd4e952d..f8ac2c0d18 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -120,24 +120,46 @@ def validate_node_names(cls, nodes: list[NodeConfiguration]) -> list[NodeConfigu
return nodes
@model_validator(mode="after")
- def validate_ports(self) -> Self:
- """Validate that the ports are all linked to valid ones."""
- port_links: dict[tuple[str, str], Literal[False] | tuple[int, int]] = {
- (node.name, port.pci): False for node in self.nodes for port in node.ports
+ def validate_port_links(self) -> Self:
+ """Validate that all the test runs' port links are valid."""
+ existing_port_links: dict[tuple[str, str], Literal[False] | tuple[str, str]] = {
+ (node.name, port.name): False for node in self.nodes for port in node.ports
}
- for node_no, node in enumerate(self.nodes):
- for port_no, port in enumerate(node.ports):
- peer_port_identifier = (port.peer_node, port.peer_pci)
- peer_port = port_links.get(peer_port_identifier, None)
- assert peer_port is not None, (
- "invalid peer port specified for " f"nodes.{node_no}.ports.{port_no}"
- )
- assert peer_port is False, (
- f"the peer port specified for nodes.{node_no}.ports.{port_no} "
- f"is already linked to nodes.{peer_port[0]}.ports.{peer_port[1]}"
- )
- port_links[peer_port_identifier] = (node_no, port_no)
+ defined_port_links = [
+ (test_run_idx, test_run, link_idx, link)
+ for test_run_idx, test_run in enumerate(self.test_runs)
+ for link_idx, link in enumerate(test_run.port_topology)
+ ]
+ for test_run_idx, test_run, link_idx, link in defined_port_links:
+ sut_node_port_peer = existing_port_links.get(
+ (test_run.system_under_test_node, link.sut_port), None
+ )
+ assert sut_node_port_peer is not None, (
+ "Invalid SUT node port specified for link "
+ f"test_runs.{test_run_idx}.port_topology.{link_idx}."
+ )
+
+ assert sut_node_port_peer is False or sut_node_port_peer == link.right, (
+ f"The SUT node port for link test_runs.{test_run_idx}.port_topology.{link_idx} is "
+ f"already linked to port {sut_node_port_peer[0]}.{sut_node_port_peer[1]}."
+ )
+
+ tg_node_port_peer = existing_port_links.get(
+ (test_run.traffic_generator_node, link.tg_port), None
+ )
+ assert tg_node_port_peer is not None, (
+ "Invalid TG node port specified for link "
+ f"test_runs.{test_run_idx}.port_topology.{link_idx}."
+ )
+
+ assert tg_node_port_peer is False or tg_node_port_peer == link.left, (
+ f"The TG node port for link test_runs.{test_run_idx}.port_topology.{link_idx} is "
+ f"already linked to port {tg_node_port_peer[0]}.{tg_node_port_peer[1]}."
+ )
+
+ existing_port_links[link.left] = link.right
+ existing_port_links[link.right] = link.left
return self
diff --git a/dts/framework/config/node.py b/dts/framework/config/node.py
index a7ace514d9..97e0285912 100644
--- a/dts/framework/config/node.py
+++ b/dts/framework/config/node.py
@@ -12,9 +12,10 @@
from enum import Enum, auto, unique
from typing import Annotated, Literal
-from pydantic import Field
+from pydantic import Field, model_validator
+from typing_extensions import Self
-from framework.utils import REGEX_FOR_PCI_ADDRESS, StrEnum
+from framework.utils import REGEX_FOR_IDENTIFIER, REGEX_FOR_PCI_ADDRESS, StrEnum
from .common import FrozenModel
@@ -51,16 +52,14 @@ class HugepageConfiguration(FrozenModel):
class PortConfig(FrozenModel):
r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s."""
+ #: An identifier for the port. May contain letters, digits, underscores, hyphens and spaces.
+ name: str = Field(pattern=REGEX_FOR_IDENTIFIER)
#: The PCI address of the port.
pci: str = Field(pattern=REGEX_FOR_PCI_ADDRESS)
#: The driver that the kernel should bind this device to for DPDK to use it.
os_driver_for_dpdk: str = Field(examples=["vfio-pci", "mlx5_core"])
#: The operating system driver name when the operating system controls the port.
os_driver: str = Field(examples=["i40e", "ice", "mlx5_core"])
- #: The name of the peer node this port is connected to.
- peer_node: str
- #: The PCI address of the peer port connected to this port.
- peer_pci: str = Field(pattern=REGEX_FOR_PCI_ADDRESS)
class TrafficGeneratorConfig(FrozenModel):
@@ -94,7 +93,7 @@ class NodeConfiguration(FrozenModel):
r"""The configuration of :class:`~framework.testbed_model.node.Node`\s."""
#: The name of the :class:`~framework.testbed_model.node.Node`.
- name: str
+ name: str = Field(pattern=REGEX_FOR_IDENTIFIER)
#: The hostname of the :class:`~framework.testbed_model.node.Node`. Can also be an IP address.
hostname: str
#: The name of the user used to connect to the :class:`~framework.testbed_model.node.Node`.
@@ -108,6 +107,18 @@ class NodeConfiguration(FrozenModel):
#: The ports that can be used in testing.
ports: list[PortConfig] = Field(min_length=1)
+ @model_validator(mode="after")
+ def verify_unique_port_names(self) -> Self:
+ """Verify that there are no ports with the same name."""
+ used_port_names: dict[str, int] = {}
+ for idx, port in enumerate(self.ports):
+ assert port.name not in used_port_names, (
+ f"Cannot use port name '{port.name}' for ports.{idx}. "
+ f"This was already used in ports.{used_port_names[port.name]}."
+ )
+ used_port_names[port.name] = idx
+ return self
+
class DPDKConfiguration(FrozenModel):
"""Configuration of the DPDK EAL parameters."""
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index 006410b467..2092da725e 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -9,16 +9,18 @@
The root model of a test run configuration is :class:`TestRunConfiguration`.
"""
+import re
import tarfile
from enum import auto, unique
from functools import cached_property
from pathlib import Path, PurePath
-from typing import Any, Literal
+from typing import Any, Literal, NamedTuple
from pydantic import Field, field_validator, model_validator
from typing_extensions import TYPE_CHECKING, Self
-from framework.utils import StrEnum
+from framework.exception import InternalError
+from framework.utils import REGEX_FOR_PORT_LINK, StrEnum
from .common import FrozenModel, load_fields_from_settings
@@ -273,6 +275,83 @@ def fetch_all_test_suites() -> list[TestSuiteConfig]:
]
+class LinkPortIdentifier(NamedTuple):
+ """A tuple linking test run node type to port name."""
+
+ node_type: Literal["sut", "tg"]
+ port_name: str
+
+
+class PortLinkConfig(FrozenModel):
+ """A link between the ports of the nodes.
+
+ Can be represented as a string with the following notation:
+
+ .. code::
+
+ sut.PORT_0 <-> tg.PORT_0 # explicit node nomination
+ PORT_0 <-> PORT_0 # implicit node nomination. Left is SUT, right is TG.
+ """
+
+ #: The port at the left side of the link.
+ left: LinkPortIdentifier
+ #: The port at the right side of the link.
+ right: LinkPortIdentifier
+
+ @cached_property
+ def sut_port(self) -> str:
+ """Port name of the SUT node.
+
+ Raises:
+ InternalError: If a misconfiguration has been allowed to happen.
+ """
+ if self.left.node_type == "sut":
+ return self.left.port_name
+ if self.right.node_type == "sut":
+ return self.right.port_name
+
+ raise InternalError("Unreachable state reached.")
+
+ @cached_property
+ def tg_port(self) -> str:
+ """Port name of the TG node.
+
+ Raises:
+ InternalError: If a misconfiguration has been allowed to happen.
+ """
+ if self.left.node_type == "tg":
+ return self.left.port_name
+ if self.right.node_type == "tg":
+ return self.right.port_name
+
+ raise InternalError("Unreachable state reached.")
+
+ @model_validator(mode="before")
+ @classmethod
+ def convert_from_string(cls, data: Any) -> Any:
+ """Convert the string representation of the model into a valid mapping."""
+ if isinstance(data, str):
+ m = re.match(REGEX_FOR_PORT_LINK, data, re.I)
+ assert m is not None, (
+ "The provided link is malformed. Please use the following "
+ "notation: sut.PORT_0 <-> tg.PORT_0"
+ )
+
+ left = (m.group(1) or "sut").lower(), m.group(2)
+ right = (m.group(3) or "tg").lower(), m.group(4)
+
+ return {"left": left, "right": right}
+ return data
+
+ @model_validator(mode="after")
+ def verify_distinct_nodes(self) -> Self:
+ """Verify that each side of the link has distinct nodes."""
+ assert (
+ self.left.node_type != self.right.node_type
+ ), "Linking ports of the same node is unsupported."
+ return self
+
+
class TestRunConfiguration(FrozenModel):
"""The configuration of a test run.
@@ -298,6 +377,8 @@ class TestRunConfiguration(FrozenModel):
vdevs: list[str] = Field(default_factory=list)
#: The seed to use for pseudo-random generation.
random_seed: int | None = None
+ #: The port links between the specified nodes to use.
+ port_topology: list[PortLinkConfig] = Field(max_length=2)
fields_from_settings = model_validator(mode="before")(
load_fields_from_settings("test_suites", "random_seed")
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 9f9789cf49..60a885d8e6 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -54,7 +54,7 @@
TestSuiteWithCases,
)
from .test_suite import TestCase, TestSuite
-from .testbed_model.topology import Topology
+from .testbed_model.topology import PortLink, Topology
class DTSRunner:
@@ -331,7 +331,13 @@ def _run_test_run(
test_run_result.update_setup(Result.FAIL, e)
else:
- self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
+ topology = Topology(
+ PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
+ for link in test_run_config.port_topology
+ )
+ self._run_test_suites(
+ sut_node, tg_node, topology, test_run_result, test_suites_with_cases
+ )
finally:
try:
@@ -361,6 +367,7 @@ def _run_test_suites(
self,
sut_node: SutNode,
tg_node: TGNode,
+ topology: Topology,
test_run_result: TestRunResult,
test_suites_with_cases: Iterable[TestSuiteWithCases],
) -> None:
@@ -380,11 +387,11 @@ def _run_test_suites(
Args:
sut_node: The test run's SUT node.
tg_node: The test run's TG node.
+ topology: The test run's port topology.
test_run_result: The test run's result.
test_suites_with_cases: The test suites with test cases to run.
"""
end_dpdk_build = False
- topology = Topology(sut_node.ports, tg_node.ports)
supported_capabilities = self._get_supported_capabilities(
sut_node, topology, test_suites_with_cases
)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index bffbc52505..1acb526b64 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -28,8 +28,9 @@
from dataclasses import asdict, dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Any, Callable, TypedDict
+from typing import Any, Callable, TypedDict, cast
+from framework.config.node import PortConfig
from framework.testbed_model.capability import Capability
from .config.test_run import TestRunConfiguration, TestSuiteConfig
@@ -601,10 +602,14 @@ def to_dict(self) -> TestRunResultDict:
compiler_version = self.dpdk_build_info.compiler_version
dpdk_version = self.dpdk_build_info.dpdk_version
+ ports = [asdict(port) for port in self.ports]
+ for port in ports:
+ port["config"] = cast(PortConfig, port["config"]).model_dump()
+
return {
"compiler_version": compiler_version,
"dpdk_version": dpdk_version,
- "ports": [asdict(port) for port in self.ports],
+ "ports": ports,
"test_suites": [child.to_dict() for child in self.child_results],
"summary": results | self.generate_pass_rate_dict(results),
}
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 6a7a1f5b6c..7b06ecd715 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -362,10 +362,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
the test suite's.
"""
if inspect.isclass(test_case_or_suite):
- if self.topology_type is not TopologyType.default:
+ if self.topology_type is not TopologyType.default():
self.add_to_required(test_case_or_suite)
for test_case in test_case_or_suite.get_test_cases():
- if test_case.topology_type.topology_type is TopologyType.default:
+ if test_case.topology_type.topology_type is TopologyType.default():
# test case topology has not been set, use the one set by the test suite
self.add_to_required(test_case)
elif test_case.topology_type > test_case_or_suite.topology_type:
@@ -428,14 +428,8 @@ def __hash__(self):
return self.topology_type.__hash__()
def __str__(self):
- """Easy to read string of class and name of :attr:`topology_type`.
-
- Converts :attr:`TopologyType.default` to the actual value.
- """
- name = self.topology_type.name
- if self.topology_type is TopologyType.default:
- name = TopologyType.get_from_value(self.topology_type.value).name
- return f"{type(self.topology_type).__name__}.{name}"
+ """Easy to read string of class and name of :attr:`topology_type`."""
+ return f"{type(self.topology_type).__name__}.{self.topology_type.name}"
def __repr__(self):
"""Easy to read string of class and name of :attr:`topology_type`."""
@@ -450,7 +444,7 @@ class TestProtocol(Protocol):
#: The reason for skipping the test case or suite.
skip_reason: ClassVar[str] = ""
#: The topology type of the test case or suite.
- topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
+ topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default())
#: The capabilities the test case or suite requires in order to be executed.
required_capabilities: ClassVar[set[Capability]] = set()
@@ -466,7 +460,7 @@ def get_test_cases(cls) -> list[type["TestCase"]]:
def requires(
*nic_capabilities: NicCapability,
- topology_type: TopologyType = TopologyType.default,
+ topology_type: TopologyType = TopologyType.default(),
) -> Callable[[type[TestProtocol]], type[TestProtocol]]:
"""A decorator that adds the required capabilities to a test case or test suite.
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e53a321499..0acd746073 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -14,6 +14,7 @@
"""
from abc import ABC
+from functools import cached_property
from framework.config.node import (
OS,
@@ -86,6 +87,11 @@ def _init_ports(self) -> None:
self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
self.main_session.update_ports(self.ports)
+ @cached_property
+ def ports_by_name(self) -> dict[str, Port]:
+ """Ports mapped by the name assigned at configuration."""
+ return {port.name: port for port in self.ports}
+
def set_up_test_run(
self,
test_run_config: TestRunConfiguration,
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 7177da3371..8014d4a100 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2022 University of New Hampshire
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""NIC port model.
@@ -13,19 +14,6 @@
from framework.config.node import PortConfig
-@dataclass(slots=True, frozen=True)
-class PortIdentifier:
- """The port identifier.
-
- Attributes:
- node: The node where the port resides.
- pci: The PCI address of the port on `node`.
- """
-
- node: str
- pci: str
-
-
@dataclass(slots=True)
class Port:
"""Physical port on a node.
@@ -36,20 +24,13 @@ class Port:
and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
Attributes:
- identifier: The PCI address of the port on a node.
- os_driver: The operating system driver name when the operating system controls the port,
- e.g.: ``i40e``.
- os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
- peer: The identifier of a port this port is connected with.
- The `peer` is on a different node.
+ config: The port's configuration.
mac_address: The MAC address of the port.
logical_name: The logical name of the port. Must be discovered.
"""
- identifier: PortIdentifier
- os_driver: str
- os_driver_for_dpdk: str
- peer: PortIdentifier
+ _node: str
+ config: PortConfig
mac_address: str = ""
logical_name: str = ""
@@ -60,33 +41,20 @@ def __init__(self, node_name: str, config: PortConfig):
node_name: The name of the port's node.
config: The test run configuration of the port.
"""
- self.identifier = PortIdentifier(
- node=node_name,
- pci=config.pci,
- )
- self.os_driver = config.os_driver
- self.os_driver_for_dpdk = config.os_driver_for_dpdk
- self.peer = PortIdentifier(node=config.peer_node, pci=config.peer_pci)
+ self._node = node_name
+ self.config = config
@property
def node(self) -> str:
"""The node where the port resides."""
- return self.identifier.node
+ return self._node
+
+ @property
+ def name(self) -> str:
+ """The name of the port."""
+ return self.config.name
@property
def pci(self) -> str:
"""The PCI address of the port."""
- return self.identifier.pci
-
-
-@dataclass(slots=True, frozen=True)
-class PortLink:
- """The physical, cabled connection between the ports.
-
- Attributes:
- sut_port: The port on the SUT node connected to `tg_port`.
- tg_port: The port on the TG node connected to `sut_port`.
- """
-
- sut_port: Port
- tg_port: Port
+ return self.config.pci
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 483733cede..440b5a059b 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -515,7 +515,7 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
return
for port in self.ports:
- driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
+ driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
self.main_session.send_command(
f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
privileged=True,
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index caee9b22ea..814c3f3fe4 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2024 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""Testbed topology representation.
@@ -7,14 +8,9 @@
The link information then implies what type of topology is available.
"""
-from dataclasses import dataclass
-from os import environ
-from typing import TYPE_CHECKING, Iterable
-
-if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
- from enum import Enum as NoAliasEnum
-else:
- from aenum import NoAliasEnum
+from collections.abc import Iterator
+from enum import Enum
+from typing import NamedTuple
from framework.config.node import PortConfig
from framework.exception import ConfigurationError
@@ -22,7 +18,7 @@
from .port import Port
-class TopologyType(int, NoAliasEnum):
+class TopologyType(int, Enum):
"""Supported topology types."""
#: A topology with no Traffic Generator.
@@ -31,34 +27,20 @@ class TopologyType(int, NoAliasEnum):
one_link = 1
#: A topology with two physical links between the Sut node and the TG node.
two_links = 2
- #: The default topology required by test cases if not specified otherwise.
- default = 2
@classmethod
- def get_from_value(cls, value: int) -> "TopologyType":
- r"""Get the corresponding instance from value.
+ def default(cls) -> "TopologyType":
+ """The default topology required by test cases if not specified otherwise."""
+ return cls.two_links
- :class:`~enum.Enum`\s that don't allow aliases don't know which instance should be returned
- as there could be multiple valid instances. Except for the :attr:`default` value,
- :class:`TopologyType` is a regular :class:`~enum.Enum`.
- When getting an instance from value, we're not interested in the default,
- since we already know the value, allowing us to remove the ambiguity.
- Args:
- value: The value of the requested enum.
+class PortLink(NamedTuple):
+ """The physical, cabled connection between the ports."""
- Raises:
- ConfigurationError: If an unsupported link topology is supplied.
- """
- match value:
- case 0:
- return TopologyType.no_link
- case 1:
- return TopologyType.one_link
- case 2:
- return TopologyType.two_links
- case _:
- raise ConfigurationError("More than two links in a topology are not supported.")
+ #: The port on the SUT node connected to `tg_port`.
+ sut_port: Port
+ #: The port on the TG node connected to `sut_port`.
+ tg_port: Port
class Topology:
@@ -89,55 +71,43 @@ class Topology:
sut_port_egress: Port
tg_port_ingress: Port
- def __init__(self, sut_ports: Iterable[Port], tg_ports: Iterable[Port]):
- """Create the topology from `sut_ports` and `tg_ports`.
+ def __init__(self, port_links: Iterator[PortLink]):
+ """Create the topology from `port_links`.
Args:
- sut_ports: The SUT node's ports.
- tg_ports: The TG node's ports.
+ port_links: The test run's required port links.
+
+ Raises:
+ ConfigurationError: If an unsupported link topology is supplied.
"""
- port_links = []
- for sut_port in sut_ports:
- for tg_port in tg_ports:
- if (sut_port.identifier, sut_port.peer) == (
- tg_port.peer,
- tg_port.identifier,
- ):
- port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
-
- self.type = TopologyType.get_from_value(len(port_links))
dummy_port = Port(
"",
PortConfig(
+ name="dummy",
pci="0000:00:00.0",
os_driver_for_dpdk="",
os_driver="",
- peer_node="",
- peer_pci="0000:00:00.0",
),
)
+
+ self.type = TopologyType.no_link
self.tg_port_egress = dummy_port
self.sut_port_ingress = dummy_port
self.sut_port_egress = dummy_port
self.tg_port_ingress = dummy_port
- if self.type > TopologyType.no_link:
- self.tg_port_egress = port_links[0].tg_port
- self.sut_port_ingress = port_links[0].sut_port
+
+ if port_link := next(port_links, None):
+ self.type = TopologyType.one_link
+ self.tg_port_egress = port_link.tg_port
+ self.sut_port_ingress = port_link.sut_port
self.sut_port_egress = self.sut_port_ingress
self.tg_port_ingress = self.tg_port_egress
- if self.type > TopologyType.one_link:
- self.sut_port_egress = port_links[1].sut_port
- self.tg_port_ingress = port_links[1].tg_port
+ if port_link := next(port_links, None):
+ self.type = TopologyType.two_links
+ self.sut_port_egress = port_link.sut_port
+ self.tg_port_ingress = port_link.tg_port
-@dataclass(slots=True, frozen=True)
-class PortLink:
- """The physical, cabled connection between the ports.
-
- Attributes:
- sut_port: The port on the SUT node connected to `tg_port`.
- tg_port: The port on the TG node connected to `sut_port`.
- """
-
- sut_port: Port
- tg_port: Port
+ if next(port_links, None) is not None:
+ msg = "More than two links in a topology are not supported."
+ raise ConfigurationError(msg)
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 66f37a8813..d6f4c11d58 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,7 +32,13 @@
_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
_REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
-REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
+REGEX_FOR_BASE64_ENCODING: str = r"[-a-zA-Z0-9+\\/]*={0,3}"
+REGEX_FOR_IDENTIFIER: str = r"\w+(?:[\w -]*\w+)?"
+REGEX_FOR_PORT_LINK: str = (
+ rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # left side
+ r"\s+<->\s+"
+ rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # right side
+)
def expand_range(range_str: str) -> list[int]:
diff --git a/dts/nodes.example.yaml b/dts/nodes.example.yaml
index 454d97ab5d..6140dd9b7e 100644
--- a/dts/nodes.example.yaml
+++ b/dts/nodes.example.yaml
@@ -9,18 +9,14 @@
user: dtsuser
os: linux
ports:
- # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
- - pci: "0000:00:08.0"
+ - name: Port 0
+ pci: "0000:00:08.0"
os_driver_for_dpdk: vfio-pci # OS driver that DPDK will use
os_driver: i40e # OS driver to bind when the tests are not running
- peer_node: "TG 1"
- peer_pci: "0000:00:08.0"
- # sets up the physical link between "SUT 1"@0000:00:08.1 and "TG 1"@0000:00:08.1
- - pci: "0000:00:08.1"
+ - name: Port 1
+ pci: "0000:00:08.1"
os_driver_for_dpdk: vfio-pci
os_driver: i40e
- peer_node: "TG 1"
- peer_pci: "0000:00:08.1"
hugepages_2mb: # optional; if removed, will use system hugepage configuration
number_of: 256
force_first_numa: false
@@ -34,18 +30,14 @@
user: dtsuser
os: linux
ports:
- # sets up the physical link between "TG 1"@0000:00:08.0 and "SUT 1"@0000:00:08.0
- - pci: "0000:00:08.0"
+ - name: Port 0
+ pci: "0000:00:08.0"
os_driver_for_dpdk: rdma
os_driver: rdma
- peer_node: "SUT 1"
- peer_pci: "0000:00:08.0"
- # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
- - pci: "0000:00:08.1"
+ - name: Port 1
+ pci: "0000:00:08.1"
os_driver_for_dpdk: rdma
os_driver: rdma
- peer_node: "SUT 1"
- peer_pci: "0000:00:08.1"
hugepages_2mb: # optional; if removed, will use system hugepage configuration
number_of: 256
force_first_numa: false
diff --git a/dts/test_runs.example.yaml b/dts/test_runs.example.yaml
index 5cc167ebe1..821d6918d0 100644
--- a/dts/test_runs.example.yaml
+++ b/dts/test_runs.example.yaml
@@ -32,3 +32,6 @@
system_under_test_node: "SUT 1"
# Traffic generator node to use for this execution environment
traffic_generator_node: "TG 1"
+ port_topology:
+ - sut.Port 0 <-> tg.Port 0 # explicit link
+ - Port 1 <-> Port 1 # implicit link, left side is always SUT, right side is always TG.
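As a quick check of the notation, the new REGEX_FOR_PORT_LINK from this
patch can be exercised directly; the regex strings are copied from the
diff above, while the surrounding parse_link helper is only an
illustration of how PortLinkConfig.convert_from_string uses them:

```python
import re

# Copied from the utils.py hunk in this patch.
REGEX_FOR_IDENTIFIER: str = r"\w+(?:[\w -]*\w+)?"
REGEX_FOR_PORT_LINK: str = (
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # left side
    r"\s+<->\s+"
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # right side
)

def parse_link(link: str) -> tuple[tuple[str, str], tuple[str, str]]:
    """Parse a link string into (node_type, port_name) pairs for each side.

    An omitted node prefix defaults to SUT on the left and TG on the right,
    matching the model validator in the patch.
    """
    m = re.match(REGEX_FOR_PORT_LINK, link, re.I)
    assert m is not None, "malformed link; expected e.g. 'sut.PORT_0 <-> tg.PORT_0'"
    left = ((m.group(1) or "sut").lower(), m.group(2))
    right = ((m.group(3) or "tg").lower(), m.group(4))
    return left, right
```

For example, `parse_link("Port 1 <-> Port 1")` yields
`(("sut", "Port 1"), ("tg", "Port 1"))`.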
--
2.43.0
* [RFC PATCH 2/7] dts: isolate test specification to config
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
In an effort to improve separation of concerns, make the
TestRunConfiguration class responsible for processing the configured
test suites. Moreover, give TestSuiteConfig a facility to yield
references to the selected test cases.
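The new facility itself is not visible in this excerpt of the diff; a
purely hypothetical sketch of the idea (class, attribute, and method
names here are illustrative, not the actual DTS API) could look like:

```python
# Illustrative sketch only: the suite config resolves and yields references
# to the selected test cases itself, instead of leaving that to the runner.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class CaseRef:
    """Stand-in for a reference to a discovered test case."""

    name: str


@dataclass
class SuiteConfigSketch:
    test_suite: str
    test_cases: list[str] = field(default_factory=list)  # empty means "all"
    _available: dict[str, CaseRef] = field(default_factory=dict)

    def case_refs(self):
        """Yield references to the selected test cases."""
        selected = self.test_cases or list(self._available)
        for name in selected:
            yield self._available[name]
```

With this shape, the runner only iterates over `case_refs()` and no
longer needs to know how selection works.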
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/config/__init__.py | 84 +++++++++++---------------------
dts/framework/config/test_run.py | 59 +++++++++++++++++-----
2 files changed, 76 insertions(+), 67 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index f8ac2c0d18..7761a8b56f 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -27,9 +27,8 @@
and makes it thread safe should we ever want to move in that direction.
"""
-from functools import cached_property
from pathlib import Path
-from typing import Annotated, Any, Literal, NamedTuple, TypeVar, cast
+from typing import Annotated, Any, Literal, TypeVar, cast
import yaml
from pydantic import Field, TypeAdapter, ValidationError, field_validator, model_validator
@@ -46,18 +45,6 @@
)
from .test_run import TestRunConfiguration
-
-class TestRunWithNodesConfiguration(NamedTuple):
- """Tuple containing the configuration of the test run and its associated nodes."""
-
- #:
- test_run_config: TestRunConfiguration
- #:
- sut_node_config: SutNodeConfiguration
- #:
- tg_node_config: TGNodeConfiguration
-
-
TestRunsConfig = Annotated[list[TestRunConfiguration], Field(min_length=1)]
NodesConfig = Annotated[list[NodeConfigurationTypes], Field(min_length=1)]
@@ -71,40 +58,6 @@ class Configuration(FrozenModel):
#: Node configurations.
nodes: NodesConfig
- @cached_property
- def test_runs_with_nodes(self) -> list[TestRunWithNodesConfiguration]:
- """List of test runs with the associated nodes."""
- test_runs_with_nodes = []
-
- for test_run_no, test_run in enumerate(self.test_runs):
- sut_node_name = test_run.system_under_test_node
- sut_node = next(filter(lambda n: n.name == sut_node_name, self.nodes), None)
-
- assert sut_node is not None, (
- f"test_runs.{test_run_no}.sut_node_config.node_name "
- f"({test_run.system_under_test_node}) is not a valid node name"
- )
- assert isinstance(sut_node, SutNodeConfiguration), (
- f"test_runs.{test_run_no}.sut_node_config.node_name is a valid node name, "
- "but it is not a valid SUT node"
- )
-
- tg_node_name = test_run.traffic_generator_node
- tg_node = next(filter(lambda n: n.name == tg_node_name, self.nodes), None)
-
- assert tg_node is not None, (
- f"test_runs.{test_run_no}.tg_node_name "
- f"({test_run.traffic_generator_node}) is not a valid node name"
- )
- assert isinstance(tg_node, TGNodeConfiguration), (
- f"test_runs.{test_run_no}.tg_node_name is a valid node name, "
- "but it is not a valid TG node"
- )
-
- test_runs_with_nodes.append(TestRunWithNodesConfiguration(test_run, sut_node, tg_node))
-
- return test_runs_with_nodes
-
@field_validator("nodes")
@classmethod
def validate_node_names(cls, nodes: list[NodeConfiguration]) -> list[NodeConfiguration]:
@@ -164,14 +117,33 @@ def validate_port_links(self) -> Self:
return self
@model_validator(mode="after")
- def validate_test_runs_with_nodes(self) -> Self:
- """Validate the test runs to nodes associations.
-
- This validator relies on the cached property `test_runs_with_nodes` to run for the first
- time in this call, therefore triggering the assertions if needed.
- """
- if self.test_runs_with_nodes:
- pass
+ def validate_test_runs_against_nodes(self) -> Self:
+ """Validate the test runs to nodes associations."""
+ for test_run_no, test_run in enumerate(self.test_runs):
+ sut_node_name = test_run.system_under_test_node
+ sut_node = next((n for n in self.nodes if n.name == sut_node_name), None)
+
+ assert sut_node is not None, (
+ f"Test run {test_run_no}.system_under_test_node "
+ f"({sut_node_name}) is not a valid node name."
+ )
+ assert isinstance(sut_node, SutNodeConfiguration), (
+ f"Test run {test_run_no}.system_under_test_node is a valid node name, "
+ "but it is not a valid SUT node."
+ )
+
+ tg_node_name = test_run.traffic_generator_node
+ tg_node = next((n for n in self.nodes if n.name == tg_node_name), None)
+
+ assert tg_node is not None, (
+ f"Test run {test_run_no}.traffic_generator_name "
+ f"({tg_node_name}) is not a valid node name."
+ )
+ assert isinstance(tg_node, TGNodeConfiguration), (
+ f"Test run {test_run_no}.traffic_generator_name is a valid node name, "
+ "but it is not a valid TG node."
+ )
+
return self
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index 2092da725e..9ea898b15c 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -11,6 +11,8 @@
import re
import tarfile
+from collections import deque
+from collections.abc import Iterable
from enum import auto, unique
from functools import cached_property
from pathlib import Path, PurePath
@@ -25,7 +27,7 @@
from .common import FrozenModel, load_fields_from_settings
if TYPE_CHECKING:
- from framework.test_suite import TestSuiteSpec
+ from framework.test_suite import TestCase, TestSuite, TestSuiteSpec
@unique
@@ -233,6 +235,21 @@ def test_suite_spec(self) -> "TestSuiteSpec":
), f"{self.test_suite_name} is not a valid test suite module name."
return test_suite_spec
+ @cached_property
+ def test_cases(self) -> list[type["TestCase"]]:
+ """The objects of the selected test cases."""
+ available_test_cases = {t.name: t for t in self.test_suite_spec.class_obj.get_test_cases()}
+ selected_test_cases = []
+
+ for requested_test_case in self.test_cases_names:
+ assert requested_test_case in available_test_cases, (
+ f"{requested_test_case} is not a valid test case "
+ f"of test suite {self.test_suite_name}."
+ )
+ selected_test_cases.append(available_test_cases[requested_test_case])
+
+ return selected_test_cases or list(available_test_cases.values())
+
@model_validator(mode="before")
@classmethod
def convert_from_string(cls, data: Any) -> Any:
@@ -246,17 +263,11 @@ def convert_from_string(cls, data: Any) -> Any:
def validate_names(self) -> Self:
"""Validate the supplied test suite and test cases names.
- This validator relies on the cached property `test_suite_spec` to run for the first
- time in this call, therefore triggering the assertions if needed.
+ This validator relies on the cached properties `test_suite_spec` and `test_cases` to run for
+ the first time in this call, therefore triggering the assertions if needed.
"""
- available_test_cases = map(
- lambda t: t.name, self.test_suite_spec.class_obj.get_test_cases()
- )
- for requested_test_case in self.test_cases_names:
- assert requested_test_case in available_test_cases, (
- f"{requested_test_case} is not a valid test case "
- f"of test suite {self.test_suite_name}."
- )
+ if self.test_cases:
+ pass
return self
@@ -383,3 +394,29 @@ class TestRunConfiguration(FrozenModel):
fields_from_settings = model_validator(mode="before")(
load_fields_from_settings("test_suites", "random_seed")
)
+
+ def filter_tests(
+ self,
+ ) -> Iterable[tuple[type["TestSuite"], deque[type["TestCase"]]]]:
+ """Filter test suites and cases selected for execution."""
+ from framework.test_suite import TestCaseType
+
+ test_suites = [TestSuiteConfig(test_suite="smoke_tests")]
+
+ if self.skip_smoke_tests:
+ test_suites = self.test_suites
+ else:
+ test_suites += self.test_suites
+
+ return (
+ (
+ t.test_suite_spec.class_obj,
+ deque(
+ tt
+ for tt in t.test_cases
+ if (tt.test_type is TestCaseType.FUNCTIONAL and self.func)
+ or (tt.test_type is TestCaseType.PERFORMANCE and self.perf)
+ ),
+ )
+ for t in test_suites
+ )
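The `filter_tests` generator above pairs each suite class with a deque of its selected cases, filtered by test type against the run's `func`/`perf` flags. A self-contained sketch of that filtering step follows; `TestCaseType` mirrors the framework enum, while `FakeCase` and `select_cases` are stand-ins, since the real classes live across the framework modules.

```python
from collections import deque
from enum import Enum, auto


class TestCaseType(Enum):
    """Stand-in for framework.test_suite.TestCaseType."""
    FUNCTIONAL = auto()
    PERFORMANCE = auto()


class FakeCase:
    """Minimal stand-in for a TestCase class with a test_type attribute."""
    def __init__(self, name: str, test_type: TestCaseType):
        self.name = name
        self.test_type = test_type


def select_cases(cases, func: bool, perf: bool) -> deque:
    """Mirror the deque-building expression in filter_tests: keep a case
    only when its type matches an enabled run mode."""
    return deque(
        c for c in cases
        if (c.test_type is TestCaseType.FUNCTIONAL and func)
        or (c.test_type is TestCaseType.PERFORMANCE and perf)
    )
```

With `func=True, perf=False`, only functional cases survive; with both flags set, the deque preserves the suite's declared case order.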
--
2.43.0
* [RFC PATCH 3/7] dts: revamp Topology model
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 4/7] dts: improve Port model Luca Vizzarro
` (4 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Change the Topology model to add further flexibility in its usage as a
standalone entry point to test suites.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/testbed_model/topology.py | 85 +++++++++++++------------
1 file changed, 45 insertions(+), 40 deletions(-)
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 814c3f3fe4..cf5c2c28ba 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -9,10 +9,12 @@
"""
from collections.abc import Iterator
+from dataclasses import dataclass
from enum import Enum
from typing import NamedTuple
-from framework.config.node import PortConfig
+from typing_extensions import Self
+
from framework.exception import ConfigurationError
from .port import Port
@@ -43,35 +45,32 @@ class PortLink(NamedTuple):
tg_port: Port
+@dataclass(frozen=True)
class Topology:
"""Testbed topology.
The topology contains ports processed into ingress and egress ports.
- If there are no ports on a node, dummy ports (ports with no actual values) are stored.
- If there is only one link available, the ports of this link are stored
+ If there are no ports on a node, accesses to :attr:`~Topology.tg_port_egress` and the like will
+ raise an exception. If there is only one link available, the ports of this link are stored
as both ingress and egress ports.
- The dummy ports shouldn't be used. It's up to :class:`~framework.runner.DTSRunner`
- to ensure no test case or suite requiring actual links is executed
- when the topology prohibits it and up to the developers to make sure that test cases
- not requiring any links don't use any ports. Otherwise, the underlying methods
- using the ports will fail.
+ It's up to :class:`~framework.test_run.TestRun` to ensure no test case or suite requiring actual
+ links is executed when the topology prohibits it and up to the developers to make sure that test
+ cases not requiring any links don't use any ports. Otherwise, the underlying methods using the
+ ports will fail.
Attributes:
type: The type of the topology.
- tg_port_egress: The egress port of the TG node.
- sut_port_ingress: The ingress port of the SUT node.
- sut_port_egress: The egress port of the SUT node.
- tg_port_ingress: The ingress port of the TG node.
+ sut_ports: The SUT ports.
+ tg_ports: The TG ports.
"""
type: TopologyType
- tg_port_egress: Port
- sut_port_ingress: Port
- sut_port_egress: Port
- tg_port_ingress: Port
+ sut_ports: list[Port]
+ tg_ports: list[Port]
- def __init__(self, port_links: Iterator[PortLink]):
+ @classmethod
+ def from_port_links(cls, port_links: Iterator[PortLink]) -> Self:
"""Create the topology from `port_links`.
Args:
@@ -80,34 +79,40 @@ def __init__(self, port_links: Iterator[PortLink]):
Raises:
ConfigurationError: If an unsupported link topology is supplied.
"""
- dummy_port = Port(
- "",
- PortConfig(
- name="dummy",
- pci="0000:00:00.0",
- os_driver_for_dpdk="",
- os_driver="",
- ),
- )
-
- self.type = TopologyType.no_link
- self.tg_port_egress = dummy_port
- self.sut_port_ingress = dummy_port
- self.sut_port_egress = dummy_port
- self.tg_port_ingress = dummy_port
+ type = TopologyType.no_link
if port_link := next(port_links, None):
- self.type = TopologyType.one_link
- self.tg_port_egress = port_link.tg_port
- self.sut_port_ingress = port_link.sut_port
- self.sut_port_egress = self.sut_port_ingress
- self.tg_port_ingress = self.tg_port_egress
+ type = TopologyType.one_link
+ sut_ports = [port_link.sut_port]
+ tg_ports = [port_link.tg_port]
if port_link := next(port_links, None):
- self.type = TopologyType.two_links
- self.sut_port_egress = port_link.sut_port
- self.tg_port_ingress = port_link.tg_port
+ type = TopologyType.two_links
+ sut_ports.append(port_link.sut_port)
+ tg_ports.append(port_link.tg_port)
if next(port_links, None) is not None:
msg = "More than two links in a topology are not supported."
raise ConfigurationError(msg)
+
+ return cls(type, sut_ports, tg_ports)
+
+ @property
+ def tg_port_egress(self) -> Port:
+ """The egress port of the TG node."""
+ return self.tg_ports[0]
+
+ @property
+ def sut_port_ingress(self) -> Port:
+ """The ingress port of the SUT node."""
+ return self.sut_ports[0]
+
+ @property
+ def sut_port_egress(self) -> Port:
+ """The egress port of the SUT node."""
+ return self.sut_ports[1 if self.type is TopologyType.two_links else 0]
+
+ @property
+ def tg_port_ingress(self) -> Port:
+ """The ingress port of the TG node."""
+ return self.tg_ports[1 if self.type is TopologyType.two_links else 0]
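The egress/ingress properties above collapse to the same port on a one-link topology and split across the two ports on a two-link one. A minimal stand-in showing that index selection (`MiniTopology` is illustrative; the real class carries `Port` objects and more machinery):

```python
from dataclasses import dataclass
from enum import Enum, auto


class TopologyType(Enum):
    no_link = auto()
    one_link = auto()
    two_links = auto()


@dataclass(frozen=True)
class MiniTopology:
    """Stand-in mirroring the egress/ingress index selection in Topology."""
    type: TopologyType
    sut_ports: list
    tg_ports: list

    @property
    def sut_port_ingress(self):
        # Ingress is always the first port of the link list.
        return self.sut_ports[0]

    @property
    def sut_port_egress(self):
        # With two links, the second SUT port egresses; with one link the
        # same port serves both directions.
        return self.sut_ports[1 if self.type is TopologyType.two_links else 0]

    @property
    def tg_port_ingress(self):
        return self.tg_ports[1 if self.type is TopologyType.two_links else 0]
```

On `no_link` topologies the port lists are empty, so these accesses raise `IndexError` rather than returning the old dummy ports.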
--
2.43.0
* [RFC PATCH 4/7] dts: improve Port model
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (2 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
` (3 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Make Port models standalone, so that they can be directly manipulated by
the test suites as needed. Moreover, let them manage themselves and
retrieve their logical name and MAC address autonomously.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/testbed_model/linux_session.py | 47 +++++++++-------
dts/framework/testbed_model/node.py | 25 ++++-----
dts/framework/testbed_model/os_session.py | 16 +++---
dts/framework/testbed_model/port.py | 57 ++++++++++++--------
dts/framework/testbed_model/sut_node.py | 48 +++++++++--------
dts/framework/testbed_model/tg_node.py | 24 +++++++--
6 files changed, 127 insertions(+), 90 deletions(-)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 99c29b9b1e..7c2b110c99 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -10,6 +10,8 @@
"""
import json
+from collections.abc import Iterable
+from functools import cached_property
from typing import TypedDict
from typing_extensions import NotRequired
@@ -149,31 +151,40 @@ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: boo
self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
- def update_ports(self, ports: list[Port]) -> None:
- """Overrides :meth:`~.os_session.OSSession.update_ports`."""
- self._logger.debug("Gathering port info.")
- for port in ports:
- assert port.node == self.name, "Attempted to gather port info on the wrong node"
+ def get_port_info(self, pci_address: str) -> tuple[str, str]:
+ """Overrides :meth:`~.os_session.OSSession.get_port_info`.
- port_info_list = self._get_lshw_info()
- for port in ports:
- for port_info in port_info_list:
- if f"pci@{port.pci}" == port_info.get("businfo"):
- self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
- self._update_port_attr(port, port_info.get("serial"), "mac_address")
- port_info_list.remove(port_info)
- break
- else:
- self._logger.warning(f"No port at pci address {port.pci} found.")
-
- def bring_up_link(self, ports: list[Port]) -> None:
+ Raises:
+ ConfigurationError: If the port could not be found.
+ """
+ self._logger.debug(f"Gathering info for port {pci_address}.")
+
+ bus_info = f"pci@{pci_address}"
+ port = next((port for port in self._lshw_net_info if port.get("businfo") == bus_info), None)
+ if port is None:
+ raise ConfigurationError(f"Port {pci_address} could not be found on the node.")
+
+ logical_name = port.get("logicalname") or ""
+ if not logical_name:
+ self._logger.warning(f"Port {pci_address} does not have a valid logical name.")
+ # raise ConfigurationError(f"Port {pci_address} does not have a valid logical name.")
+
+ mac_address = port.get("serial") or ""
+ if not mac_address:
+ self._logger.warning(f"Port {pci_address} does not have a valid mac address.")
+ # raise ConfigurationError(f"Port {pci_address} does not have a valid mac address.")
+
+ return logical_name, mac_address
+
+ def bring_up_link(self, ports: Iterable[Port]) -> None:
"""Overrides :meth:`~.os_session.OSSession.bring_up_link`."""
for port in ports:
self.send_command(
f"ip link set dev {port.logical_name} up", privileged=True, verify=True
)
- def _get_lshw_info(self) -> list[LshwOutput]:
+ @cached_property
+ def _lshw_net_info(self) -> list[LshwOutput]:
output = self.send_command("lshw -quiet -json -C network", verify=True)
return json.loads(output.stdout)
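The lookup in `get_port_info` matches the port's PCI address against the `businfo` field of `lshw -json -C network` output, then reads the logical name and MAC from `logicalname` and `serial`. A local sketch of that lookup on canned JSON follows; the sample data and the free function are illustrative, as the real method runs `lshw` over the remote session.

```python
import json

# Hypothetical, trimmed output of `lshw -quiet -json -C network`; real
# output carries many more fields per device.
LSHW_JSON = """
[
  {"businfo": "pci@0000:00:08.0", "logicalname": "enp0s8", "serial": "52:54:00:12:34:56"},
  {"businfo": "pci@0000:00:08.1", "logicalname": "enp0s9", "serial": "52:54:00:12:34:57"}
]
"""


def get_port_info(pci_address: str, lshw_output: str) -> tuple[str, str]:
    """Return (logical_name, mac_address) for the device at pci_address,
    mirroring the businfo match in LinuxSession.get_port_info."""
    devices = json.loads(lshw_output)
    bus_info = f"pci@{pci_address}"
    # next() with a None default avoids StopIteration on a missing port.
    port = next((d for d in devices if d.get("businfo") == bus_info), None)
    if port is None:
        raise ValueError(f"Port {pci_address} could not be found on the node.")
    return port.get("logicalname") or "", port.get("serial") or ""
```

Note that `next()` needs the `None` default here; without it, a missing port would raise `StopIteration` before the explicit error check runs.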
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 0acd746073..1a4c825ed2 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -14,16 +14,14 @@
"""
from abc import ABC
+from collections.abc import Iterable
from functools import cached_property
from framework.config.node import (
OS,
NodeConfiguration,
)
-from framework.config.test_run import (
- DPDKBuildConfiguration,
- TestRunConfiguration,
-)
+from framework.config.test_run import TestRunConfiguration
from framework.exception import ConfigurationError
from framework.logger import DTSLogger, get_dts_logger
@@ -81,22 +79,14 @@ def __init__(self, node_config: NodeConfiguration):
self._logger.info(f"Connected to node: {self.name}")
self._get_remote_cpus()
self._other_sessions = []
- self._init_ports()
-
- def _init_ports(self) -> None:
- self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
- self.main_session.update_ports(self.ports)
+ self.ports = [Port(self, port_config) for port_config in self.config.ports]
@cached_property
def ports_by_name(self) -> dict[str, Port]:
"""Ports mapped by the name assigned at configuration."""
return {port.name: port for port in self.ports}
- def set_up_test_run(
- self,
- test_run_config: TestRunConfiguration,
- dpdk_build_config: DPDKBuildConfiguration,
- ) -> None:
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
"""Test run setup steps.
Configure hugepages on all DTS node types. Additional steps can be added by
@@ -105,15 +95,18 @@ def set_up_test_run(
Args:
test_run_config: A test run configuration according to which
the setup steps will be taken.
- dpdk_build_config: The build configuration of DPDK.
+ ports: The ports to set up for the test run.
"""
self._setup_hugepages()
- def tear_down_test_run(self) -> None:
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
"""Test run teardown steps.
There are currently no common execution teardown steps common to all DTS node types.
Additional steps can be added by extending the method in subclasses with the use of super().
+
+ Args:
+ ports: The ports to tear down for the test run.
"""
def create_session(self, name: str) -> OSSession:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index f3789fcf75..3c7b2a4f47 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -516,20 +516,18 @@ def get_arch_info(self) -> str:
"""
@abstractmethod
- def update_ports(self, ports: list[Port]) -> None:
- """Get additional information about ports from the operating system and update them.
+ def get_port_info(self, pci_address: str) -> tuple[str, str]:
+ """Get port information.
- The additional information is:
-
- * Logical name (e.g. ``enp7s0``) if applicable,
- * Mac address.
+ Returns:
+ A tuple containing the logical name and MAC address respectively.
- Args:
- ports: The ports to update.
+ Raises:
+ ConfigurationError: If the port could not be found.
"""
@abstractmethod
- def bring_up_link(self, ports: list[Port]) -> None:
+ def bring_up_link(self, ports: Iterable[Port]) -> None:
"""Send operating system specific command for bringing up link on node interfaces.
Args:
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 8014d4a100..f638120eeb 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -9,45 +9,42 @@
drivers and address.
"""
-from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Final
from framework.config.node import PortConfig
+if TYPE_CHECKING:
+ from .node import Node
+
-@dataclass(slots=True)
class Port:
"""Physical port on a node.
- The ports are identified by the node they're on and their PCI addresses. The port on the other
- side of the connection is also captured here.
- Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
- and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
-
Attributes:
+ node: The port's node.
config: The port's configuration.
mac_address: The MAC address of the port.
- logical_name: The logical name of the port. Must be discovered.
+ logical_name: The logical name of the port.
+ bound_for_dpdk: :data:`True` if the port is bound to the driver for DPDK.
"""
- _node: str
- config: PortConfig
- mac_address: str = ""
- logical_name: str = ""
+ node: Final["Node"]
+ config: Final[PortConfig]
+ mac_address: Final[str]
+ logical_name: Final[str]
+ bound_for_dpdk: bool
- def __init__(self, node_name: str, config: PortConfig):
- """Initialize the port from `node_name` and `config`.
+ def __init__(self, node: "Node", config: PortConfig):
+ """Initialize the port from `node` and `config`.
Args:
- node_name: The name of the port's node.
+ node: The port's node.
config: The test run configuration of the port.
"""
- self._node = node_name
+ self.node = node
self.config = config
-
- @property
- def node(self) -> str:
- """The node where the port resides."""
- return self._node
+ self.logical_name, self.mac_address = node.main_session.get_port_info(config.pci)
+ self.bound_for_dpdk = False
@property
def name(self) -> str:
@@ -58,3 +55,21 @@ def name(self) -> str:
def pci(self) -> str:
"""The PCI address of the port."""
return self.config.pci
+
+ def configure_mtu(self, mtu: int):
+ """Configure the port's MTU value.
+
+ Args:
+ mtu: Desired MTU value.
+ """
+ return self.node.main_session.configure_port_mtu(mtu, self)
+
+ def to_dict(self) -> dict[str, Any]:
+ """Convert to a dictionary."""
+ return {
+ "node_name": self.node.name,
+ "name": self.name,
+ "pci": self.pci,
+ "mac_address": self.mac_address,
+ "logical_name": self.logical_name,
+ }
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 440b5a059b..9007d89b1c 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -13,6 +13,7 @@
import os
import time
+from collections.abc import Iterable
from dataclasses import dataclass
from pathlib import Path, PurePath
@@ -33,6 +34,7 @@
from framework.exception import ConfigurationError, RemoteFileNotFoundError
from framework.params.eal import EalParams
from framework.remote_session.remote_session import CommandResult
+from framework.testbed_model.port import Port
from framework.utils import MesonArgs, TarCompressionFormat
from .cpu import LogicalCore, LogicalCoreList
@@ -86,7 +88,6 @@ class SutNode(Node):
_node_info: OSSessionInfo | None
_compiler_version: str | None
_path_to_devbind_script: PurePath | None
- _ports_bound_to_dpdk: bool
def __init__(self, node_config: SutNodeConfiguration):
"""Extend the constructor with SUT node specifics.
@@ -196,11 +197,7 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
"""
return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
- def set_up_test_run(
- self,
- test_run_config: TestRunConfiguration,
- dpdk_build_config: DPDKBuildConfiguration,
- ) -> None:
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
"""Extend the test run setup with vdev config and DPDK build set up.
This method extends the setup process by configuring virtual devices and preparing the DPDK
@@ -209,22 +206,25 @@ def set_up_test_run(
Args:
test_run_config: A test run configuration according to which
the setup steps will be taken.
- dpdk_build_config: The build configuration of DPDK.
+ ports: The ports to set up for the test run.
"""
- super().set_up_test_run(test_run_config, dpdk_build_config)
+ super().set_up_test_run(test_run_config, ports)
for vdev in test_run_config.vdevs:
self.virtual_devices.append(VirtualDevice(vdev))
- self._set_up_dpdk(dpdk_build_config)
+ self._set_up_dpdk(test_run_config.dpdk_config, ports)
+
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
+ """Extend the test run teardown with virtual device teardown and DPDK teardown.
- def tear_down_test_run(self) -> None:
- """Extend the test run teardown with virtual device teardown and DPDK teardown."""
- super().tear_down_test_run()
+ Args:
+ ports: The ports to tear down for the test run.
+ """
+ super().tear_down_test_run(ports)
self.virtual_devices = []
- self._tear_down_dpdk()
+ self._tear_down_dpdk(ports)
def _set_up_dpdk(
- self,
- dpdk_build_config: DPDKBuildConfiguration,
+ self, dpdk_build_config: DPDKBuildConfiguration, ports: Iterable[Port]
) -> None:
"""Set up DPDK the SUT node and bind ports.
@@ -234,6 +234,7 @@ def _set_up_dpdk(
Args:
dpdk_build_config: A DPDK build configuration to test.
+ ports: The ports to use for DPDK.
"""
match dpdk_build_config.dpdk_location:
case RemoteDPDKTreeLocation(dpdk_tree=dpdk_tree):
@@ -254,16 +255,16 @@ def _set_up_dpdk(
self._configure_dpdk_build(build_options)
self._build_dpdk()
- self.bind_ports_to_driver()
+ self.bind_ports_to_driver(ports)
- def _tear_down_dpdk(self) -> None:
+ def _tear_down_dpdk(self, ports: Iterable[Port]) -> None:
"""Reset DPDK variables and bind port driver to the OS driver."""
self._env_vars = {}
self.__remote_dpdk_tree_path = None
self._remote_dpdk_build_dir = None
self._dpdk_version = None
self.compiler_version = None
- self.bind_ports_to_driver(for_dpdk=False)
+ self.bind_ports_to_driver(ports, for_dpdk=False)
def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
"""Set the path to the remote DPDK source tree based on the provided DPDK location.
@@ -504,21 +505,22 @@ def run_dpdk_app(
f"{app_path} {eal_params}", timeout, privileged=True, verify=True
)
- def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
+ def bind_ports_to_driver(self, ports: Iterable[Port], for_dpdk: bool = True) -> None:
"""Bind all ports on the SUT to a driver.
Args:
+ ports: The ports to act on.
for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
If :data:`False`, binds to os_driver.
"""
- if self._ports_bound_to_dpdk == for_dpdk:
- return
+ for port in ports:
+ if port.bound_for_dpdk == for_dpdk:
+ continue
- for port in self.ports:
driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
self.main_session.send_command(
f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
privileged=True,
verify=True,
)
- self._ports_bound_to_dpdk = for_dpdk
+ port.bound_for_dpdk = for_dpdk
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 8ab9ccb438..595836a664 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -9,9 +9,12 @@
A TG node is where the TG runs.
"""
+from collections.abc import Iterable
+
from scapy.packet import Packet
from framework.config.node import TGNodeConfiguration
+from framework.config.test_run import TestRunConfiguration
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
PacketFilteringConfig,
)
@@ -51,9 +54,24 @@ def __init__(self, node_config: TGNodeConfiguration):
self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
self._logger.info(f"Created node: {self.name}")
- def _init_ports(self) -> None:
- super()._init_ports()
- self.main_session.bring_up_link(self.ports)
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
+ """Extend the test run setup with the setup of the traffic generator.
+
+ Args:
+ test_run_config: A test run configuration according to which
+ the setup steps will be taken.
+ ports: The ports to set up for the test run.
+ """
+ super().set_up_test_run(test_run_config, ports)
+ self.main_session.bring_up_link(ports)
+
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
+ """Extend the test run teardown with the teardown of the traffic generator.
+
+ Args:
+ ports: The ports to tear down for the test run.
+ """
+ super().tear_down_test_run(ports)
def send_packets_and_capture(
self,
--
2.43.0
* [RFC PATCH 5/7] dts: add runtime status
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (3 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 4/7] dts: improve Port model Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 6/7] dts: add global runtime context Luca Vizzarro
` (2 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Add a new module which defines the global runtime status of DTS and the
distinct execution stages and steps.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
doc/api/dts/framework.status.rst | 8 ++++
doc/api/dts/index.rst | 1 +
dts/framework/logger.py | 36 ++++--------------
dts/framework/status.py | 64 ++++++++++++++++++++++++++++++++
4 files changed, 81 insertions(+), 28 deletions(-)
create mode 100644 doc/api/dts/framework.status.rst
create mode 100644 dts/framework/status.py
diff --git a/doc/api/dts/framework.status.rst b/doc/api/dts/framework.status.rst
new file mode 100644
index 0000000000..07277b5301
--- /dev/null
+++ b/doc/api/dts/framework.status.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+status - DTS status definitions
+===========================================================
+
+.. automodule:: framework.status
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index 534512dc17..cde603576c 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -29,6 +29,7 @@ Modules
framework.test_suite
framework.test_result
framework.settings
+ framework.status
framework.logger
framework.parser
framework.utils
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index d2b8e37da4..7b1c8e6637 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -13,37 +13,17 @@
"""
import logging
-from enum import auto
from logging import FileHandler, StreamHandler
from pathlib import Path
from typing import ClassVar
-from .utils import StrEnum
+from framework.status import PRE_RUN, State
date_fmt = "%Y/%m/%d %H:%M:%S"
stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s"
dts_root_logger_name = "dts"
-class DtsStage(StrEnum):
- """The DTS execution stage."""
-
- #:
- pre_run = auto()
- #:
- test_run_setup = auto()
- #:
- test_suite_setup = auto()
- #:
- test_suite = auto()
- #:
- test_suite_teardown = auto()
- #:
- test_run_teardown = auto()
- #:
- post_run = auto()
-
-
class DTSLogger(logging.Logger):
"""The DTS logger class.
@@ -55,7 +35,7 @@ class DTSLogger(logging.Logger):
a new stage switch occurs. This is useful mainly for logging per test suite.
"""
- _stage: ClassVar[DtsStage] = DtsStage.pre_run
+ _stage: ClassVar[State] = PRE_RUN
_extra_file_handlers: list[FileHandler] = []
def __init__(self, *args, **kwargs):
@@ -75,7 +55,7 @@ def makeRecord(self, *args, **kwargs) -> logging.LogRecord:
record: The generated record with the stage information.
"""
record = super().makeRecord(*args, **kwargs)
- record.stage = DTSLogger._stage
+ record.stage = str(DTSLogger._stage)
return record
def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
@@ -110,7 +90,7 @@ def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
self._add_file_handlers(Path(output_dir, self.name))
- def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
+ def set_stage(self, state: State, log_file_path: Path | None = None) -> None:
"""Set the DTS execution stage and optionally log to files.
Set the DTS execution stage of the DTSLog class and optionally add
@@ -120,15 +100,15 @@ def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
the other one is a machine-readable log file with extra debug information.
Args:
- stage: The DTS stage to set.
+ state: The DTS execution state to set.
log_file_path: An optional path of the log file to use. This should be a full path
(either relative or absolute) without suffix (which will be appended).
"""
self._remove_extra_file_handlers()
- if DTSLogger._stage != stage:
- self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{stage}'.")
- DTSLogger._stage = stage
+ if DTSLogger._stage != state:
+ self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{state}'.")
+ DTSLogger._stage = state
if log_file_path:
self._extra_file_handlers.extend(self._add_file_handlers(log_file_path))
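The `makeRecord` override above is what stamps every log record with the current stage so formatters can reference `%(stage)s`. A minimal, self-contained sketch of that injection pattern (`StageLogger` and `stage_demo` are illustrative names, not the DTS classes):

```python
import io
import logging


class StageLogger(logging.Logger):
    """Logger that stamps every record with the current execution stage."""

    _stage = "pre_run"

    def makeRecord(self, *args, **kwargs):
        record = super().makeRecord(*args, **kwargs)
        record.stage = StageLogger._stage  # available to formatters as %(stage)s
        return record


logging.setLoggerClass(StageLogger)
logger = logging.getLogger("stage_demo")
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(stage)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)

logger.warning("starting up")       # pre_run - WARNING - starting up
StageLogger._stage = "test_suite"
logger.warning("running")           # test_suite - WARNING - running
```

Because the stage lives on the class, flipping it once (as `set_stage` does) changes the prefix of every subsequent record from every DTS logger.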
diff --git a/dts/framework/status.py b/dts/framework/status.py
new file mode 100644
index 0000000000..4a59aa50e6
--- /dev/null
+++ b/dts/framework/status.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+"""Running status of DTS.
+
+This module contains the definitions that represent the different states of execution within DTS.
+"""
+
+from enum import auto
+from typing import NamedTuple
+
+from .utils import StrEnum
+
+
+class Stage(StrEnum):
+ """Execution stage."""
+
+ #:
+ PRE_RUN = auto()
+ #:
+ TEST_RUN = auto()
+ #:
+ TEST_SUITE = auto()
+ #:
+ TEST_CASE = auto()
+ #:
+ POST_RUN = auto()
+
+
+class InternalState(StrEnum):
+ """Internal state of the current execution stage."""
+
+ #:
+ BEGIN = auto()
+ #:
+ SETUP = auto()
+ #:
+ RUN = auto()
+ #:
+ TEARDOWN = auto()
+ #:
+ END = auto()
+
+
+class State(NamedTuple):
+ """Representation of the DTS execution state."""
+
+ #:
+ stage: Stage
+ #:
+ state: InternalState
+
+ def __str__(self) -> str:
+ """A formatted name."""
+ name = self.stage.value.lower()
+ if self.state is not InternalState.RUN:
+ return f"{name}_{self.state.value.lower()}"
+ return name
+
+
+#: A ready-made pre-run DTS state.
+PRE_RUN = State(Stage.PRE_RUN, InternalState.RUN)
+#: A ready-made post-run DTS state.
+POST_RUN = State(Stage.POST_RUN, InternalState.RUN)
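The `State.__str__` formatting above folds the `RUN` internal state into the bare stage name. A standalone sketch of the same behavior (plain `Enum` with `.name` stands in for the framework's custom `StrEnum`):

```python
from enum import Enum, auto
from typing import NamedTuple


class Stage(Enum):
    PRE_RUN = auto()
    TEST_SUITE = auto()


class InternalState(Enum):
    SETUP = auto()
    RUN = auto()


class State(NamedTuple):
    stage: Stage
    state: InternalState

    def __str__(self) -> str:
        # The RUN state is the "plain" one, so it is omitted from the name.
        name = self.stage.name.lower()
        if self.state is not InternalState.RUN:
            return f"{name}_{self.state.name.lower()}"
        return name


print(State(Stage.TEST_SUITE, InternalState.SETUP))  # test_suite_setup
print(State(Stage.PRE_RUN, InternalState.RUN))       # pre_run
```

This is how the old flat `DtsStage` names (`test_suite_setup`, `pre_run`, ...) are reproduced from the new two-axis model.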
--
2.43.0
* [RFC PATCH 6/7] dts: add global runtime context
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (4 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 7/7] dts: revamp runtime internals Luca Vizzarro
2025-02-04 21:08 ` [RFC PATCH 0/7] dts: revamp framework Dean Marx
7 siblings, 0 replies; 9+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Add a new context module which holds the runtime context. The new
context describes the current scenario and helps the underlying classes
used by the test suites to automatically infer their parameters. This
further simplifies the test writing process, as the test writer no
longer needs to be concerned with the nodes and can directly use
context-aware tools, e.g. TestPmdShell, as needed.
Bugzilla ID: 1461
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
doc/api/dts/framework.context.rst | 8 ++
doc/api/dts/index.rst | 1 +
dts/framework/context.py | 107 ++++++++++++++++++
dts/framework/remote_session/dpdk_shell.py | 53 +++------
.../single_active_interactive_shell.py | 14 +--
dts/framework/remote_session/testpmd_shell.py | 27 ++---
dts/framework/test_suite.py | 73 ++++++------
dts/tests/TestSuite_blocklist.py | 6 +-
dts/tests/TestSuite_checksum_offload.py | 14 +--
dts/tests/TestSuite_dual_vlan.py | 6 +-
dts/tests/TestSuite_dynamic_config.py | 8 +-
dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
dts/tests/TestSuite_hello_world.py | 2 +-
dts/tests/TestSuite_l2fwd.py | 9 +-
dts/tests/TestSuite_mac_filter.py | 10 +-
dts/tests/TestSuite_mtu.py | 17 +--
dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
...stSuite_port_restart_config_persistency.py | 8 +-
dts/tests/TestSuite_promisc_support.py | 8 +-
dts/tests/TestSuite_smoke_tests.py | 3 +-
dts/tests/TestSuite_softnic.py | 4 +-
dts/tests/TestSuite_uni_pkt.py | 14 +--
dts/tests/TestSuite_vlan.py | 8 +-
23 files changed, 237 insertions(+), 173 deletions(-)
create mode 100644 doc/api/dts/framework.context.rst
create mode 100644 dts/framework/context.py
diff --git a/doc/api/dts/framework.context.rst b/doc/api/dts/framework.context.rst
new file mode 100644
index 0000000000..a8c8b5022e
--- /dev/null
+++ b/doc/api/dts/framework.context.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+context - DTS execution context
+===========================================================
+
+.. automodule:: framework.context
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index cde603576c..b211571430 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -29,6 +29,7 @@ Modules
framework.test_suite
framework.test_result
framework.settings
+ framework.context
framework.status
framework.logger
framework.parser
diff --git a/dts/framework/context.py b/dts/framework/context.py
new file mode 100644
index 0000000000..fc7fb6719b
--- /dev/null
+++ b/dts/framework/context.py
@@ -0,0 +1,107 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+"""Runtime contexts."""
+
+import functools
+from dataclasses import MISSING, dataclass, field, fields
+from typing import TYPE_CHECKING, ParamSpec
+
+from framework.exception import InternalError
+from framework.settings import SETTINGS
+from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.topology import Topology
+
+if TYPE_CHECKING:
+ from framework.testbed_model.sut_node import SutNode
+ from framework.testbed_model.tg_node import TGNode
+
+P = ParamSpec("P")
+
+
+@dataclass
+class LocalContext:
+ """Updatable context local to test suites and cases.
+
+ Attributes:
+ lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
+ use. The default will select one lcore for each of two cores on one socket, in ascending
+ order of core ids.
+ ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+ sort in descending order.
+ append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
+ timeout: The timeout used for the SSH channel that is dedicated to this interactive
+ shell. This timeout is for collecting output, so if reading from the buffer
+ and no output is gathered within the timeout, an exception is thrown.
+ """
+
+ lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = field(
+ default_factory=LogicalCoreCount
+ )
+ ascending_cores: bool = True
+ append_prefix_timestamp: bool = True
+ timeout: float = SETTINGS.timeout
+
+ def reset(self) -> None:
+ """Reset the local context to the default values."""
+ for _field in fields(LocalContext):
+ default = (
+ _field.default_factory()
+ if _field.default_factory is not MISSING
+ else _field.default
+ )
+
+ assert (
+ default is not MISSING
+ ), f"{LocalContext.__name__} must have defaults on all fields!"
+
+ setattr(self, _field.name, default)
+
+
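The `reset` method above restores every field to its declared default, whether that default is a plain value or produced by a factory. A self-contained sketch of the same `fields()`/`MISSING` technique (`Settings` is an illustrative class, not part of DTS):

```python
from dataclasses import MISSING, dataclass, field, fields


@dataclass
class Settings:
    retries: int = 3
    tags: list[str] = field(default_factory=list)

    def reset(self) -> None:
        """Restore every field to its declared default."""
        for f in fields(Settings):
            # Recompute the default: factory-made fields get a fresh object.
            default = f.default_factory() if f.default_factory is not MISSING else f.default
            assert default is not MISSING, "all fields need defaults"
            setattr(self, f.name, default)


s = Settings(retries=9, tags=["a"])
s.reset()
assert s.retries == 3 and s.tags == []
```

Calling the factory again on each reset matters: reusing a single default list would let one test suite's mutations leak into the next.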
+@dataclass(frozen=True)
+class Context:
+ """Runtime context."""
+
+ sut_node: "SutNode"
+ tg_node: "TGNode"
+ topology: Topology
+ local: LocalContext = field(default_factory=LocalContext)
+
+
+__current_ctx: Context | None = None
+
+
+def get_ctx() -> Context:
+ """Retrieve the current runtime context.
+
+ Raises:
+ InternalError: If there is no context.
+ """
+ if __current_ctx:
+ return __current_ctx
+
+ raise InternalError("Attempted to retrieve context that has not been initialized yet.")
+
+
+def init_ctx(ctx: Context) -> None:
+ """Initialize context."""
+ global __current_ctx
+ __current_ctx = ctx
+
+
+def filter_cores(specifier: LogicalCoreCount | LogicalCoreList):
+ """Decorates functions that require a temporary update to the lcore specifier."""
+
+ def decorator(func):
+ @functools.wraps(func)
+ def wrapper(*args: P.args, **kwargs: P.kwargs):
+ local_ctx = get_ctx().local
+ old_specifier = local_ctx.lcore_filter_specifier
+ local_ctx.lcore_filter_specifier = specifier
+ try:
+ return func(*args, **kwargs)
+ finally:
+ local_ctx.lcore_filter_specifier = old_specifier
+
+ return wrapper
+
+ return decorator
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index c11d9ab81c..b55deb7fa0 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -9,54 +9,45 @@
from abc import ABC
from pathlib import PurePath
+from framework.context import get_ctx
from framework.params.eal import EalParams
from framework.remote_session.single_active_interactive_shell import (
SingleActiveInteractiveShell,
)
-from framework.settings import SETTINGS
-from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.cpu import LogicalCoreList
from framework.testbed_model.sut_node import SutNode
def compute_eal_params(
- sut_node: SutNode,
params: EalParams | None = None,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
) -> EalParams:
"""Compute EAL parameters based on the node's specifications.
Args:
- sut_node: The SUT node to compute the values for.
params: If :data:`None`, a new object is created and returned. Otherwise `params.lcore_list`
is modified according to `lcore_filter_specifier`. A DPDK file prefix is also added. If
`params.ports` is :data:`None`, then `sut_node.ports` is assigned to it.
- lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
- use. The default will select one lcore for each of two cores on one socket, in ascending
- order of core ids.
- ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
- sort in descending order.
- append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
"""
+ ctx = get_ctx()
+
if params is None:
params = EalParams()
if params.lcore_list is None:
params.lcore_list = LogicalCoreList(
- sut_node.filter_lcores(lcore_filter_specifier, ascending_cores)
+ ctx.sut_node.filter_lcores(ctx.local.lcore_filter_specifier, ctx.local.ascending_cores)
)
prefix = params.prefix
- if append_prefix_timestamp:
- prefix = f"{prefix}_{sut_node.dpdk_timestamp}"
- prefix = sut_node.main_session.get_dpdk_file_prefix(prefix)
+ if ctx.local.append_prefix_timestamp:
+ prefix = f"{prefix}_{ctx.sut_node.dpdk_timestamp}"
+ prefix = ctx.sut_node.main_session.get_dpdk_file_prefix(prefix)
if prefix:
- sut_node.dpdk_prefix_list.append(prefix)
+ ctx.sut_node.dpdk_prefix_list.append(prefix)
params.prefix = prefix
if params.allowed_ports is None:
- params.allowed_ports = sut_node.ports
+ params.allowed_ports = ctx.topology.sut_ports
return params
@@ -74,29 +65,15 @@ class DPDKShell(SingleActiveInteractiveShell, ABC):
def __init__(
self,
- node: SutNode,
+ name: str | None = None,
privileged: bool = True,
- timeout: float = SETTINGS.timeout,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
app_params: EalParams = EalParams(),
- name: str | None = None,
) -> None:
- """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`.
-
- Adds the `lcore_filter_specifier`, `ascending_cores` and `append_prefix_timestamp` arguments
- which are then used to compute the EAL parameters based on the node's configuration.
- """
- app_params = compute_eal_params(
- node,
- app_params,
- lcore_filter_specifier,
- ascending_cores,
- append_prefix_timestamp,
- )
+ """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`."""
+ app_params = compute_eal_params(app_params)
+ node = get_ctx().sut_node
- super().__init__(node, privileged, timeout, app_params, name)
+ super().__init__(node, name, privileged, app_params)
def _update_real_path(self, path: PurePath) -> None:
"""Extends :meth:`~.interactive_shell.InteractiveShell._update_real_path`.
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index cfe5baec14..2eec2f698a 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -27,6 +27,7 @@
from paramiko import Channel, channel
from typing_extensions import Self
+from framework.context import get_ctx
from framework.exception import (
InteractiveCommandExecutionError,
InteractiveSSHSessionDeadError,
@@ -34,7 +35,6 @@
)
from framework.logger import DTSLogger, get_dts_logger
from framework.params import Params
-from framework.settings import SETTINGS
from framework.testbed_model.node import Node
from framework.utils import MultiInheritanceBaseClass
@@ -90,10 +90,9 @@ class SingleActiveInteractiveShell(MultiInheritanceBaseClass, ABC):
def __init__(
self,
node: Node,
+ name: str | None = None,
privileged: bool = False,
- timeout: float = SETTINGS.timeout,
app_params: Params = Params(),
- name: str | None = None,
**kwargs,
) -> None:
"""Create an SSH channel during initialization.
@@ -103,13 +102,10 @@ def __init__(
Args:
node: The node on which to start the interactive shell.
- privileged: Enables the shell to run as superuser.
- timeout: The timeout used for the SSH channel that is dedicated to this interactive
- shell. This timeout is for collecting output, so if reading from the buffer
- and no output is gathered within the timeout, an exception is thrown.
- app_params: The command line parameters to be passed to the application on startup.
name: Name for the interactive shell to use for logging. This name will be appended to
the name of the underlying node which it is running on.
+ privileged: Enables the shell to run as superuser.
+ app_params: The command line parameters to be passed to the application on startup.
**kwargs: Any additional arguments if any.
"""
self._node = node
@@ -118,7 +114,7 @@ def __init__(
self._logger = get_dts_logger(f"{node.name}.{name}")
self._app_params = app_params
self._privileged = privileged
- self._timeout = timeout
+ self._timeout = get_ctx().local.timeout
# Ensure path is properly formatted for the host
self._update_real_path(self.path)
super().__init__(**kwargs)
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 9f07696aa2..c63d532e16 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -24,6 +24,9 @@
from pathlib import PurePath
from typing import TYPE_CHECKING, Any, ClassVar, Concatenate, ParamSpec, TypeAlias
+from framework.context import get_ctx
+from framework.testbed_model.topology import TopologyType
+
if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
from enum import Enum as NoAliasEnum
else:
@@ -32,13 +35,11 @@
from typing_extensions import Self, Unpack
from framework.exception import InteractiveCommandExecutionError, InternalError
-from framework.params.testpmd import SimpleForwardingModes, TestPmdParams
+from framework.params.testpmd import PortTopology, SimpleForwardingModes, TestPmdParams
from framework.params.types import TestPmdParamsDict
from framework.parser import ParserFn, TextParser
from framework.remote_session.dpdk_shell import DPDKShell
from framework.settings import SETTINGS
-from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
-from framework.testbed_model.sut_node import SutNode
from framework.utils import REGEX_FOR_MAC_ADDRESS, StrEnum
P = ParamSpec("P")
@@ -1507,26 +1508,14 @@ class TestPmdShell(DPDKShell):
def __init__(
self,
- node: SutNode,
- privileged: bool = True,
- timeout: float = SETTINGS.timeout,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
name: str | None = None,
+ privileged: bool = True,
**app_params: Unpack[TestPmdParamsDict],
) -> None:
"""Overrides :meth:`~.dpdk_shell.DPDKShell.__init__`. Changes app_params to kwargs."""
- super().__init__(
- node,
- privileged,
- timeout,
- lcore_filter_specifier,
- ascending_cores,
- append_prefix_timestamp,
- TestPmdParams(**app_params),
- name,
- )
+ if "port_topology" not in app_params and get_ctx().topology.type is TopologyType.one_link:
+ app_params["port_topology"] = PortTopology.loop
+ super().__init__(name, privileged, TestPmdParams(**app_params))
self.ports_started = not self._app_params.disable_device_start
self.currently_forwarding = not self._app_params.auto_start
self._ports = None
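The hunk above gives `TestPmdShell` a context-derived default: on a one-link topology, `port_topology` falls back to `loop` unless the caller sets it explicitly. A minimal sketch of that conditional-default pattern (`make_params` and the string values are illustrative, not the real API):

```python
from enum import Enum, auto


class TopologyType(Enum):
    one_link = auto()
    two_links = auto()


def make_params(topology: TopologyType, **kwargs) -> dict:
    """Apply a default only when the caller left it unset and it fits the topology."""
    if "port_topology" not in kwargs and topology is TopologyType.one_link:
        kwargs["port_topology"] = "loop"  # single port pair: loop traffic back
    return kwargs


assert make_params(TopologyType.one_link) == {"port_topology": "loop"}
assert make_params(TopologyType.two_links) == {}
assert make_params(TopologyType.one_link, port_topology="paired") == {"port_topology": "paired"}
```

Checking `"port_topology" not in app_params` (rather than testing its value) preserves an explicit caller choice, including one that matches the default.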
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3d168d522b..b9b527e40d 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -24,7 +24,7 @@
from ipaddress import IPv4Interface, IPv6Interface, ip_interface
from pkgutil import iter_modules
from types import ModuleType
-from typing import ClassVar, Protocol, TypeVar, Union, cast
+from typing import TYPE_CHECKING, ClassVar, Protocol, TypeVar, Union, cast
from scapy.layers.inet import IP
from scapy.layers.l2 import Ether
@@ -32,9 +32,6 @@
from typing_extensions import Self
from framework.testbed_model.capability import TestProtocol
-from framework.testbed_model.port import Port
-from framework.testbed_model.sut_node import SutNode
-from framework.testbed_model.tg_node import TGNode
from framework.testbed_model.topology import Topology
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
PacketFilteringConfig,
@@ -44,6 +41,9 @@
from .logger import DTSLogger, get_dts_logger
from .utils import get_packet_summaries, to_pascal_case
+if TYPE_CHECKING:
+ from framework.context import Context
+
class TestSuite(TestProtocol):
"""The base class with building blocks needed by most test cases.
@@ -69,33 +69,19 @@ class TestSuite(TestProtocol):
The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
properly choose the IP addresses and other configuration that must be tailored to the testbed.
-
- Attributes:
- sut_node: The SUT node where the test suite is running.
- tg_node: The TG node where the test suite is running.
"""
- sut_node: SutNode
- tg_node: TGNode
#: Whether the test suite is blocking. A failure of a blocking test suite
#: will block the execution of all subsequent test suites in the current test run.
is_blocking: ClassVar[bool] = False
+ _ctx: "Context"
_logger: DTSLogger
- _sut_port_ingress: Port
- _sut_port_egress: Port
_sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
_sut_ip_address_egress: Union[IPv4Interface, IPv6Interface]
- _tg_port_ingress: Port
- _tg_port_egress: Port
_tg_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
_tg_ip_address_egress: Union[IPv4Interface, IPv6Interface]
- def __init__(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- ):
+ def __init__(self):
"""Initialize the test suite testbed information and basic configuration.
Find links between ports and set up default IP addresses to be used when
@@ -106,18 +92,25 @@ def __init__(
tg_node: The TG node where the test suite will run.
topology: The topology where the test suite will run.
"""
- self.sut_node = sut_node
- self.tg_node = tg_node
+ from framework.context import get_ctx
+
+ self._ctx = get_ctx()
self._logger = get_dts_logger(self.__class__.__name__)
- self._tg_port_egress = topology.tg_port_egress
- self._sut_port_ingress = topology.sut_port_ingress
- self._sut_port_egress = topology.sut_port_egress
- self._tg_port_ingress = topology.tg_port_ingress
self._sut_ip_address_ingress = ip_interface("192.168.100.2/24")
self._sut_ip_address_egress = ip_interface("192.168.101.2/24")
self._tg_ip_address_egress = ip_interface("192.168.100.3/24")
self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
+ @property
+ def name(self) -> str:
+ """The name of the test suite class."""
+ return type(self).__name__
+
+ @property
+ def topology(self) -> Topology:
+ """The current topology in use."""
+ return self._ctx.topology
+
@classmethod
def get_test_cases(cls) -> list[type["TestCase"]]:
"""A list of all the available test cases."""
@@ -254,10 +247,10 @@ def send_packets_and_capture(
A list of received packets.
"""
packets = self._adjust_addresses(packets)
- return self.tg_node.send_packets_and_capture(
+ return self._ctx.tg_node.send_packets_and_capture(
packets,
- self._tg_port_egress,
- self._tg_port_ingress,
+ self._ctx.topology.tg_port_egress,
+ self._ctx.topology.tg_port_ingress,
filter_config,
duration,
)
@@ -272,7 +265,7 @@ def send_packets(
packets: Packets to send.
"""
packets = self._adjust_addresses(packets)
- self.tg_node.send_packets(packets, self._tg_port_egress)
+ self._ctx.tg_node.send_packets(packets, self._ctx.topology.tg_port_egress)
def get_expected_packets(
self,
@@ -352,15 +345,15 @@ def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> li
# only be the Ether src/dst.
if "src" not in packet.fields:
packet.src = (
- self._sut_port_egress.mac_address
+ self.topology.sut_port_egress.mac_address
if expected
- else self._tg_port_egress.mac_address
+ else self.topology.tg_port_egress.mac_address
)
if "dst" not in packet.fields:
packet.dst = (
- self._tg_port_ingress.mac_address
+ self.topology.tg_port_ingress.mac_address
if expected
- else self._sut_port_ingress.mac_address
+ else self.topology.sut_port_ingress.mac_address
)
# update l3 addresses
@@ -400,10 +393,10 @@ def verify(self, condition: bool, failure_description: str) -> None:
def _fail_test_case_verify(self, failure_description: str) -> None:
self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
- for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ for command_res in self._ctx.sut_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
- for command_res in self.tg_node.main_session.remote_session.history[-10:]:
+ for command_res in self._ctx.tg_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
raise TestCaseVerifyError(failure_description)
@@ -517,14 +510,14 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
self._logger.debug("Looking at the Ether layer.")
self._logger.debug(
f"Comparing received dst mac '{received_packet.dst}' "
- f"with expected '{self._tg_port_ingress.mac_address}'."
+ f"with expected '{self.topology.tg_port_ingress.mac_address}'."
)
- if received_packet.dst != self._tg_port_ingress.mac_address:
+ if received_packet.dst != self.topology.tg_port_ingress.mac_address:
return False
- expected_src_mac = self._tg_port_egress.mac_address
+ expected_src_mac = self.topology.tg_port_egress.mac_address
if l3:
- expected_src_mac = self._sut_port_egress.mac_address
+ expected_src_mac = self.topology.sut_port_egress.mac_address
self._logger.debug(
f"Comparing received src mac '{received_packet.src}' "
f"with expected '{expected_src_mac}'."
diff --git a/dts/tests/TestSuite_blocklist.py b/dts/tests/TestSuite_blocklist.py
index b9e9cd1d1a..ce7da1cc8f 100644
--- a/dts/tests/TestSuite_blocklist.py
+++ b/dts/tests/TestSuite_blocklist.py
@@ -18,7 +18,7 @@ class TestBlocklist(TestSuite):
def verify_blocklisted_ports(self, ports_to_block: list[Port]):
"""Runs testpmd with the given ports blocklisted and verifies the ports."""
- with TestPmdShell(self.sut_node, allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
+ with TestPmdShell(allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
allowlisted_ports = {port.device_name for port in testpmd.show_port_info_all()}
blocklisted_ports = {port.pci for port in ports_to_block}
@@ -49,7 +49,7 @@ def one_port_blocklisted(self):
Verify:
That the port was successfully blocklisted.
"""
- self.verify_blocklisted_ports(self.sut_node.ports[:1])
+ self.verify_blocklisted_ports(self.topology.sut_ports[:1])
@func_test
def all_but_one_port_blocklisted(self):
@@ -60,4 +60,4 @@ def all_but_one_port_blocklisted(self):
Verify:
That all specified ports were successfully blocklisted.
"""
- self.verify_blocklisted_ports(self.sut_node.ports[:-1])
+ self.verify_blocklisted_ports(self.topology.sut_ports[:-1])
diff --git a/dts/tests/TestSuite_checksum_offload.py b/dts/tests/TestSuite_checksum_offload.py
index a8bb6a71f7..b38d73421b 100644
--- a/dts/tests/TestSuite_checksum_offload.py
+++ b/dts/tests/TestSuite_checksum_offload.py
@@ -128,7 +128,7 @@ def test_insert_checksums(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -160,7 +160,7 @@ def test_no_insert_checksums(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
testpmd.start()
@@ -190,7 +190,7 @@ def test_l4_rx_checksum(self) -> None:
Ether(dst=mac_id) / IP() / UDP(chksum=0xF),
Ether(dst=mac_id) / IP() / TCP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -223,7 +223,7 @@ def test_l3_rx_checksum(self) -> None:
Ether(dst=mac_id) / IP(chksum=0xF) / UDP(),
Ether(dst=mac_id) / IP(chksum=0xF) / TCP(),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -260,7 +260,7 @@ def test_validate_rx_checksum(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP(chksum=0xF),
Ether(dst=mac_id) / IPv6(src="::1") / TCP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -299,7 +299,7 @@ def test_vlan_checksum(self) -> None:
Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / UDP(chksum=0xF) / Raw(payload),
Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / TCP(chksum=0xF) / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -333,7 +333,7 @@ def test_validate_sctp_checksum(self) -> None:
Ether(dst=mac_id) / IP() / SCTP(),
Ether(dst=mac_id) / IP() / SCTP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
testpmd.csum_set_hw(layers=ChecksumOffloadOptions.sctp)
diff --git a/dts/tests/TestSuite_dual_vlan.py b/dts/tests/TestSuite_dual_vlan.py
index bdbee7e8d1..6af503528d 100644
--- a/dts/tests/TestSuite_dual_vlan.py
+++ b/dts/tests/TestSuite_dual_vlan.py
@@ -193,7 +193,7 @@ def insert_second_vlan(self) -> None:
Packets are received.
Packet contains two VLAN tags.
"""
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.tx_vlan_set(port=self.tx_port, enable=True, vlan=self.vlan_insert_tag)
testpmd.start()
recv = self.send_packet_and_capture(
@@ -229,7 +229,7 @@ def all_vlan_functions(self) -> None:
/ Dot1Q(vlan=self.inner_vlan_tag)
/ Raw(b"X" * 20)
)
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.start()
recv = self.send_packet_and_capture(send_pkt)
self.verify(len(recv) > 0, "Unmodified packet was not received.")
@@ -269,7 +269,7 @@ def maintains_priority(self) -> None:
/ Dot1Q(vlan=self.inner_vlan_tag, prio=2)
/ Raw(b"X" * 20)
)
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.start()
recv = self.send_packet_and_capture(pkt)
self.verify(len(recv) > 0, "Did not receive any packets when testing VLAN priority.")
diff --git a/dts/tests/TestSuite_dynamic_config.py b/dts/tests/TestSuite_dynamic_config.py
index 5a33f6f3c2..a4bee2e90b 100644
--- a/dts/tests/TestSuite_dynamic_config.py
+++ b/dts/tests/TestSuite_dynamic_config.py
@@ -88,7 +88,7 @@ def test_default_mode(self) -> None:
and sends two packets; one matching source MAC address and one unknown.
Verifies that both are received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
is_promisc = testpmd.show_port_info(0).is_promiscuous_mode_enabled
self.verify(is_promisc, "Promiscuous mode was not enabled by default.")
testpmd.start()
@@ -106,7 +106,7 @@ def test_disable_promisc(self) -> None:
and sends two packets; one matching source MAC address and one unknown.
Verifies that only the matching address packet is received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
mac = testpmd.show_port_info(0).mac_address
self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
@@ -120,7 +120,7 @@ def test_disable_promisc_broadcast(self) -> None:
and sends two packets; one matching source MAC address and one broadcast.
Verifies that both packets are received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
mac = testpmd.show_port_info(0).mac_address
self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
@@ -134,7 +134,7 @@ def test_disable_promisc_multicast(self) -> None:
and sends two packets; one matching source MAC address and one multicast.
Verifies that the multicast packet is only received once allmulticast mode is enabled.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
testpmd.set_multicast_all(on=False)
# 01:00:5E:00:00:01 is the first of the multicast MAC range of addresses
diff --git a/dts/tests/TestSuite_dynamic_queue_conf.py b/dts/tests/TestSuite_dynamic_queue_conf.py
index e55716f545..344dd540eb 100644
--- a/dts/tests/TestSuite_dynamic_queue_conf.py
+++ b/dts/tests/TestSuite_dynamic_queue_conf.py
@@ -84,7 +84,6 @@ def wrap(self: "TestDynamicQueueConf", is_rx_testing: bool) -> None:
queues_to_config.add(random.randint(1, self.number_of_queues - 1))
unchanged_queues = set(range(self.number_of_queues)) - queues_to_config
with TestPmdShell(
- self.sut_node,
port_topology=PortTopology.chained,
rx_queues=self.number_of_queues,
tx_queues=self.number_of_queues,
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 031b94de4d..141f2bc4c9 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -23,6 +23,6 @@ def test_hello_world(self) -> None:
Verify:
The testpmd session throws no errors.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
self.log("Hello World!")
diff --git a/dts/tests/TestSuite_l2fwd.py b/dts/tests/TestSuite_l2fwd.py
index 0f6ff18907..0555d75ed8 100644
--- a/dts/tests/TestSuite_l2fwd.py
+++ b/dts/tests/TestSuite_l2fwd.py
@@ -7,6 +7,7 @@
The forwarding test is performed with several packets being sent at once.
"""
+from framework.context import filter_cores
from framework.params.testpmd import EthPeer, SimpleForwardingModes
from framework.remote_session.testpmd_shell import TestPmdShell
from framework.test_suite import TestSuite, func_test
@@ -33,6 +34,7 @@ def set_up_suite(self) -> None:
"""
self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
+ @filter_cores(LogicalCoreCount(cores_per_socket=4))
@func_test
def l2fwd_integrity(self) -> None:
"""Test the L2 forwarding integrity.
@@ -44,11 +46,12 @@ def l2fwd_integrity(self) -> None:
"""
queues = [1, 2, 4, 8]
+ self.topology.sut_ports[0]
+ self.topology.tg_ports[0]
+
with TestPmdShell(
- self.sut_node,
- lcore_filter_specifier=LogicalCoreCount(cores_per_socket=4),
forward_mode=SimpleForwardingModes.mac,
- eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
+ eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
disable_device_start=True,
) as shell:
for queues_num in queues:
diff --git a/dts/tests/TestSuite_mac_filter.py b/dts/tests/TestSuite_mac_filter.py
index 11e4b595c7..e6c55d3ec6 100644
--- a/dts/tests/TestSuite_mac_filter.py
+++ b/dts/tests/TestSuite_mac_filter.py
@@ -101,10 +101,10 @@ def test_add_remove_mac_addresses(self) -> None:
Remove the fake mac address from the PMD's address pool.
Send a packet with the fake mac address to the PMD. (Should not receive)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.set_promisc(0, enable=False)
testpmd.start()
- mac_address = self._sut_port_ingress.mac_address
+ mac_address = self.topology.sut_port_ingress.mac_address
# Send a packet with NIC default mac address
self.send_packet_and_verify(mac_address=mac_address, should_receive=True)
@@ -137,9 +137,9 @@ def test_invalid_address(self) -> None:
Determine the device's mac address pool size, and fill the pool with fake addresses.
Attempt to add another fake mac address, overloading the address pool. (Should fail)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
- mac_address = self._sut_port_ingress.mac_address
+ mac_address = self.topology.sut_port_ingress.mac_address
try:
testpmd.set_mac_addr(0, "00:00:00:00:00:00", add=True)
self.verify(False, "Invalid mac address added.")
@@ -191,7 +191,7 @@ def test_multicast_filter(self) -> None:
Remove the fake multicast address from the PMDs multicast address filter.
Send a packet with the fake multicast address to the PMD. (Should not receive)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
testpmd.set_promisc(0, enable=False)
multicast_address = "01:00:5E:00:00:00"
diff --git a/dts/tests/TestSuite_mtu.py b/dts/tests/TestSuite_mtu.py
index 3c96a36fc9..b445948091 100644
--- a/dts/tests/TestSuite_mtu.py
+++ b/dts/tests/TestSuite_mtu.py
@@ -51,8 +51,8 @@ def set_up_suite(self) -> None:
Set traffic generator MTU lengths to a size greater than scope of all
test cases.
"""
- self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(JUMBO_MTU + 200)
+ self.topology.tg_port_ingress.configure_mtu(JUMBO_MTU + 200)
def send_packet_and_verify(self, pkt_size: int, should_receive: bool) -> None:
"""Generate, send a packet, and assess its behavior based on a given packet size.
@@ -156,11 +156,7 @@ def test_runtime_mtu_updating_and_forwarding(self) -> None:
Verify that standard MTU packets forward, in addition to packets within the limits of
an MTU size set during runtime.
"""
- with TestPmdShell(
- self.sut_node,
- tx_offloads=0x8000,
- mbuf_size=[JUMBO_MTU + 200],
- ) as testpmd:
+ with TestPmdShell(tx_offloads=0x8000, mbuf_size=[JUMBO_MTU + 200]) as testpmd:
# Configure the new MTU.
# Start packet capturing.
@@ -198,7 +194,6 @@ def test_cli_mtu_forwarding_for_std_packets(self) -> None:
MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -227,7 +222,6 @@ def test_cli_jumbo_forwarding_for_jumbo_mtu(self) -> None:
Verify that all packets are forwarded after pre-runtime MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -256,7 +250,6 @@ def test_cli_mtu_std_packets_for_jumbo_mtu(self) -> None:
MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -274,5 +267,5 @@ def tear_down_suite(self) -> None:
Teardown:
Set the MTU size of the traffic generator back to the standard 1518 byte size.
"""
- self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(STANDARD_MTU)
+ self.topology.tg_port_ingress.configure_mtu(STANDARD_MTU)
diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py
index a8c111eea7..5e23f28bc6 100644
--- a/dts/tests/TestSuite_pmd_buffer_scatter.py
+++ b/dts/tests/TestSuite_pmd_buffer_scatter.py
@@ -58,8 +58,8 @@ def set_up_suite(self) -> None:
Increase the MTU of both ports on the traffic generator to 9000
to support larger packet sizes.
"""
- self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(9000)
+ self.topology.tg_port_ingress.configure_mtu(9000)
def scatter_pktgen_send_packet(self, pkt_size: int) -> list[Packet]:
"""Generate and send a packet to the SUT then capture what is forwarded back.
@@ -110,7 +110,6 @@ def pmd_scatter(self, mb_size: int, enable_offload: bool = False) -> None:
Start testpmd and run functional test with preset `mb_size`.
"""
with TestPmdShell(
- self.sut_node,
forward_mode=SimpleForwardingModes.mac,
mbcache=200,
mbuf_size=[mb_size],
@@ -147,5 +146,5 @@ def tear_down_suite(self) -> None:
Teardown:
Set the MTU of the tg_node back to a more standard size of 1500.
"""
- self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(1500)
+ self.topology.tg_port_ingress.configure_mtu(1500)
diff --git a/dts/tests/TestSuite_port_restart_config_persistency.py b/dts/tests/TestSuite_port_restart_config_persistency.py
index ad42c6c2e6..42ea221586 100644
--- a/dts/tests/TestSuite_port_restart_config_persistency.py
+++ b/dts/tests/TestSuite_port_restart_config_persistency.py
@@ -61,8 +61,8 @@ def port_configuration_persistence(self) -> None:
Verify:
The configuration persists after the port is restarted.
"""
- with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell(disable_device_start=True) as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_port_mtu(port_id=port_id, mtu=STANDARD_MTU, verify=True)
self.restart_port_and_verify(port_id, testpmd, "MTU")
@@ -90,8 +90,8 @@ def flow_ctrl_port_configuration_persistence(self) -> None:
Verify:
The configuration persists after the port is restarted.
"""
- with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell(disable_device_start=True) as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
flow_ctrl = TestPmdPortFlowCtrl(rx=True)
testpmd.set_flow_control(port=port_id, flow_ctrl=flow_ctrl)
self.restart_port_and_verify(port_id, testpmd, "flow_ctrl")
diff --git a/dts/tests/TestSuite_promisc_support.py b/dts/tests/TestSuite_promisc_support.py
index a3ea2461f0..445f6e1d69 100644
--- a/dts/tests/TestSuite_promisc_support.py
+++ b/dts/tests/TestSuite_promisc_support.py
@@ -38,10 +38,8 @@ def test_promisc_packets(self) -> None:
"""
packet = [Ether(dst=self.ALTERNATIVE_MAC_ADDRESS) / IP() / Raw(load=b"\x00" * 64)]
- with TestPmdShell(
- self.sut_node,
- ) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell() as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_promisc(port=port_id, enable=True, verify=True)
testpmd.start()
@@ -51,7 +49,7 @@ def test_promisc_packets(self) -> None:
testpmd.stop()
- for port_id in range(len(self.sut_node.ports)):
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_promisc(port=port_id, enable=False, verify=True)
testpmd.start()
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 7ed266dac0..8a5799c684 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -46,6 +46,7 @@ def set_up_suite(self) -> None:
Setup:
Set the build directory path and a list of NICs in the SUT node.
"""
+ self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
self.nics_in_node = self.sut_node.config.ports
@@ -104,7 +105,7 @@ def test_devices_listed_in_testpmd(self) -> None:
Test:
List all devices found in testpmd and verify the configured devices are among them.
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
dev_list = [str(x) for x in testpmd.get_devices()]
for nic in self.nics_in_node:
self.verify(
diff --git a/dts/tests/TestSuite_softnic.py b/dts/tests/TestSuite_softnic.py
index 07480db392..370fd6b419 100644
--- a/dts/tests/TestSuite_softnic.py
+++ b/dts/tests/TestSuite_softnic.py
@@ -32,6 +32,7 @@ def set_up_suite(self) -> None:
Setup:
Generate the random packets that will be sent and create the softnic config files.
"""
+ self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
self.cli_file = self.prepare_softnic_files()
@@ -105,9 +106,8 @@ def softnic(self) -> None:
"""
with TestPmdShell(
- self.sut_node,
vdevs=[VirtualDevice(f"net_softnic0,firmware={self.cli_file},cpu_id=1,conn_port=8086")],
- eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
+ eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
port_topology=None,
) as shell:
shell.start()
diff --git a/dts/tests/TestSuite_uni_pkt.py b/dts/tests/TestSuite_uni_pkt.py
index 0898187675..656a69b0f1 100644
--- a/dts/tests/TestSuite_uni_pkt.py
+++ b/dts/tests/TestSuite_uni_pkt.py
@@ -85,7 +85,7 @@ def test_l2_packet_detect(self) -> None:
mac_id = "00:00:00:00:00:01"
packet_list = [Ether(dst=mac_id, type=0x88F7) / Raw(), Ether(dst=mac_id) / ARP() / Raw()]
flag_list = [RtePTypes.L2_ETHER_TIMESYNC, RtePTypes.L2_ETHER_ARP]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -118,7 +118,7 @@ def test_l3_l4_packet_detect(self) -> None:
RtePTypes.L4_ICMP,
RtePTypes.L4_FRAG | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L2_ETHER,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -147,7 +147,7 @@ def test_ipv6_l4_packet_detect(self) -> None:
RtePTypes.L4_TCP,
RtePTypes.L3_IPV6_EXT_UNKNOWN,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -182,7 +182,7 @@ def test_l3_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_IP | RtePTypes.INNER_L4_ICMP,
RtePTypes.TUNNEL_IP | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -215,7 +215,7 @@ def test_gre_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_SCTP,
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -250,7 +250,7 @@ def test_nsh_packet_detect(self) -> None:
RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L4_SCTP,
RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV6_EXT_UNKNOWN | RtePTypes.L4_NONFRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -295,6 +295,6 @@ def test_vxlan_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.rx_vxlan(4789, 0, True)
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index c67520baef..d2a9e614d4 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -124,7 +124,7 @@ def test_vlan_receipt_no_stripping(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
self.send_vlan_packet_and_verify(True, strip=False, vlan_id=1)
@@ -137,7 +137,7 @@ def test_vlan_receipt_stripping(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.set_vlan_strip(port=0, enable=True)
testpmd.start()
@@ -150,7 +150,7 @@ def test_vlan_no_receipt(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
@@ -162,7 +162,7 @@ def test_vlan_header_insertion(self) -> None:
Test:
Create an interactive testpmd shell and verify a non-VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.mac)
testpmd.set_promisc(port=0, enable=False)
testpmd.stop_all_ports()
--
2.43.0
* [RFC PATCH 7/7] dts: revamp runtime internals
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (5 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 6/7] dts: add global runtime context Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-04 21:08 ` [RFC PATCH 0/7] dts: revamp framework Dean Marx
7 siblings, 0 replies; 9+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Enforce separation of concerns by isolating test runs in a new TestRun
class and its own module. This also means that any actions taken on the
nodes must be handled exclusively by the test run, for example the
creation and destruction of the traffic generator. TestSuiteWithCases is
now redundant, as the configuration can provide all the details about
the test run's own test suites. Any other runtime state which concerns
the test runs now belongs to their class.
Finally, as test run execution is isolated, all the runtime internals
are held in the new class. These internals have been completely reworked
into a finite state machine (FSM), making the different execution states
easier to use and understand, while rendering error handling less
repetitive and easier.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
doc/api/dts/framework.test_run.rst | 8 +
doc/api/dts/index.rst | 1 +
doc/guides/conf.py | 3 +-
dts/framework/exception.py | 33 +-
dts/framework/runner.py | 492 +---------------------
dts/framework/test_result.py | 143 +------
dts/framework/test_run.py | 443 +++++++++++++++++++
dts/framework/testbed_model/capability.py | 24 +-
dts/framework/testbed_model/tg_node.py | 6 +-
9 files changed, 524 insertions(+), 629 deletions(-)
create mode 100644 doc/api/dts/framework.test_run.rst
create mode 100644 dts/framework/test_run.py
diff --git a/doc/api/dts/framework.test_run.rst b/doc/api/dts/framework.test_run.rst
new file mode 100644
index 0000000000..8147320ed9
--- /dev/null
+++ b/doc/api/dts/framework.test_run.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+test_run - Test Run Execution
+===========================================================
+
+.. automodule:: framework.test_run
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index b211571430..c76725eb75 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -26,6 +26,7 @@ Modules
:maxdepth: 1
framework.runner
+ framework.test_run
framework.test_suite
framework.test_result
framework.settings
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index e7508ea1d5..9ccd7d0c84 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -59,7 +59,8 @@
# DTS API docs additional configuration
if environ.get('DTS_DOC_BUILD'):
- extensions = ['sphinx.ext.napoleon', 'sphinx.ext.autodoc']
+ extensions = ['sphinx.ext.napoleon', 'sphinx.ext.autodoc', 'sphinx.ext.graphviz']
+ graphviz_output_format = "svg"
# Pydantic models require autodoc_pydantic for the right formatting
try:
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index d967ede09b..47e3fac05c 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -205,28 +205,27 @@ class TestCaseVerifyError(DTSError):
severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
-class BlockingTestSuiteError(DTSError):
- """A failure in a blocking test suite."""
+class InternalError(DTSError):
+ """An internal error or bug has occurred in DTS."""
#:
- severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
- _suite_name: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.INTERNAL_ERR
- def __init__(self, suite_name: str) -> None:
- """Define the meaning of the first argument.
- Args:
- suite_name: The blocking test suite.
- """
- self._suite_name = suite_name
+class SkippedTestException(DTSError):
+ """An exception raised when a test suite or case has been skipped."""
- def __str__(self) -> str:
- """Add some context to the string representation."""
- return f"Blocking suite {self._suite_name} failed."
+ #:
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.NO_ERR
+ def __init__(self, reason: str) -> None:
+ """Constructor.
-class InternalError(DTSError):
- """An internal error or bug has occurred in DTS."""
+ Args:
+ reason: The reason for the test being skipped.
+ """
+ self._reason = reason
- #:
- severity: ClassVar[ErrorSeverity] = ErrorSeverity.INTERNAL_ERR
+ def __str__(self) -> str:
+ """Stringify the exception."""
+ return self._reason
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 60a885d8e6..8f5bf716a3 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -19,14 +19,12 @@
"""
import os
-import random
import sys
-from pathlib import Path
-from types import MethodType
-from typing import Iterable
from framework.config.common import ValidationContext
-from framework.testbed_model.capability import Capability, get_supported_capabilities
+from framework.status import POST_RUN
+from framework.test_run import TestRun
+from framework.testbed_model.node import Node
from framework.testbed_model.sut_node import SutNode
from framework.testbed_model.tg_node import TGNode
@@ -38,23 +36,12 @@
SutNodeConfiguration,
TGNodeConfiguration,
)
-from .config.test_run import (
- TestRunConfiguration,
- TestSuiteConfig,
-)
-from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError
-from .logger import DTSLogger, DtsStage, get_dts_logger
+from .logger import DTSLogger, get_dts_logger
from .settings import SETTINGS
from .test_result import (
DTSResult,
Result,
- TestCaseResult,
- TestRunResult,
- TestSuiteResult,
- TestSuiteWithCases,
)
-from .test_suite import TestCase, TestSuite
-from .testbed_model.topology import PortLink, Topology
class DTSRunner:
@@ -79,10 +66,6 @@ class DTSRunner:
_configuration: Configuration
_logger: DTSLogger
_result: DTSResult
- _test_suite_class_prefix: str
- _test_suite_module_prefix: str
- _func_test_case_regex: str
- _perf_test_case_regex: str
def __init__(self):
"""Initialize the instance with configuration, logger, result and string constants."""
@@ -92,10 +75,6 @@ def __init__(self):
os.makedirs(SETTINGS.output_dir)
self._logger.add_dts_root_logger_handlers(SETTINGS.verbose, SETTINGS.output_dir)
self._result = DTSResult(SETTINGS.output_dir, self._logger)
- self._test_suite_class_prefix = "Test"
- self._test_suite_module_prefix = "tests.TestSuite_"
- self._func_test_case_regex = r"test_(?!perf_)"
- self._perf_test_case_regex = r"test_perf_"
def run(self) -> None:
"""Run all test runs from the test run configuration.
@@ -131,45 +110,28 @@ def run(self) -> None:
the :option:`--test-suite` command line argument or
the :envvar:`DTS_TESTCASES` environment variable.
"""
- sut_nodes: dict[str, SutNode] = {}
- tg_nodes: dict[str, TGNode] = {}
+ nodes: list[Node] = []
try:
# check the python version of the server that runs dts
self._check_dts_python_version()
self._result.update_setup(Result.PASS)
+ for node_config in self._configuration.nodes:
+ node: Node
+
+ match node_config:
+ case SutNodeConfiguration():
+ node = SutNode(node_config)
+ case TGNodeConfiguration():
+ node = TGNode(node_config)
+
+ nodes.append(node)
+
# for all test run sections
- for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
- test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
- self._logger.set_stage(DtsStage.test_run_setup)
- self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
- self._init_random_seed(test_run_config)
+ for test_run_config in self._configuration.test_runs:
test_run_result = self._result.add_test_run(test_run_config)
- # we don't want to modify the original config, so create a copy
- test_run_test_suites = test_run_config.test_suites
- if not test_run_config.skip_smoke_tests:
- test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
- try:
- test_suites_with_cases = self._get_test_suites_with_cases(
- test_run_test_suites, test_run_config.func, test_run_config.perf
- )
- test_run_result.test_suites_with_cases = test_suites_with_cases
- except Exception as e:
- self._logger.exception(
- f"Invalid test suite configuration found: " f"{test_run_test_suites}."
- )
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- self._connect_nodes_and_run_test_run(
- sut_nodes,
- tg_nodes,
- sut_node_config,
- tg_node_config,
- test_run_config,
- test_run_result,
- test_suites_with_cases,
- )
+ test_run = TestRun(test_run_config, nodes, test_run_result)
+ test_run.spin()
except Exception as e:
self._logger.exception("An unexpected error has occurred.")
@@ -178,8 +140,8 @@ def run(self) -> None:
finally:
try:
- self._logger.set_stage(DtsStage.post_run)
- for node in (sut_nodes | tg_nodes).values():
+ self._logger.set_stage(POST_RUN)
+ for node in nodes:
node.close()
self._result.update_teardown(Result.PASS)
except Exception as e:
@@ -205,412 +167,6 @@ def _check_dts_python_version(self) -> None:
)
self._logger.warning("Please use Python >= 3.10 instead.")
- def _get_test_suites_with_cases(
- self,
- test_suite_configs: list[TestSuiteConfig],
- func: bool,
- perf: bool,
- ) -> list[TestSuiteWithCases]:
- """Get test suites with selected cases.
-
- The test suites with test cases defined in the user configuration are selected
- and the corresponding functions and classes are gathered.
-
- Args:
- test_suite_configs: Test suite configurations.
- func: Whether to include functional test cases in the final list.
- perf: Whether to include performance test cases in the final list.
-
- Returns:
- The test suites, each with test cases.
- """
- test_suites_with_cases = []
-
- for test_suite_config in test_suite_configs:
- test_suite_class = test_suite_config.test_suite_spec.class_obj
- test_cases: list[type[TestCase]] = []
- func_test_cases, perf_test_cases = test_suite_class.filter_test_cases(
- test_suite_config.test_cases_names
- )
- if func:
- test_cases.extend(func_test_cases)
- if perf:
- test_cases.extend(perf_test_cases)
-
- test_suites_with_cases.append(
- TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
- )
- return test_suites_with_cases
-
- def _connect_nodes_and_run_test_run(
- self,
- sut_nodes: dict[str, SutNode],
- tg_nodes: dict[str, TGNode],
- sut_node_config: SutNodeConfiguration,
- tg_node_config: TGNodeConfiguration,
- test_run_config: TestRunConfiguration,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Connect nodes, then continue to run the given test run.
-
- Connect the :class:`SutNode` and the :class:`TGNode` of this `test_run_config`.
- If either has already been connected, it's going to be in either `sut_nodes` or `tg_nodes`,
- respectively.
- If not, connect and add the node to the respective `sut_nodes` or `tg_nodes` :class:`dict`.
-
- Args:
- sut_nodes: A dictionary storing connected/to be connected SUT nodes.
- tg_nodes: A dictionary storing connected/to be connected TG nodes.
- sut_node_config: The test run's SUT node configuration.
- tg_node_config: The test run's TG node configuration.
- test_run_config: A test run configuration.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
- """
- sut_node = sut_nodes.get(sut_node_config.name)
- tg_node = tg_nodes.get(tg_node_config.name)
-
- try:
- if not sut_node:
- sut_node = SutNode(sut_node_config)
- sut_nodes[sut_node.name] = sut_node
- if not tg_node:
- tg_node = TGNode(tg_node_config)
- tg_nodes[tg_node.name] = tg_node
- except Exception as e:
- failed_node = test_run_config.system_under_test_node
- if sut_node:
- failed_node = test_run_config.traffic_generator_node
- self._logger.exception(f"The Creation of node {failed_node} failed.")
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- self._run_test_run(
- sut_node,
- tg_node,
- test_run_config,
- test_run_result,
- test_suites_with_cases,
- )
-
- def _run_test_run(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- test_run_config: TestRunConfiguration,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Run the given test run.
-
- This involves running the test run setup as well as running all test suites
- in the given test run. After that, the test run teardown is run.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- test_run_config: A test run configuration.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
-
- Raises:
- ConfigurationError: If the DPDK sources or build is not set up from config or settings.
- """
- self._logger.info(f"Running test run with SUT '{test_run_config.system_under_test_node}'.")
- test_run_result.ports = sut_node.ports
- test_run_result.sut_info = sut_node.node_info
- try:
- dpdk_build_config = test_run_config.dpdk_config
- sut_node.set_up_test_run(test_run_config, dpdk_build_config)
- test_run_result.dpdk_build_info = sut_node.get_dpdk_build_info()
- tg_node.set_up_test_run(test_run_config, dpdk_build_config)
- test_run_result.update_setup(Result.PASS)
- except Exception as e:
- self._logger.exception("Test run setup failed.")
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- topology = Topology(
- PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
- for link in test_run_config.port_topology
- )
- self._run_test_suites(
- sut_node, tg_node, topology, test_run_result, test_suites_with_cases
- )
-
- finally:
- try:
- self._logger.set_stage(DtsStage.test_run_teardown)
- sut_node.tear_down_test_run()
- tg_node.tear_down_test_run()
- test_run_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception("Test run teardown failed.")
- test_run_result.update_teardown(Result.FAIL, e)
-
- def _get_supported_capabilities(
- self,
- sut_node: SutNode,
- topology_config: Topology,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> set[Capability]:
- capabilities_to_check = set()
- for test_suite_with_cases in test_suites_with_cases:
- capabilities_to_check.update(test_suite_with_cases.required_capabilities)
-
- self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
-
- return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
-
- def _run_test_suites(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Run `test_suites_with_cases` with the current test run.
-
- The method assumes the DPDK we're testing has already been built on the SUT node.
-
- Before running any suites, the method determines whether they should be skipped
- by inspecting any required capabilities the test suite needs and comparing those
- to capabilities supported by the tested environment. If all capabilities are supported,
- the suite is run. If all test cases in a test suite would be skipped, the whole test suite
- is skipped (the setup and teardown is not run).
-
- If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites
- in the current test run won't be executed.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- topology: The test run's port topology.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
- """
- end_dpdk_build = False
- supported_capabilities = self._get_supported_capabilities(
- sut_node, topology, test_suites_with_cases
- )
- for test_suite_with_cases in test_suites_with_cases:
- test_suite_with_cases.mark_skip_unsupported(supported_capabilities)
- test_suite_result = test_run_result.add_test_suite(test_suite_with_cases)
- try:
- if not test_suite_with_cases.skip:
- self._run_test_suite(
- sut_node,
- tg_node,
- topology,
- test_suite_result,
- test_suite_with_cases,
- )
- else:
- self._logger.info(
- f"Test suite execution SKIPPED: "
- f"'{test_suite_with_cases.test_suite_class.__name__}'. Reason: "
- f"{test_suite_with_cases.test_suite_class.skip_reason}"
- )
- test_suite_result.update_setup(Result.SKIP)
- except BlockingTestSuiteError as e:
- self._logger.exception(
- f"An error occurred within {test_suite_with_cases.test_suite_class.__name__}. "
- "Skipping the rest of the test suites in this test run."
- )
- self._result.add_error(e)
- end_dpdk_build = True
- # if a blocking test failed and we need to bail out of suite executions
- if end_dpdk_build:
- break
-
- def _run_test_suite(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- test_suite_result: TestSuiteResult,
- test_suite_with_cases: TestSuiteWithCases,
- ) -> None:
- """Set up, execute and tear down `test_suite_with_cases`.
-
- The method assumes the DPDK we're testing has already been built on the SUT node.
-
- Test suite execution consists of running the discovered test cases.
- A test case run consists of setup, execution and teardown of said test case.
-
- Record the setup and the teardown and handle failures.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- topology: The port topology of the nodes.
- test_suite_result: The test suite level result object associated
- with the current test suite.
- test_suite_with_cases: The test suite with test cases to run.
-
- Raises:
- BlockingTestSuiteError: If a blocking test suite fails.
- """
- test_suite_name = test_suite_with_cases.test_suite_class.__name__
- self._logger.set_stage(
- DtsStage.test_suite_setup, Path(SETTINGS.output_dir, test_suite_name)
- )
- test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node, topology)
- try:
- self._logger.info(f"Starting test suite setup: {test_suite_name}")
- test_suite.set_up_suite()
- test_suite_result.update_setup(Result.PASS)
- self._logger.info(f"Test suite setup successful: {test_suite_name}")
- except Exception as e:
- self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- test_suite_result.update_setup(Result.ERROR, e)
-
- else:
- self._execute_test_suite(
- test_suite,
- test_suite_with_cases.test_cases,
- test_suite_result,
- )
- finally:
- try:
- self._logger.set_stage(DtsStage.test_suite_teardown)
- test_suite.tear_down_suite()
- sut_node.kill_cleanup_dpdk_apps()
- test_suite_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
- self._logger.warning(
- f"Test suite '{test_suite_name}' teardown failed, "
- "the next test suite may be affected."
- )
- test_suite_result.update_setup(Result.ERROR, e)
- if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking:
- raise BlockingTestSuiteError(test_suite_name)
-
- def _execute_test_suite(
- self,
- test_suite: TestSuite,
- test_cases: Iterable[type[TestCase]],
- test_suite_result: TestSuiteResult,
- ) -> None:
- """Execute all `test_cases` in `test_suite`.
-
- If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment
- variable is set, a failed test case will be executed again until it passes or
- fails that many additional times after the first failure.
-
- Args:
- test_suite: The test suite object.
- test_cases: The list of test case functions.
- test_suite_result: The test suite level result object associated
- with the current test suite.
- """
- self._logger.set_stage(DtsStage.test_suite)
- for test_case in test_cases:
- test_case_name = test_case.__name__
- test_case_result = test_suite_result.add_test_case(test_case_name)
- all_attempts = SETTINGS.re_run + 1
- attempt_nr = 1
- if not test_case.skip:
- self._run_test_case(test_suite, test_case, test_case_result)
- while not test_case_result and attempt_nr < all_attempts:
- attempt_nr += 1
- self._logger.info(
- f"Re-running FAILED test case '{test_case_name}'. "
- f"Attempt number {attempt_nr} out of {all_attempts}."
- )
- self._run_test_case(test_suite, test_case, test_case_result)
- else:
- self._logger.info(
- f"Test case execution SKIPPED: {test_case_name}. Reason: "
- f"{test_case.skip_reason}"
- )
- test_case_result.update_setup(Result.SKIP)
-
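The re-run loop above (run once, then retry while the result is falsy and attempts remain) is easy to get off by one. A small standalone sketch of the same arithmetic, with a stub `run_once` callable standing in for `_run_test_case`:

```python
def run_with_retries(run_once, re_run: int) -> tuple[bool, int]:
    """Run `run_once` up to `re_run` extra times after a first failure.

    Sketch of the loop in `_execute_test_suite`; `run_once` returns True on pass.
    """
    all_attempts = re_run + 1  # the first attempt plus `re_run` re-runs
    attempt_nr = 1
    passed = run_once()
    while not passed and attempt_nr < all_attempts:
        attempt_nr += 1
        passed = run_once()
    return passed, attempt_nr


# A hypothetical case that fails twice, then passes on the third attempt:
outcomes = iter([False, False, True])
passed, attempts = run_with_retries(lambda: next(outcomes), re_run=3)
```

With `re_run=3` the case is allowed four attempts in total; here it passes on the third.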
- def _run_test_case(
- self,
- test_suite: TestSuite,
- test_case: type[TestCase],
- test_case_result: TestCaseResult,
- ) -> None:
-        """Set up, execute and tear down `test_case` from `test_suite`.
-
- Record the result of the setup and the teardown and handle failures.
-
- Args:
- test_suite: The test suite object.
- test_case: The test case function.
- test_case_result: The test case level result object associated
- with the current test case.
- """
- test_case_name = test_case.__name__
-
- try:
- # run set_up function for each case
- test_suite.set_up_test_case()
- test_case_result.update_setup(Result.PASS)
- except SSHTimeoutError as e:
- self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- test_case_result.update_setup(Result.FAIL, e)
- except Exception as e:
- self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- test_case_result.update_setup(Result.ERROR, e)
-
- else:
- # run test case if setup was successful
- self._execute_test_case(test_suite, test_case, test_case_result)
-
- finally:
- try:
- test_suite.tear_down_test_case()
- test_case_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
- self._logger.warning(
- f"Test case '{test_case_name}' teardown failed, "
- f"the next test case may be affected."
- )
- test_case_result.update_teardown(Result.ERROR, e)
- test_case_result.update(Result.ERROR)
-
- def _execute_test_case(
- self,
- test_suite: TestSuite,
- test_case: type[TestCase],
- test_case_result: TestCaseResult,
- ) -> None:
-        """Execute `test_case` from `test_suite`, record the result and handle failures.
-
- Args:
- test_suite: The test suite object.
- test_case: The test case function.
- test_case_result: The test case level result object associated
- with the current test case.
-
- Raises:
- KeyboardInterrupt: If DTS has been interrupted by the user.
- """
- test_case_name = test_case.__name__
- try:
- self._logger.info(f"Starting test case execution: {test_case_name}")
- # Explicit method binding is required, otherwise mypy complains
- MethodType(test_case, test_suite)()
- test_case_result.update(Result.PASS)
- self._logger.info(f"Test case execution PASSED: {test_case_name}")
-
- except TestCaseVerifyError as e:
- self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- test_case_result.update(Result.FAIL, e)
- except Exception as e:
- self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- test_case_result.update(Result.ERROR, e)
- except KeyboardInterrupt:
- self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
- test_case_result.update(Result.SKIP)
- raise KeyboardInterrupt("Stop DTS")
-
def _exit_dts(self) -> None:
"""Process all errors and exit with the proper exit code."""
self._result.process()
@@ -619,9 +175,3 @@ def _exit_dts(self) -> None:
self._logger.info("DTS execution has ended.")
sys.exit(self._result.get_return_code())
-
- def _init_random_seed(self, conf: TestRunConfiguration) -> None:
- """Initialize the random seed to use for the test run."""
- seed = conf.random_seed or random.randrange(0xFFFF_FFFF)
- self._logger.info(f"Initializing test run with random seed {seed}.")
- random.seed(seed)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 1acb526b64..a59bac71bb 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -25,98 +25,18 @@
import json
from collections.abc import MutableSequence
-from dataclasses import asdict, dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Any, Callable, TypedDict, cast
+from typing import Any, Callable, TypedDict
-from framework.config.node import PortConfig
-from framework.testbed_model.capability import Capability
-
-from .config.test_run import TestRunConfiguration, TestSuiteConfig
+from .config.test_run import TestRunConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLogger
-from .test_suite import TestCase, TestSuite
from .testbed_model.os_session import OSSessionInfo
from .testbed_model.port import Port
from .testbed_model.sut_node import DPDKBuildInfo
-@dataclass(slots=True, frozen=True)
-class TestSuiteWithCases:
- """A test suite class with test case methods.
-
-    An auxiliary class holding a test suite class with test case methods. The intended use of this
- class is to hold a subset of test cases (which could be all test cases) because we don't have
- all the data to instantiate the class at the point of inspection. The knowledge of this subset
- is needed in case an error occurs before the class is instantiated and we need to record
- which test cases were blocked by the error.
-
- Attributes:
- test_suite_class: The test suite class.
- test_cases: The test case methods.
- required_capabilities: The combined required capabilities of both the test suite
- and the subset of test cases.
- """
-
- test_suite_class: type[TestSuite]
- test_cases: list[type[TestCase]]
- required_capabilities: set[Capability] = field(default_factory=set, init=False)
-
- def __post_init__(self):
- """Gather the required capabilities of the test suite and all test cases."""
- for test_object in [self.test_suite_class] + self.test_cases:
- self.required_capabilities.update(test_object.required_capabilities)
-
- def create_config(self) -> TestSuiteConfig:
- """Generate a :class:`TestSuiteConfig` from the stored test suite with test cases.
-
- Returns:
- The :class:`TestSuiteConfig` representation.
- """
- return TestSuiteConfig(
- test_suite=self.test_suite_class.__name__,
- test_cases=[test_case.__name__ for test_case in self.test_cases],
- )
-
- def mark_skip_unsupported(self, supported_capabilities: set[Capability]) -> None:
- """Mark the test suite and test cases to be skipped.
-
-        The mark is applied if the object to be skipped requires any capabilities and at least one
- them is not among `supported_capabilities`.
-
- Args:
- supported_capabilities: The supported capabilities.
- """
- for test_object in [self.test_suite_class, *self.test_cases]:
- capabilities_not_supported = test_object.required_capabilities - supported_capabilities
- if capabilities_not_supported:
- test_object.skip = True
- capability_str = (
- "capability" if len(capabilities_not_supported) == 1 else "capabilities"
- )
- test_object.skip_reason = (
- f"Required {capability_str} '{capabilities_not_supported}' not found."
- )
- if not self.test_suite_class.skip:
- if all(test_case.skip for test_case in self.test_cases):
- self.test_suite_class.skip = True
-
- self.test_suite_class.skip_reason = (
- "All test cases are marked to be skipped with reasons: "
- f"{' '.join(test_case.skip_reason for test_case in self.test_cases)}"
- )
-
- @property
- def skip(self) -> bool:
- """Skip the test suite if all test cases or the suite itself are to be skipped.
-
- Returns:
- :data:`True` if the test suite should be skipped, :data:`False` otherwise.
- """
- return all(test_case.skip for test_case in self.test_cases) or self.test_suite_class.skip
-
-
class Result(Enum):
"""The possible states that a setup, a teardown or a test case may end up in."""
@@ -463,7 +383,6 @@ class TestRunResult(BaseResult):
"""
_config: TestRunConfiguration
- _test_suites_with_cases: list[TestSuiteWithCases]
_ports: list[Port]
_sut_info: OSSessionInfo | None
_dpdk_build_info: DPDKBuildInfo | None
@@ -476,49 +395,23 @@ def __init__(self, test_run_config: TestRunConfiguration):
"""
super().__init__()
self._config = test_run_config
- self._test_suites_with_cases = []
self._ports = []
self._sut_info = None
self._dpdk_build_info = None
- def add_test_suite(
- self,
- test_suite_with_cases: TestSuiteWithCases,
- ) -> "TestSuiteResult":
+ def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult":
"""Add and return the child result (test suite).
Args:
- test_suite_with_cases: The test suite with test cases.
+ test_suite_name: The test suite name.
Returns:
The test suite's result.
"""
- result = TestSuiteResult(test_suite_with_cases)
+ result = TestSuiteResult(test_suite_name)
self.child_results.append(result)
return result
- @property
- def test_suites_with_cases(self) -> list[TestSuiteWithCases]:
- """The test suites with test cases to be executed in this test run.
-
- The test suites can only be assigned once.
-
- Returns:
- The list of test suites with test cases. If an error occurs between
- the initialization of :class:`TestRunResult` and assigning test cases to the instance,
- return an empty list, representing that we don't know what to execute.
- """
- return self._test_suites_with_cases
-
- @test_suites_with_cases.setter
- def test_suites_with_cases(self, test_suites_with_cases: list[TestSuiteWithCases]) -> None:
- if self._test_suites_with_cases:
- raise ValueError(
- "Attempted to assign test suites to a test run result "
- "which already has test suites."
- )
- self._test_suites_with_cases = test_suites_with_cases
-
@property
def ports(self) -> list[Port]:
"""Get the list of ports associated with this test run."""
@@ -602,24 +495,14 @@ def to_dict(self) -> TestRunResultDict:
compiler_version = self.dpdk_build_info.compiler_version
dpdk_version = self.dpdk_build_info.dpdk_version
- ports = [asdict(port) for port in self.ports]
- for port in ports:
- port["config"] = cast(PortConfig, port["config"]).model_dump()
-
return {
"compiler_version": compiler_version,
"dpdk_version": dpdk_version,
- "ports": ports,
+ "ports": [port.to_dict() for port in self.ports],
"test_suites": [child.to_dict() for child in self.child_results],
"summary": results | self.generate_pass_rate_dict(results),
}
- def _mark_results(self, result) -> None:
- """Mark the test suite results as `result`."""
- for test_suite_with_cases in self._test_suites_with_cases:
- child_result = self.add_test_suite(test_suite_with_cases)
- child_result.update_setup(result)
-
class TestSuiteResult(BaseResult):
"""The test suite specific result.
@@ -631,18 +514,16 @@ class TestSuiteResult(BaseResult):
"""
test_suite_name: str
- _test_suite_with_cases: TestSuiteWithCases
_child_configs: list[str]
- def __init__(self, test_suite_with_cases: TestSuiteWithCases):
+ def __init__(self, test_suite_name: str):
"""Extend the constructor with test suite's config.
Args:
- test_suite_with_cases: The test suite with test cases.
+ test_suite_name: The test suite name.
"""
super().__init__()
- self.test_suite_name = test_suite_with_cases.test_suite_class.__name__
- self._test_suite_with_cases = test_suite_with_cases
+ self.test_suite_name = test_suite_name
def add_test_case(self, test_case_name: str) -> "TestCaseResult":
"""Add and return the child result (test case).
@@ -667,12 +548,6 @@ def to_dict(self) -> TestSuiteResultDict:
"test_cases": [child.to_dict() for child in self.child_results],
}
- def _mark_results(self, result) -> None:
- """Mark the test case results as `result`."""
- for test_case_method in self._test_suite_with_cases.test_cases:
- child_result = self.add_test_case(test_case_method.__name__)
- child_result.update_setup(result)
-
class TestCaseResult(BaseResult, FixtureResult):
r"""The test case specific result.
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
new file mode 100644
index 0000000000..add0a62eb9
--- /dev/null
+++ b/dts/framework/test_run.py
@@ -0,0 +1,443 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+r"""Test run module.
+
+The test run is implemented as a finite state machine which maintains a globally accessible
+:class:`~.context.Context` and holds all the execution stages and state as defined in
+:class:`~.status.State`.
+
+The state machine is implemented in :meth:`~TestRun._runner` which can be run by calling
+:meth:`~TestRun.spin`.
+
+The following graph represents all the states and steps of the state machine. Each node represents a
+state labelled with the initials, e.g. ``TR.B`` is represented by :attr:`~.status.Stage.TEST_RUN`
+and :attr:`~.status.InternalState.BEGIN`. States represented by a double green circle are looping
+states. These states are only exited through:
+
+ * **next** which progresses to the next test suite/case.
+ * **end** which indicates that no more test suites/cases are available and
+ the loop is terminated.
+
+Red dashed links represent the path taken when an exception is
+raised in the origin state. If a state does not have one, then the execution progresses as usual.
+When :class:`~.exception.InternalError` is raised in any state, the state machine execution is
+immediately terminated.
+Orange dashed links represent exceptional conditions. Test suites and cases can be ``blocked`` or
+``skipped`` in the following conditions:
+
+ * If a *blocking* test suite fails, the ``blocked`` flag is raised.
+ * If the user sends a ``SIGINT`` signal, the ``blocked`` flag is raised.
+ * If a test suite and/or test case requires a capability unsupported by the test run, then this
+ is ``skipped`` and the state restarts from the beginning.
+
+Finally, test cases **retry** when they fail and DTS is configured to re-run them.
+
+.. digraph:: test_run_fsm
+
+ bgcolor=transparent
+ nodesep=0.5
+ ranksep=0.3
+
+ node [fontname="sans-serif" fixedsize="true" width="0.7"]
+ edge [fontname="monospace" color="gray30" fontsize=12]
+ node [shape="circle"] "TR.S" "TR.T" "TS.S" "TS.T" "TC.S" "TC.T"
+
+ node [shape="doublecircle" style="bold" color="darkgreen"] "TR.R" "TS.R" "TC.R"
+
+ node [shape="box" style="filled" color="gray90"] "TR.B" "TR.E"
+ node [style="solid"] "TS.E" "TC.E"
+
+ node [shape="plaintext" fontname="monospace" fontsize=12 fixedsize="false"] "exit"
+
+ "TR.B" -> "TR.S" -> "TR.R"
+ "TR.R":e -> "TR.T":w [taillabel="end" labeldistance=1.5 labelangle=45]
+ "TR.T" -> "TR.E"
+ "TR.E" -> "exit" [style="solid" color="gray30"]
+
+ "TR.R" -> "TS.S" [headlabel="next" labeldistance=3 labelangle=320]
+ "TS.S" -> "TS.R"
+ "TS.R" -> "TS.T" [label="end"]
+ "TS.T" -> "TS.E" -> "TR.R"
+
+ "TS.R" -> "TC.S" [headlabel="next" labeldistance=3 labelangle=320]
+ "TC.S" -> "TC.R" -> "TC.T" -> "TC.E" -> "TS.R":se
+
+
+ edge [fontcolor="orange", color="orange" style="dashed"]
+ "TR.R":sw -> "TS.R":nw [taillabel="next\n(blocked)" labeldistance=13]
+ "TS.R":ne -> "TR.R" [taillabel="end\n(blocked)" labeldistance=7.5 labelangle=345]
+ "TR.R":w -> "TR.R":nw [headlabel="next\n(skipped)" labeldistance=4]
+ "TS.R":e -> "TS.R":e [taillabel="next\n(blocked)\n(skipped)" labelangle=325 labeldistance=7.5]
+ "TC.R":e -> "TC.R":e [taillabel="retry" labelangle=5 labeldistance=2.5]
+
+ edge [fontcolor="crimson" color="crimson"]
+ "TR.S" -> "TR.T"
+ "TS.S":w -> "TS.T":n
+ "TC.S" -> "TC.T"
+
+ node [fontcolor="crimson" color="crimson"]
+ "InternalError" -> "exit":ew
+"""
+
+import random
+from collections import deque
+from collections.abc import Generator, Iterable
+from functools import cached_property
+from pathlib import Path
+from types import MethodType
+from typing import cast
+
+from framework.config.test_run import TestRunConfiguration
+from framework.context import Context, init_ctx
+from framework.exception import (
+ InternalError,
+ SkippedTestException,
+ TestCaseVerifyError,
+)
+from framework.logger import DTSLogger, get_dts_logger
+from framework.settings import SETTINGS
+from framework.status import InternalState, Stage, State
+from framework.test_result import BaseResult, Result, TestCaseResult, TestRunResult, TestSuiteResult
+from framework.test_suite import TestCase, TestSuite
+from framework.testbed_model.capability import (
+ Capability,
+ get_supported_capabilities,
+ test_if_supported,
+)
+from framework.testbed_model.node import Node
+from framework.testbed_model.sut_node import SutNode
+from framework.testbed_model.tg_node import TGNode
+from framework.testbed_model.topology import PortLink, Topology
+
+TestScenario = tuple[type[TestSuite], deque[type[TestCase]]]
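The `TestScenario` alias pairs a suite with a deque of its selected cases; the runner consumes both front-to-back with `popleft()` until an `IndexError` signals exhaustion. A toy sketch of that consumption pattern, with string names standing in for the suite and case classes:

```python
from collections import deque

# String names stand in for type[TestSuite] / type[TestCase].
Scenario = tuple[str, deque[str]]

selected: list[Scenario] = [
    ("TestSmoke", deque(["test_ping", "test_link"])),
    ("TestMtu", deque(["test_jumbo"])),
]

remaining = deque(selected)
executed: list[tuple[str, str]] = []
try:
    while True:
        suite, cases = remaining.popleft()  # IndexError when no suites remain
        while cases:
            executed.append((suite, cases.popleft()))
except IndexError:
    pass  # no more test suites: analogous to the TEST_RUN RUN -> TEARDOWN step
```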
+
+
+class TestRun:
+ """Spins a test run."""
+
+ config: TestRunConfiguration
+ logger: DTSLogger
+
+ ctx: Context
+ result: TestRunResult
+ selected_tests: list[TestScenario]
+
+ _state: State
+ _remaining_tests: deque[TestScenario]
+ _max_retries: int
+
+ def __init__(self, config: TestRunConfiguration, nodes: Iterable[Node], result: TestRunResult):
+ """Test run constructor."""
+ self.config = config
+ self.logger = get_dts_logger()
+
+ sut_node = next(n for n in nodes if n.name == config.system_under_test_node)
+ sut_node = cast(SutNode, sut_node) # Config validation must render this valid.
+
+ tg_node = next(n for n in nodes if n.name == config.traffic_generator_node)
+ tg_node = cast(TGNode, tg_node) # Config validation must render this valid.
+
+ topology = Topology.from_port_links(
+ PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
+ for link in self.config.port_topology
+ )
+
+ self.ctx = Context(sut_node, tg_node, topology)
+ init_ctx(self.ctx)
+
+ self.result = result
+ self.selected_tests = list(self.config.filter_tests())
+
+ self._state = State(Stage.TEST_RUN, InternalState.BEGIN)
+ self._max_retries = SETTINGS.re_run
+
+ @cached_property
+ def required_capabilities(self) -> set[Capability]:
+        """The capabilities required to run this test run in its entirety."""
+ caps = set()
+
+ for test_suite, test_cases in self.selected_tests:
+ caps.update(test_suite.required_capabilities)
+ for test_case in test_cases:
+ caps.update(test_case.required_capabilities)
+
+ return caps
+
+ def spin(self):
+ """Spin the internal state machine that executes the test run."""
+ self.logger.info(f"Running test run with SUT '{self.ctx.sut_node.name}'.")
+
+ runner = self._runner()
+ while next_state := next(runner, False):
+ previous_state = self._state
+ stage, internal_state = next_state
+ self._state = State(stage, internal_state)
+ self.logger.debug(f"FSM - moving from '{previous_state}' to '{self._state}'")
+
+ def _runner(self) -> Generator[tuple[Stage, InternalState], None, None]: # noqa: C901
+ """Process the current state.
+
+ Yields:
+ The next state.
+
+ Raises:
+ InternalError: If the test run has entered an illegal state or a critical error has
+ occurred.
+ """
+ running = True
+ blocked = False
+
+ remaining_attempts: int
+ remaining_test_cases: deque[type[TestCase]]
+ test_suite: TestSuite
+ test_suite_result: TestSuiteResult
+ test_case: type[TestCase]
+ test_case_result: TestCaseResult
+
+ while running:
+ state = self._state
+ try:
+ match state:
+ case Stage.TEST_RUN, InternalState.BEGIN:
+ yield state[0], InternalState.SETUP
+
+ case Stage.TEST_RUN, InternalState.SETUP:
+ self.update_logger_stage()
+ self.setup()
+ yield state[0], InternalState.RUN
+
+ case Stage.TEST_RUN, InternalState.RUN:
+ self.update_logger_stage()
+ try:
+ test_suite_class, remaining_test_cases = self._remaining_tests.popleft()
+ test_suite = test_suite_class()
+ test_suite_result = self.result.add_test_suite(test_suite.name)
+
+ if blocked:
+ test_suite_result.update_setup(Result.BLOCK)
+ self.logger.error(f"Test suite '{test_suite.name}' was BLOCKED.")
+ # Continue to allow the rest to mark as blocked, no need to setup.
+ yield Stage.TEST_SUITE, InternalState.RUN
+ continue
+
+ test_if_supported(test_suite_class, self.supported_capabilities)
+ self.ctx.local.reset()
+ yield Stage.TEST_SUITE, InternalState.SETUP
+ except IndexError:
+ # No more test suites. We are done here.
+ yield state[0], InternalState.TEARDOWN
+
+ case Stage.TEST_SUITE, InternalState.SETUP:
+ self.update_logger_stage(test_suite.name)
+ test_suite.set_up_suite()
+
+ test_suite_result.update_setup(Result.PASS)
+ yield state[0], InternalState.RUN
+
+ case Stage.TEST_SUITE, InternalState.RUN:
+ if not blocked:
+ self.update_logger_stage(test_suite.name)
+ try:
+ test_case = remaining_test_cases.popleft()
+ test_case_result = test_suite_result.add_test_case(test_case.name)
+
+ if blocked:
+ test_case_result.update_setup(Result.BLOCK)
+ continue
+
+ test_if_supported(test_case, self.supported_capabilities)
+ yield Stage.TEST_CASE, InternalState.SETUP
+ except IndexError:
+ if blocked and test_suite_result.setup_result.result is Result.BLOCK:
+ # Skip teardown if the test case AND suite were blocked.
+ yield state[0], InternalState.END
+ else:
+ # No more test cases. We are done here.
+ yield state[0], InternalState.TEARDOWN
+
+ case Stage.TEST_CASE, InternalState.SETUP:
+ test_suite.set_up_test_case()
+ remaining_attempts = self._max_retries
+
+ test_case_result.update_setup(Result.PASS)
+ yield state[0], InternalState.RUN
+
+ case Stage.TEST_CASE, InternalState.RUN:
+ self.logger.info(f"Running test case '{test_case.name}'.")
+ run_test_case = MethodType(test_case, test_suite)
+ run_test_case()
+
+ test_case_result.update(Result.PASS)
+ self.logger.info(f"Test case '{test_case.name}' execution PASSED.")
+ yield state[0], InternalState.TEARDOWN
+
+ case Stage.TEST_CASE, InternalState.TEARDOWN:
+ test_suite.tear_down_test_case()
+
+ test_case_result.update_teardown(Result.PASS)
+ yield state[0], InternalState.END
+
+ case Stage.TEST_CASE, InternalState.END:
+ yield Stage.TEST_SUITE, InternalState.RUN
+
+ case Stage.TEST_SUITE, InternalState.TEARDOWN:
+ self.update_logger_stage(test_suite.name)
+ test_suite.tear_down_suite()
+ self.ctx.sut_node.kill_cleanup_dpdk_apps()
+
+ test_suite_result.update_teardown(Result.PASS)
+ yield state[0], InternalState.END
+
+ case Stage.TEST_SUITE, InternalState.END:
+ if test_suite_result.get_errors() and test_suite.is_blocking:
+ self.logger.error(
+                                f"An error occurred within the blocking suite {test_suite.name}. "
+ "The remaining test suites will be skipped."
+ )
+ blocked = True
+ yield Stage.TEST_RUN, InternalState.RUN
+
+ case Stage.TEST_RUN, InternalState.TEARDOWN:
+ self.update_logger_stage()
+ self.teardown()
+ yield Stage.TEST_RUN, InternalState.END
+
+ case Stage.TEST_RUN, InternalState.END:
+ running = False
+
+ case _:
+ raise InternalError("Illegal state entered. How did I get here?")
+
+ except TestCaseVerifyError as e:
+ self.logger.error(f"Test case '{test_case.name}' execution FAILED: {e}")
+
+ remaining_attempts -= 1
+ if remaining_attempts > 0:
+ self.logger.info(f"Re-attempting. {remaining_attempts} attempts left.")
+ else:
+ test_case_result.update(Result.FAIL, e)
+ yield state[0], InternalState.TEARDOWN
+
+ except SkippedTestException as e:
+ if state[0] is Stage.TEST_RUN:
+ who = "suite"
+ name = test_suite.name
+ result_handler: BaseResult = test_suite_result
+ else:
+ who = "case"
+ name = test_case.name
+ result_handler = test_case_result
+ self.logger.info(f"Test {who} '{name}' execution SKIPPED with reason: {e}")
+ result_handler.update_setup(Result.SKIP)
+
+ except InternalError as e:
+ self.logger.error(
+ "A critical error has occurred. Unrecoverable state reached, shutting down."
+ )
+ # TODO: Handle final test suite result!
+ raise e
+
+ except (KeyboardInterrupt, Exception) as e:
+ match state[0]:
+ case Stage.TEST_RUN:
+ stage_str = "run"
+ case Stage.TEST_SUITE:
+ stage_str = f"suite '{test_suite.name}'"
+ case Stage.TEST_CASE:
+ stage_str = f"case '{test_case.name}'"
+
+ match state[1]:
+ case InternalState.SETUP:
+ state_str = "setup"
+ next_state = InternalState.TEARDOWN
+ case InternalState.RUN:
+ state_str = "execution"
+ next_state = InternalState.TEARDOWN
+ case InternalState.TEARDOWN:
+ state_str = "teardown"
+ next_state = InternalState.END
+
+ if isinstance(e, KeyboardInterrupt):
+ msg = (
+ f"Test {stage_str} {state_str} INTERRUPTED by user! "
+ "Shutting down gracefully."
+ )
+ result = Result.BLOCK
+ ex: Exception | None = None
+ blocked = True
+ else:
+ msg = (
+ "An unexpected error has occurred "
+ f"while running test {stage_str} {state_str}."
+ )
+ result = Result.ERROR
+ ex = e
+
+ match state:
+ case Stage.TEST_RUN, InternalState.SETUP:
+ self.result.update_setup(result, ex)
+ case Stage.TEST_RUN, InternalState.TEARDOWN:
+ self.result.update_teardown(result, ex)
+ case Stage.TEST_SUITE, InternalState.SETUP:
+ test_suite_result.update_setup(result, ex)
+ case Stage.TEST_SUITE, InternalState.TEARDOWN:
+ test_suite_result.update_teardown(result, ex)
+ case Stage.TEST_CASE, InternalState.SETUP:
+ test_case_result.update_setup(result, ex)
+ case Stage.TEST_CASE, InternalState.RUN:
+ test_case_result.update(result, ex)
+ case Stage.TEST_CASE, InternalState.TEARDOWN:
+ test_case_result.update_teardown(result, ex)
+ case _:
+ if ex:
+ raise InternalError(
+                                        "An error was raised in an uncontrolled state."
+ ) from ex
+
+ self.logger.error(msg)
+
+ if ex:
+ self.logger.exception(ex)
+
+ if state[1] is InternalState.TEARDOWN:
+ self.logger.warning(
+                        "The environment may not have been cleaned up correctly. "
+ "The subsequent tests could be affected!"
+ )
+
+ yield state[0], next_state
+
+ def setup(self) -> None:
+        """Set up the test run."""
+ self.logger.info(f"Running on SUT node '{self.ctx.sut_node.name}'.")
+ self.init_random_seed()
+ self._remaining_tests = deque(self.selected_tests)
+
+ self.ctx.sut_node.set_up_test_run(self.config, self.ctx.topology.sut_ports)
+ self.ctx.tg_node.set_up_test_run(self.config, self.ctx.topology.tg_ports)
+
+ self.result.ports = self.ctx.topology.sut_ports + self.ctx.topology.tg_ports
+ self.result.sut_info = self.ctx.sut_node.node_info
+ self.result.dpdk_build_info = self.ctx.sut_node.get_dpdk_build_info()
+
+ self.logger.debug(f"Found capabilities to check: {self.required_capabilities}")
+ self.supported_capabilities = get_supported_capabilities(
+ self.ctx.sut_node, self.ctx.topology, self.required_capabilities
+ )
+
+ def teardown(self) -> None:
+        """Tear down the test run."""
+ self.ctx.sut_node.tear_down_test_run(self.ctx.topology.sut_ports)
+ self.ctx.tg_node.tear_down_test_run(self.ctx.topology.tg_ports)
+
+ def update_logger_stage(self, file_name: str | None = None) -> None:
+ """Update the current stage of the logger."""
+ log_file_path = Path(SETTINGS.output_dir, file_name) if file_name is not None else None
+ self.logger.set_stage(self._state, log_file_path)
+
+ def init_random_seed(self) -> None:
+ """Initialize the random seed to use for the test run."""
+ seed = self.config.random_seed or random.randrange(0xFFFF_FFFF)
+ self.logger.info(f"Initializing with random seed {seed}.")
+ random.seed(seed)
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 7b06ecd715..463fd69cd9 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2024 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""Testbed capabilities.
@@ -53,7 +54,7 @@ def test_scatter_mbuf_2048(self):
from typing_extensions import Self
-from framework.exception import ConfigurationError
+from framework.exception import ConfigurationError, SkippedTestException
from framework.logger import get_dts_logger
from framework.remote_session.testpmd_shell import (
NicCapability,
@@ -221,9 +222,7 @@ def get_supported_capabilities(
)
if cls.capabilities_to_check:
capabilities_to_check_map = cls._get_decorated_capabilities_map()
- with TestPmdShell(
- sut_node, privileged=True, disable_device_start=True
- ) as testpmd_shell:
+ with TestPmdShell() as testpmd_shell:
for (
conditional_capability_fn,
capabilities,
@@ -510,3 +509,20 @@ def get_supported_capabilities(
supported_capabilities.update(callback(sut_node, topology_config))
return supported_capabilities
+
+
+def test_if_supported(test: type[TestProtocol], supported_caps: set[Capability]) -> None:
+ """Test if the given test suite or test case is supported.
+
+ Args:
+ test: The test suite or case.
+ supported_caps: The capabilities that need to be checked against the test.
+
+ Raises:
+ SkippedTestException: If the test hasn't met the requirements.
+ """
+ unsupported_caps = test.required_capabilities - supported_caps
+ if unsupported_caps:
+ capability_str = "capabilities" if len(unsupported_caps) > 1 else "capability"
+ msg = f"Required {capability_str} '{unsupported_caps}' not found."
+ raise SkippedTestException(msg)
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 595836a664..290a3fbd74 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -37,9 +37,11 @@ class TGNode(Node):
must be a way to send traffic without that.
Attributes:
+ config: The traffic generator node configuration.
traffic_generator: The traffic generator running on the node.
"""
+ config: TGNodeConfiguration
traffic_generator: CapturingTrafficGenerator
def __init__(self, node_config: TGNodeConfiguration):
@@ -51,7 +53,6 @@ def __init__(self, node_config: TGNodeConfiguration):
node_config: The TG node's test run configuration.
"""
super().__init__(node_config)
- self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
self._logger.info(f"Created node: {self.name}")
def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
@@ -64,6 +65,7 @@ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable
"""
super().set_up_test_run(test_run_config, ports)
self.main_session.bring_up_link(ports)
+ self.traffic_generator = create_traffic_generator(self, self.config.traffic_generator)
def tear_down_test_run(self, ports: Iterable[Port]) -> None:
"""Extend the test run teardown with the teardown of the traffic generator.
@@ -72,6 +74,7 @@ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
ports: The ports to tear down for the test run.
"""
super().tear_down_test_run(ports)
+ self.traffic_generator.close()
def send_packets_and_capture(
self,
@@ -119,5 +122,4 @@ def close(self) -> None:
This extends the superclass method with TG cleanup.
"""
- self.traffic_generator.close()
super().close()
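(Aside, not part of the patch: the tg_node.py hunks above narrow the traffic generator's lifetime from the node's whole life to a single test run — created in set_up_test_run, released in tear_down_test_run — so its configuration can come from the test run rather than node construction. A minimal sketch of that lifecycle pattern; every name here is hypothetical, not the framework's API:)

```python
class TrafficGenerator:
    """Toy generator tracking whether it is running."""

    def __init__(self) -> None:
        self.running = True

    def close(self) -> None:
        self.running = False


class TGNode:
    traffic_generator: TrafficGenerator  # created lazily, per test run

    def set_up_test_run(self) -> None:
        # Create the generator only when a test run actually starts...
        self.traffic_generator = TrafficGenerator()

    def tear_down_test_run(self) -> None:
        # ...and release it when the run ends, not when the node closes.
        self.traffic_generator.close()


node = TGNode()
node.set_up_test_run()
assert node.traffic_generator.running
node.tear_down_test_run()
assert not node.traffic_generator.running
```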
--
2.43.0
* Re: [RFC PATCH 0/7] dts: revamp framework
From: Dean Marx @ 2025-02-04 21:08 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
Hi Luca,
I saw in the meeting minutes that the main purpose of this series is
to implement the separation of concern principle better in DTS. Just
wondering, what parts of the current framework did you think needed to
be separated and why? I'm taking an OOP class this semester and we
just started talking in depth about separation of concerns, so if you
wouldn't mind I'd be interested in your thought process. Working on a
review for the series currently as well, should be done relatively
soon.
Thanks,
Dean