* [RFC PATCH 0/7] dts: revamp framework
@ 2025-02-03 15:16 Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
` (8 more replies)
0 siblings, 9 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Hi there,
This series enables the topology configuration and implements it in the
framework. Moreover, it performs quite a few refactors and changes how
the test suites operate through the use of a Context. Finally, the
runtime internals are now isolated under the new TestRun and have been
reworked into a finite state machine for ease of handling.
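As a rough illustration of the idea (hypothetical names — the actual
state machine lives in the new dts/framework/test_run.py introduced by
the last patch), a finite-state-machine test run might be driven like
this:

```python
from abc import ABC, abstractmethod


class State(ABC):
    """One phase of a test run, e.g. setup, execution or teardown."""

    @abstractmethod
    def next(self) -> "State | None":
        """Perform this phase and return the next state, or None when done."""


class TestRun:
    """Drives the run by repeatedly advancing the current state."""

    def __init__(self, initial_state: State) -> None:
        self._state: State | None = initial_state

    def spin(self) -> None:
        # Each state decides its own successor, which keeps error
        # handling and transitions localised to the state classes.
        while self._state is not None:
            self._state = self._state.next()
```

This is only a sketch of the pattern; the real TestRun carries the
node, topology and result objects through its states.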
Mind that, unfortunately, some of these commits may be broken at
intermediate steps. Given the amount of work done in one go, it was
rather difficult to keep every commit individually working.
I am currently requesting comments on how we can improve this further.
In the meantime, I am going to remove the node discrimination (TG vs
SUT) from the nodes configuration and allow the test run to define the
TG and SUT configurations.
I understand this is a lot of changes, so please bear with me. I may
have missed some documentation changes, or neglected to add proper
documentation in some instances. Please do point everything out.
Best,
Luca
Luca Vizzarro (7):
dts: add port topology configuration
dts: isolate test specification to config
dts: revamp Topology model
dts: improve Port model
dts: add runtime status
dts: add global runtime context
dts: revamp runtime internals
doc/api/dts/framework.context.rst | 8 +
doc/api/dts/framework.status.rst | 8 +
doc/api/dts/framework.test_run.rst | 8 +
doc/api/dts/index.rst | 3 +
doc/guides/conf.py | 3 +-
dts/framework/config/__init__.py | 138 +++--
dts/framework/config/node.py | 25 +-
dts/framework/config/test_run.py | 144 +++++-
dts/framework/context.py | 107 ++++
dts/framework/exception.py | 33 +-
dts/framework/logger.py | 36 +-
dts/framework/remote_session/dpdk_shell.py | 53 +-
.../single_active_interactive_shell.py | 14 +-
dts/framework/remote_session/testpmd_shell.py | 27 +-
dts/framework/runner.py | 485 +-----------------
dts/framework/status.py | 64 +++
dts/framework/test_result.py | 136 +----
dts/framework/test_run.py | 443 ++++++++++++++++
dts/framework/test_suite.py | 73 ++-
dts/framework/testbed_model/capability.py | 42 +-
dts/framework/testbed_model/linux_session.py | 47 +-
dts/framework/testbed_model/node.py | 29 +-
dts/framework/testbed_model/os_session.py | 16 +-
dts/framework/testbed_model/port.py | 97 ++--
dts/framework/testbed_model/sut_node.py | 50 +-
dts/framework/testbed_model/tg_node.py | 30 +-
dts/framework/testbed_model/topology.py | 165 +++---
dts/framework/utils.py | 8 +-
dts/nodes.example.yaml | 24 +-
dts/test_runs.example.yaml | 3 +
dts/tests/TestSuite_blocklist.py | 6 +-
dts/tests/TestSuite_checksum_offload.py | 14 +-
dts/tests/TestSuite_dual_vlan.py | 6 +-
dts/tests/TestSuite_dynamic_config.py | 8 +-
dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
dts/tests/TestSuite_hello_world.py | 2 +-
dts/tests/TestSuite_l2fwd.py | 9 +-
dts/tests/TestSuite_mac_filter.py | 10 +-
dts/tests/TestSuite_mtu.py | 17 +-
dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
...stSuite_port_restart_config_persistency.py | 8 +-
dts/tests/TestSuite_promisc_support.py | 8 +-
dts/tests/TestSuite_smoke_tests.py | 3 +-
dts/tests/TestSuite_softnic.py | 4 +-
dts/tests/TestSuite_uni_pkt.py | 14 +-
dts/tests/TestSuite_vlan.py | 8 +-
46 files changed, 1287 insertions(+), 1159 deletions(-)
create mode 100644 doc/api/dts/framework.context.rst
create mode 100644 doc/api/dts/framework.status.rst
create mode 100644 doc/api/dts/framework.test_run.rst
create mode 100644 dts/framework/context.py
create mode 100644 dts/framework/status.py
create mode 100644 dts/framework/test_run.py
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC PATCH 1/7] dts: add port topology configuration
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-07 18:25 ` Nicholas Pratte
2025-02-11 18:00 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
` (7 subsequent siblings)
8 siblings, 2 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
The current configuration makes the user re-specify the port links for
each port in an unintuitive and repetitive way. Moreover, this design
does not give the user the opportunity to map the port topology as
desired.
This change adds a port_topology field to the test runs, so that the
user can map topologies for each run as required. Moreover, it
simplifies the process of linking ports by defining a user-friendly
notation.
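To illustrate the notation, here is a small standalone sketch of the
parsing this patch performs (the regex mirrors REGEX_FOR_PORT_LINK from
dts/framework/utils.py in the diff below; the helper function name is
hypothetical — the real conversion happens in
PortLinkConfig.convert_from_string):

```python
import re

REGEX_FOR_IDENTIFIER = r"\w+(?:[\w -]*\w+)?"
REGEX_FOR_PORT_LINK = (
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # left side
    r"\s+<->\s+"
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # right side
)


def parse_port_link(link: str) -> tuple[tuple[str, str], tuple[str, str]]:
    """Parse 'sut.PORT_0 <-> tg.PORT_0' into ((node, port), (node, port)).

    When the node prefix is omitted, the left side defaults to the SUT
    and the right side to the TG, as in the implicit notation.
    """
    m = re.match(REGEX_FOR_PORT_LINK, link, re.I)
    assert m is not None, "malformed link; use: sut.PORT_0 <-> tg.PORT_0"
    left = ((m.group(1) or "sut").lower(), m.group(2))
    right = ((m.group(3) or "tg").lower(), m.group(4))
    return left, right
```

For example, both "sut.Port 0 <-> tg.Port 0" and the implicit
"Port 0 <-> Port 0" resolve to the same SUT/TG pair.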
Bugzilla ID: 1478
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/config/__init__.py | 54 ++++++++----
dts/framework/config/node.py | 25 ++++--
dts/framework/config/test_run.py | 85 +++++++++++++++++-
dts/framework/runner.py | 13 ++-
dts/framework/test_result.py | 9 +-
dts/framework/testbed_model/capability.py | 18 ++--
dts/framework/testbed_model/node.py | 6 ++
dts/framework/testbed_model/port.py | 58 +++----------
dts/framework/testbed_model/sut_node.py | 2 +-
dts/framework/testbed_model/topology.py | 100 ++++++++--------------
dts/framework/utils.py | 8 +-
dts/nodes.example.yaml | 24 ++----
dts/test_runs.example.yaml | 3 +
13 files changed, 235 insertions(+), 170 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index adbd4e952d..f8ac2c0d18 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -120,24 +120,46 @@ def validate_node_names(cls, nodes: list[NodeConfiguration]) -> list[NodeConfigu
return nodes
@model_validator(mode="after")
- def validate_ports(self) -> Self:
- """Validate that the ports are all linked to valid ones."""
- port_links: dict[tuple[str, str], Literal[False] | tuple[int, int]] = {
- (node.name, port.pci): False for node in self.nodes for port in node.ports
+ def validate_port_links(self) -> Self:
+ """Validate that all the test runs' port links are valid."""
+ existing_port_links: dict[tuple[str, str], Literal[False] | tuple[str, str]] = {
+ (node.name, port.name): False for node in self.nodes for port in node.ports
}
- for node_no, node in enumerate(self.nodes):
- for port_no, port in enumerate(node.ports):
- peer_port_identifier = (port.peer_node, port.peer_pci)
- peer_port = port_links.get(peer_port_identifier, None)
- assert peer_port is not None, (
- "invalid peer port specified for " f"nodes.{node_no}.ports.{port_no}"
- )
- assert peer_port is False, (
- f"the peer port specified for nodes.{node_no}.ports.{port_no} "
- f"is already linked to nodes.{peer_port[0]}.ports.{peer_port[1]}"
- )
- port_links[peer_port_identifier] = (node_no, port_no)
+ defined_port_links = [
+ (test_run_idx, test_run, link_idx, link)
+ for test_run_idx, test_run in enumerate(self.test_runs)
+ for link_idx, link in enumerate(test_run.port_topology)
+ ]
+ for test_run_idx, test_run, link_idx, link in defined_port_links:
+ sut_node_port_peer = existing_port_links.get(
+ (test_run.system_under_test_node, link.sut_port), None
+ )
+ assert sut_node_port_peer is not None, (
+ "Invalid SUT node port specified for link "
+ f"test_runs.{test_run_idx}.port_topology.{link_idx}."
+ )
+
+ assert sut_node_port_peer is False or sut_node_port_peer == link.right, (
+ f"The SUT node port for link test_runs.{test_run_idx}.port_topology.{link_idx} is "
+ f"already linked to port {sut_node_port_peer[0]}.{sut_node_port_peer[1]}."
+ )
+
+ tg_node_port_peer = existing_port_links.get(
+ (test_run.traffic_generator_node, link.tg_port), None
+ )
+ assert tg_node_port_peer is not None, (
+ "Invalid TG node port specified for link "
+ f"test_runs.{test_run_idx}.port_topology.{link_idx}."
+ )
+
+ assert tg_node_port_peer is False or tg_node_port_peer == link.left, (
+ f"The TG node port for link test_runs.{test_run_idx}.port_topology.{link_idx} is "
+ f"already linked to port {tg_node_port_peer[0]}.{tg_node_port_peer[1]}."
+ )
+
+ existing_port_links[link.left] = link.right
+ existing_port_links[link.right] = link.left
return self
diff --git a/dts/framework/config/node.py b/dts/framework/config/node.py
index a7ace514d9..97e0285912 100644
--- a/dts/framework/config/node.py
+++ b/dts/framework/config/node.py
@@ -12,9 +12,10 @@
from enum import Enum, auto, unique
from typing import Annotated, Literal
-from pydantic import Field
+from pydantic import Field, model_validator
+from typing_extensions import Self
-from framework.utils import REGEX_FOR_PCI_ADDRESS, StrEnum
+from framework.utils import REGEX_FOR_IDENTIFIER, REGEX_FOR_PCI_ADDRESS, StrEnum
from .common import FrozenModel
@@ -51,16 +52,14 @@ class HugepageConfiguration(FrozenModel):
class PortConfig(FrozenModel):
r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s."""
+ #: An identifier for the port. May contain letters, digits, underscores, hyphens and spaces.
+ name: str = Field(pattern=REGEX_FOR_IDENTIFIER)
#: The PCI address of the port.
pci: str = Field(pattern=REGEX_FOR_PCI_ADDRESS)
#: The driver that the kernel should bind this device to for DPDK to use it.
os_driver_for_dpdk: str = Field(examples=["vfio-pci", "mlx5_core"])
#: The operating system driver name when the operating system controls the port.
os_driver: str = Field(examples=["i40e", "ice", "mlx5_core"])
- #: The name of the peer node this port is connected to.
- peer_node: str
- #: The PCI address of the peer port connected to this port.
- peer_pci: str = Field(pattern=REGEX_FOR_PCI_ADDRESS)
class TrafficGeneratorConfig(FrozenModel):
@@ -94,7 +93,7 @@ class NodeConfiguration(FrozenModel):
r"""The configuration of :class:`~framework.testbed_model.node.Node`\s."""
#: The name of the :class:`~framework.testbed_model.node.Node`.
- name: str
+ name: str = Field(pattern=REGEX_FOR_IDENTIFIER)
#: The hostname of the :class:`~framework.testbed_model.node.Node`. Can also be an IP address.
hostname: str
#: The name of the user used to connect to the :class:`~framework.testbed_model.node.Node`.
@@ -108,6 +107,18 @@ class NodeConfiguration(FrozenModel):
#: The ports that can be used in testing.
ports: list[PortConfig] = Field(min_length=1)
+ @model_validator(mode="after")
+ def verify_unique_port_names(self) -> Self:
+ """Verify that there are no ports with the same name."""
+ used_port_names: dict[str, int] = {}
+ for idx, port in enumerate(self.ports):
+ assert port.name not in used_port_names, (
+ f"Cannot use port name '{port.name}' for ports.{idx}. "
+ f"This was already used in ports.{used_port_names[port.name]}."
+ )
+ used_port_names[port.name] = idx
+ return self
+
class DPDKConfiguration(FrozenModel):
"""Configuration of the DPDK EAL parameters."""
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index 006410b467..2092da725e 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -9,16 +9,18 @@
The root model of a test run configuration is :class:`TestRunConfiguration`.
"""
+import re
import tarfile
from enum import auto, unique
from functools import cached_property
from pathlib import Path, PurePath
-from typing import Any, Literal
+from typing import Any, Literal, NamedTuple
from pydantic import Field, field_validator, model_validator
from typing_extensions import TYPE_CHECKING, Self
-from framework.utils import StrEnum
+from framework.exception import InternalError
+from framework.utils import REGEX_FOR_PORT_LINK, StrEnum
from .common import FrozenModel, load_fields_from_settings
@@ -273,6 +275,83 @@ def fetch_all_test_suites() -> list[TestSuiteConfig]:
]
+class LinkPortIdentifier(NamedTuple):
+ """A tuple linking test run node type to port name."""
+
+ node_type: Literal["sut", "tg"]
+ port_name: str
+
+
+class PortLinkConfig(FrozenModel):
+ """A link between the ports of the nodes.
+
+ Can be represented as a string with the following notation:
+
+ .. code::
+
+ sut.PORT_0 <-> tg.PORT_0 # explicit node nomination
+ PORT_0 <-> PORT_0 # implicit node nomination. Left is SUT, right is TG.
+ """
+
+ #: The port at the left side of the link.
+ left: LinkPortIdentifier
+ #: The port at the right side of the link.
+ right: LinkPortIdentifier
+
+ @cached_property
+ def sut_port(self) -> str:
+ """Port name of the SUT node.
+
+ Raises:
+ InternalError: If a misconfiguration has been allowed to happen.
+ """
+ if self.left.node_type == "sut":
+ return self.left.port_name
+ if self.right.node_type == "sut":
+ return self.right.port_name
+
+ raise InternalError("Unreachable state reached.")
+
+ @cached_property
+ def tg_port(self) -> str:
+ """Port name of the TG node.
+
+ Raises:
+ InternalError: If a misconfiguration has been allowed to happen.
+ """
+ if self.left.node_type == "tg":
+ return self.left.port_name
+ if self.right.node_type == "tg":
+ return self.right.port_name
+
+ raise InternalError("Unreachable state reached.")
+
+ @model_validator(mode="before")
+ @classmethod
+ def convert_from_string(cls, data: Any) -> Any:
+ """Convert the string representation of the model into a valid mapping."""
+ if isinstance(data, str):
+ m = re.match(REGEX_FOR_PORT_LINK, data, re.I)
+ assert m is not None, (
+ "The provided link is malformed. Please use the following "
+ "notation: sut.PORT_0 <-> tg.PORT_0"
+ )
+
+ left = (m.group(1) or "sut").lower(), m.group(2)
+ right = (m.group(3) or "tg").lower(), m.group(4)
+
+ return {"left": left, "right": right}
+ return data
+
+ @model_validator(mode="after")
+ def verify_distinct_nodes(self) -> Self:
+ """Verify that each side of the link has distinct nodes."""
+ assert (
+ self.left.node_type != self.right.node_type
+ ), "Linking ports of the same node is unsupported."
+ return self
+
+
class TestRunConfiguration(FrozenModel):
"""The configuration of a test run.
@@ -298,6 +377,8 @@ class TestRunConfiguration(FrozenModel):
vdevs: list[str] = Field(default_factory=list)
#: The seed to use for pseudo-random generation.
random_seed: int | None = None
+ #: The port links between the specified nodes to use.
+ port_topology: list[PortLinkConfig] = Field(max_length=2)
fields_from_settings = model_validator(mode="before")(
load_fields_from_settings("test_suites", "random_seed")
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 9f9789cf49..60a885d8e6 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -54,7 +54,7 @@
TestSuiteWithCases,
)
from .test_suite import TestCase, TestSuite
-from .testbed_model.topology import Topology
+from .testbed_model.topology import PortLink, Topology
class DTSRunner:
@@ -331,7 +331,13 @@ def _run_test_run(
test_run_result.update_setup(Result.FAIL, e)
else:
- self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
+ topology = Topology(
+ PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
+ for link in test_run_config.port_topology
+ )
+ self._run_test_suites(
+ sut_node, tg_node, topology, test_run_result, test_suites_with_cases
+ )
finally:
try:
@@ -361,6 +367,7 @@ def _run_test_suites(
self,
sut_node: SutNode,
tg_node: TGNode,
+ topology: Topology,
test_run_result: TestRunResult,
test_suites_with_cases: Iterable[TestSuiteWithCases],
) -> None:
@@ -380,11 +387,11 @@ def _run_test_suites(
Args:
sut_node: The test run's SUT node.
tg_node: The test run's TG node.
+ topology: The test run's port topology.
test_run_result: The test run's result.
test_suites_with_cases: The test suites with test cases to run.
"""
end_dpdk_build = False
- topology = Topology(sut_node.ports, tg_node.ports)
supported_capabilities = self._get_supported_capabilities(
sut_node, topology, test_suites_with_cases
)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index bffbc52505..1acb526b64 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -28,8 +28,9 @@
from dataclasses import asdict, dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Any, Callable, TypedDict
+from typing import Any, Callable, TypedDict, cast
+from framework.config.node import PortConfig
from framework.testbed_model.capability import Capability
from .config.test_run import TestRunConfiguration, TestSuiteConfig
@@ -601,10 +602,14 @@ def to_dict(self) -> TestRunResultDict:
compiler_version = self.dpdk_build_info.compiler_version
dpdk_version = self.dpdk_build_info.dpdk_version
+ ports = [asdict(port) for port in self.ports]
+ for port in ports:
+ port["config"] = cast(PortConfig, port["config"]).model_dump()
+
return {
"compiler_version": compiler_version,
"dpdk_version": dpdk_version,
- "ports": [asdict(port) for port in self.ports],
+ "ports": ports,
"test_suites": [child.to_dict() for child in self.child_results],
"summary": results | self.generate_pass_rate_dict(results),
}
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 6a7a1f5b6c..7b06ecd715 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -362,10 +362,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
the test suite's.
"""
if inspect.isclass(test_case_or_suite):
- if self.topology_type is not TopologyType.default:
+ if self.topology_type is not TopologyType.default():
self.add_to_required(test_case_or_suite)
for test_case in test_case_or_suite.get_test_cases():
- if test_case.topology_type.topology_type is TopologyType.default:
+ if test_case.topology_type.topology_type is TopologyType.default():
# test case topology has not been set, use the one set by the test suite
self.add_to_required(test_case)
elif test_case.topology_type > test_case_or_suite.topology_type:
@@ -428,14 +428,8 @@ def __hash__(self):
return self.topology_type.__hash__()
def __str__(self):
- """Easy to read string of class and name of :attr:`topology_type`.
-
- Converts :attr:`TopologyType.default` to the actual value.
- """
- name = self.topology_type.name
- if self.topology_type is TopologyType.default:
- name = TopologyType.get_from_value(self.topology_type.value).name
- return f"{type(self.topology_type).__name__}.{name}"
+ """Easy to read string of class and name of :attr:`topology_type`."""
+ return f"{type(self.topology_type).__name__}.{self.topology_type.name}"
def __repr__(self):
"""Easy to read string of class and name of :attr:`topology_type`."""
@@ -450,7 +444,7 @@ class TestProtocol(Protocol):
#: The reason for skipping the test case or suite.
skip_reason: ClassVar[str] = ""
#: The topology type of the test case or suite.
- topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
+ topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default())
#: The capabilities the test case or suite requires in order to be executed.
required_capabilities: ClassVar[set[Capability]] = set()
@@ -466,7 +460,7 @@ def get_test_cases(cls) -> list[type["TestCase"]]:
def requires(
*nic_capabilities: NicCapability,
- topology_type: TopologyType = TopologyType.default,
+ topology_type: TopologyType = TopologyType.default(),
) -> Callable[[type[TestProtocol]], type[TestProtocol]]:
"""A decorator that adds the required capabilities to a test case or test suite.
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e53a321499..0acd746073 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -14,6 +14,7 @@
"""
from abc import ABC
+from functools import cached_property
from framework.config.node import (
OS,
@@ -86,6 +87,11 @@ def _init_ports(self) -> None:
self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
self.main_session.update_ports(self.ports)
+ @cached_property
+ def ports_by_name(self) -> dict[str, Port]:
+ """Ports mapped by the name assigned at configuration."""
+ return {port.name: port for port in self.ports}
+
def set_up_test_run(
self,
test_run_config: TestRunConfiguration,
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 7177da3371..8014d4a100 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2022 University of New Hampshire
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""NIC port model.
@@ -13,19 +14,6 @@
from framework.config.node import PortConfig
-@dataclass(slots=True, frozen=True)
-class PortIdentifier:
- """The port identifier.
-
- Attributes:
- node: The node where the port resides.
- pci: The PCI address of the port on `node`.
- """
-
- node: str
- pci: str
-
-
@dataclass(slots=True)
class Port:
"""Physical port on a node.
@@ -36,20 +24,13 @@ class Port:
and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
Attributes:
- identifier: The PCI address of the port on a node.
- os_driver: The operating system driver name when the operating system controls the port,
- e.g.: ``i40e``.
- os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
- peer: The identifier of a port this port is connected with.
- The `peer` is on a different node.
+ config: The port's configuration.
mac_address: The MAC address of the port.
logical_name: The logical name of the port. Must be discovered.
"""
- identifier: PortIdentifier
- os_driver: str
- os_driver_for_dpdk: str
- peer: PortIdentifier
+ _node: str
+ config: PortConfig
mac_address: str = ""
logical_name: str = ""
@@ -60,33 +41,20 @@ def __init__(self, node_name: str, config: PortConfig):
node_name: The name of the port's node.
config: The test run configuration of the port.
"""
- self.identifier = PortIdentifier(
- node=node_name,
- pci=config.pci,
- )
- self.os_driver = config.os_driver
- self.os_driver_for_dpdk = config.os_driver_for_dpdk
- self.peer = PortIdentifier(node=config.peer_node, pci=config.peer_pci)
+ self._node = node_name
+ self.config = config
@property
def node(self) -> str:
"""The node where the port resides."""
- return self.identifier.node
+ return self._node
+
+ @property
+ def name(self) -> str:
+ """The name of the port."""
+ return self.config.name
@property
def pci(self) -> str:
"""The PCI address of the port."""
- return self.identifier.pci
-
-
-@dataclass(slots=True, frozen=True)
-class PortLink:
- """The physical, cabled connection between the ports.
-
- Attributes:
- sut_port: The port on the SUT node connected to `tg_port`.
- tg_port: The port on the TG node connected to `sut_port`.
- """
-
- sut_port: Port
- tg_port: Port
+ return self.config.pci
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 483733cede..440b5a059b 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -515,7 +515,7 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
return
for port in self.ports:
- driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
+ driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
self.main_session.send_command(
f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
privileged=True,
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index caee9b22ea..814c3f3fe4 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2024 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""Testbed topology representation.
@@ -7,14 +8,9 @@
The link information then implies what type of topology is available.
"""
-from dataclasses import dataclass
-from os import environ
-from typing import TYPE_CHECKING, Iterable
-
-if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
- from enum import Enum as NoAliasEnum
-else:
- from aenum import NoAliasEnum
+from collections.abc import Iterator
+from enum import Enum
+from typing import NamedTuple
from framework.config.node import PortConfig
from framework.exception import ConfigurationError
@@ -22,7 +18,7 @@
from .port import Port
-class TopologyType(int, NoAliasEnum):
+class TopologyType(int, Enum):
"""Supported topology types."""
#: A topology with no Traffic Generator.
@@ -31,34 +27,20 @@ class TopologyType(int, NoAliasEnum):
one_link = 1
#: A topology with two physical links between the Sut node and the TG node.
two_links = 2
- #: The default topology required by test cases if not specified otherwise.
- default = 2
@classmethod
- def get_from_value(cls, value: int) -> "TopologyType":
- r"""Get the corresponding instance from value.
+ def default(cls) -> "TopologyType":
+ """The default topology required by test cases if not specified otherwise."""
+ return cls.two_links
- :class:`~enum.Enum`\s that don't allow aliases don't know which instance should be returned
- as there could be multiple valid instances. Except for the :attr:`default` value,
- :class:`TopologyType` is a regular :class:`~enum.Enum`.
- When getting an instance from value, we're not interested in the default,
- since we already know the value, allowing us to remove the ambiguity.
- Args:
- value: The value of the requested enum.
+class PortLink(NamedTuple):
+ """The physical, cabled connection between the ports."""
- Raises:
- ConfigurationError: If an unsupported link topology is supplied.
- """
- match value:
- case 0:
- return TopologyType.no_link
- case 1:
- return TopologyType.one_link
- case 2:
- return TopologyType.two_links
- case _:
- raise ConfigurationError("More than two links in a topology are not supported.")
+ #: The port on the SUT node connected to `tg_port`.
+ sut_port: Port
+ #: The port on the TG node connected to `sut_port`.
+ tg_port: Port
class Topology:
@@ -89,55 +71,43 @@ class Topology:
sut_port_egress: Port
tg_port_ingress: Port
- def __init__(self, sut_ports: Iterable[Port], tg_ports: Iterable[Port]):
- """Create the topology from `sut_ports` and `tg_ports`.
+ def __init__(self, port_links: Iterator[PortLink]):
+ """Create the topology from `port_links`.
Args:
- sut_ports: The SUT node's ports.
- tg_ports: The TG node's ports.
+ port_links: The test run's required port links.
+
+ Raises:
+ ConfigurationError: If an unsupported link topology is supplied.
"""
- port_links = []
- for sut_port in sut_ports:
- for tg_port in tg_ports:
- if (sut_port.identifier, sut_port.peer) == (
- tg_port.peer,
- tg_port.identifier,
- ):
- port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
-
- self.type = TopologyType.get_from_value(len(port_links))
dummy_port = Port(
"",
PortConfig(
+ name="dummy",
pci="0000:00:00.0",
os_driver_for_dpdk="",
os_driver="",
- peer_node="",
- peer_pci="0000:00:00.0",
),
)
+
+ self.type = TopologyType.no_link
self.tg_port_egress = dummy_port
self.sut_port_ingress = dummy_port
self.sut_port_egress = dummy_port
self.tg_port_ingress = dummy_port
- if self.type > TopologyType.no_link:
- self.tg_port_egress = port_links[0].tg_port
- self.sut_port_ingress = port_links[0].sut_port
+
+ if port_link := next(port_links, None):
+ self.type = TopologyType.one_link
+ self.tg_port_egress = port_link.tg_port
+ self.sut_port_ingress = port_link.sut_port
self.sut_port_egress = self.sut_port_ingress
self.tg_port_ingress = self.tg_port_egress
- if self.type > TopologyType.one_link:
- self.sut_port_egress = port_links[1].sut_port
- self.tg_port_ingress = port_links[1].tg_port
+ if port_link := next(port_links, None):
+ self.type = TopologyType.two_links
+ self.sut_port_egress = port_link.sut_port
+ self.tg_port_ingress = port_link.tg_port
-@dataclass(slots=True, frozen=True)
-class PortLink:
- """The physical, cabled connection between the ports.
-
- Attributes:
- sut_port: The port on the SUT node connected to `tg_port`.
- tg_port: The port on the TG node connected to `sut_port`.
- """
-
- sut_port: Port
- tg_port: Port
+ if next(port_links, None) is not None:
+ msg = "More than two links in a topology are not supported."
+ raise ConfigurationError(msg)
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 66f37a8813..d6f4c11d58 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,7 +32,13 @@
_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
_REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
-REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
+REGEX_FOR_BASE64_ENCODING: str = r"[-a-zA-Z0-9+\\/]*={0,3}"
+REGEX_FOR_IDENTIFIER: str = r"\w+(?:[\w -]*\w+)?"
+REGEX_FOR_PORT_LINK: str = (
+ rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # left side
+ r"\s+<->\s+"
+ rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # right side
+)
def expand_range(range_str: str) -> list[int]:
diff --git a/dts/nodes.example.yaml b/dts/nodes.example.yaml
index 454d97ab5d..6140dd9b7e 100644
--- a/dts/nodes.example.yaml
+++ b/dts/nodes.example.yaml
@@ -9,18 +9,14 @@
user: dtsuser
os: linux
ports:
- # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
- - pci: "0000:00:08.0"
+ - name: Port 0
+ pci: "0000:00:08.0"
os_driver_for_dpdk: vfio-pci # OS driver that DPDK will use
os_driver: i40e # OS driver to bind when the tests are not running
- peer_node: "TG 1"
- peer_pci: "0000:00:08.0"
- # sets up the physical link between "SUT 1"@0000:00:08.1 and "TG 1"@0000:00:08.1
- - pci: "0000:00:08.1"
+ - name: Port 1
+ pci: "0000:00:08.1"
os_driver_for_dpdk: vfio-pci
os_driver: i40e
- peer_node: "TG 1"
- peer_pci: "0000:00:08.1"
hugepages_2mb: # optional; if removed, will use system hugepage configuration
number_of: 256
force_first_numa: false
@@ -34,18 +30,14 @@
user: dtsuser
os: linux
ports:
- # sets up the physical link between "TG 1"@0000:00:08.0 and "SUT 1"@0000:00:08.0
- - pci: "0000:00:08.0"
+ - name: Port 0
+ pci: "0000:00:08.0"
os_driver_for_dpdk: rdma
os_driver: rdma
- peer_node: "SUT 1"
- peer_pci: "0000:00:08.0"
- # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
- - pci: "0000:00:08.1"
+ - name: Port 1
+ pci: "0000:00:08.1"
os_driver_for_dpdk: rdma
os_driver: rdma
- peer_node: "SUT 1"
- peer_pci: "0000:00:08.1"
hugepages_2mb: # optional; if removed, will use system hugepage configuration
number_of: 256
force_first_numa: false
diff --git a/dts/test_runs.example.yaml b/dts/test_runs.example.yaml
index 5cc167ebe1..821d6918d0 100644
--- a/dts/test_runs.example.yaml
+++ b/dts/test_runs.example.yaml
@@ -32,3 +32,6 @@
system_under_test_node: "SUT 1"
# Traffic generator node to use for this execution environment
traffic_generator_node: "TG 1"
+ port_topology:
+ - sut.Port 0 <-> tg.Port 0 # explicit link
+ - Port 1 <-> Port 1 # implicit link, left side is always SUT, right side is always TG.
--
2.43.0
* [RFC PATCH 2/7] dts: isolate test specification to config
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-10 19:09 ` Nicholas Pratte
2025-02-11 18:11 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
` (6 subsequent siblings)
8 siblings, 2 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
In an effort to improve separation of concerns, make the TestRunConfig
class responsible for processing the configured test suites. Moreover,
give TestSuiteConfig a facility to yield references to the selected test
cases.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/config/__init__.py | 84 +++++++++++---------------------
dts/framework/config/test_run.py | 59 +++++++++++++++++-----
2 files changed, 76 insertions(+), 67 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index f8ac2c0d18..7761a8b56f 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -27,9 +27,8 @@
and makes it thread safe should we ever want to move in that direction.
"""
-from functools import cached_property
from pathlib import Path
-from typing import Annotated, Any, Literal, NamedTuple, TypeVar, cast
+from typing import Annotated, Any, Literal, TypeVar, cast
import yaml
from pydantic import Field, TypeAdapter, ValidationError, field_validator, model_validator
@@ -46,18 +45,6 @@
)
from .test_run import TestRunConfiguration
-
-class TestRunWithNodesConfiguration(NamedTuple):
- """Tuple containing the configuration of the test run and its associated nodes."""
-
- #:
- test_run_config: TestRunConfiguration
- #:
- sut_node_config: SutNodeConfiguration
- #:
- tg_node_config: TGNodeConfiguration
-
-
TestRunsConfig = Annotated[list[TestRunConfiguration], Field(min_length=1)]
NodesConfig = Annotated[list[NodeConfigurationTypes], Field(min_length=1)]
@@ -71,40 +58,6 @@ class Configuration(FrozenModel):
#: Node configurations.
nodes: NodesConfig
- @cached_property
- def test_runs_with_nodes(self) -> list[TestRunWithNodesConfiguration]:
- """List of test runs with the associated nodes."""
- test_runs_with_nodes = []
-
- for test_run_no, test_run in enumerate(self.test_runs):
- sut_node_name = test_run.system_under_test_node
- sut_node = next(filter(lambda n: n.name == sut_node_name, self.nodes), None)
-
- assert sut_node is not None, (
- f"test_runs.{test_run_no}.sut_node_config.node_name "
- f"({test_run.system_under_test_node}) is not a valid node name"
- )
- assert isinstance(sut_node, SutNodeConfiguration), (
- f"test_runs.{test_run_no}.sut_node_config.node_name is a valid node name, "
- "but it is not a valid SUT node"
- )
-
- tg_node_name = test_run.traffic_generator_node
- tg_node = next(filter(lambda n: n.name == tg_node_name, self.nodes), None)
-
- assert tg_node is not None, (
- f"test_runs.{test_run_no}.tg_node_name "
- f"({test_run.traffic_generator_node}) is not a valid node name"
- )
- assert isinstance(tg_node, TGNodeConfiguration), (
- f"test_runs.{test_run_no}.tg_node_name is a valid node name, "
- "but it is not a valid TG node"
- )
-
- test_runs_with_nodes.append(TestRunWithNodesConfiguration(test_run, sut_node, tg_node))
-
- return test_runs_with_nodes
-
@field_validator("nodes")
@classmethod
def validate_node_names(cls, nodes: list[NodeConfiguration]) -> list[NodeConfiguration]:
@@ -164,14 +117,33 @@ def validate_port_links(self) -> Self:
return self
@model_validator(mode="after")
- def validate_test_runs_with_nodes(self) -> Self:
- """Validate the test runs to nodes associations.
-
- This validator relies on the cached property `test_runs_with_nodes` to run for the first
- time in this call, therefore triggering the assertions if needed.
- """
- if self.test_runs_with_nodes:
- pass
+ def validate_test_runs_against_nodes(self) -> Self:
+ """Validate the test runs to nodes associations."""
+ for test_run_no, test_run in enumerate(self.test_runs):
+ sut_node_name = test_run.system_under_test_node
+ sut_node = next((n for n in self.nodes if n.name == sut_node_name), None)
+
+ assert sut_node is not None, (
+ f"Test run {test_run_no}.system_under_test_node "
+ f"({sut_node_name}) is not a valid node name."
+ )
+ assert isinstance(sut_node, SutNodeConfiguration), (
+ f"Test run {test_run_no}.system_under_test_node is a valid node name, "
+ "but it is not a valid SUT node."
+ )
+
+ tg_node_name = test_run.traffic_generator_node
+ tg_node = next((n for n in self.nodes if n.name == tg_node_name), None)
+
+ assert tg_node is not None, (
+ f"Test run {test_run_no}.traffic_generator_name "
+ f"({tg_node_name}) is not a valid node name."
+ )
+ assert isinstance(tg_node, TGNodeConfiguration), (
+ f"Test run {test_run_no}.traffic_generator_name is a valid node name, "
+ "but it is not a valid TG node."
+ )
+
return self
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index 2092da725e..9ea898b15c 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -11,6 +11,8 @@
import re
import tarfile
+from collections import deque
+from collections.abc import Iterable
from enum import auto, unique
from functools import cached_property
from pathlib import Path, PurePath
@@ -25,7 +27,7 @@
from .common import FrozenModel, load_fields_from_settings
if TYPE_CHECKING:
- from framework.test_suite import TestSuiteSpec
+ from framework.test_suite import TestCase, TestSuite, TestSuiteSpec
@unique
@@ -233,6 +235,21 @@ def test_suite_spec(self) -> "TestSuiteSpec":
), f"{self.test_suite_name} is not a valid test suite module name."
return test_suite_spec
+ @cached_property
+ def test_cases(self) -> list[type["TestCase"]]:
+ """The objects of the selected test cases."""
+ available_test_cases = {t.name: t for t in self.test_suite_spec.class_obj.get_test_cases()}
+ selected_test_cases = []
+
+ for requested_test_case in self.test_cases_names:
+ assert requested_test_case in available_test_cases, (
+ f"{requested_test_case} is not a valid test case "
+ f"of test suite {self.test_suite_name}."
+ )
+ selected_test_cases.append(available_test_cases[requested_test_case])
+
+ return selected_test_cases or list(available_test_cases.values())
+
@model_validator(mode="before")
@classmethod
def convert_from_string(cls, data: Any) -> Any:
@@ -246,17 +263,11 @@ def convert_from_string(cls, data: Any) -> Any:
def validate_names(self) -> Self:
"""Validate the supplied test suite and test cases names.
- This validator relies on the cached property `test_suite_spec` to run for the first
- time in this call, therefore triggering the assertions if needed.
+ This validator relies on the cached properties `test_suite_spec` and `test_cases` to run for
+ the first time in this call, therefore triggering the assertions if needed.
"""
- available_test_cases = map(
- lambda t: t.name, self.test_suite_spec.class_obj.get_test_cases()
- )
- for requested_test_case in self.test_cases_names:
- assert requested_test_case in available_test_cases, (
- f"{requested_test_case} is not a valid test case "
- f"of test suite {self.test_suite_name}."
- )
+ if self.test_cases:
+ pass
return self
@@ -383,3 +394,29 @@ class TestRunConfiguration(FrozenModel):
fields_from_settings = model_validator(mode="before")(
load_fields_from_settings("test_suites", "random_seed")
)
+
+ def filter_tests(
+ self,
+ ) -> Iterable[tuple[type["TestSuite"], deque[type["TestCase"]]]]:
+ """Filter test suites and cases selected for execution."""
+ from framework.test_suite import TestCaseType
+
+ test_suites = [TestSuiteConfig(test_suite="smoke_tests")]
+
+ if self.skip_smoke_tests:
+ test_suites = self.test_suites
+ else:
+ test_suites += self.test_suites
+
+ return (
+ (
+ t.test_suite_spec.class_obj,
+ deque(
+ tt
+ for tt in t.test_cases
+ if (tt.test_type is TestCaseType.FUNCTIONAL and self.func)
+ or (tt.test_type is TestCaseType.PERFORMANCE and self.perf)
+ ),
+ )
+ for t in test_suites
+ )
--
2.43.0

* [RFC PATCH 3/7] dts: revamp Topology model
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-10 19:42 ` Nicholas Pratte
2025-02-11 18:18 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 4/7] dts: improve Port model Luca Vizzarro
` (5 subsequent siblings)
8 siblings, 2 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Change the Topology model to add further flexibility in its usage as a
standalone entry point for test suites.
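A condensed sketch of the reworked shape: ports are stored as lists and the ingress/egress views become derived properties. Field and class names follow the patch, but this standalone version (with strings standing in for `Port` objects) is illustrative only:

```python
from dataclasses import dataclass
from enum import Enum, auto


class TopologyType(Enum):
    no_link = auto()
    one_link = auto()
    two_links = auto()


@dataclass(frozen=True)
class Topology:
    type: TopologyType
    sut_ports: list[str]  # port identifiers stand in for Port objects
    tg_ports: list[str]

    @property
    def sut_port_ingress(self) -> str:
        return self.sut_ports[0]

    @property
    def sut_port_egress(self) -> str:
        # With two links the egress uses the second port;
        # with one link the same port serves both directions.
        return self.sut_ports[1 if self.type is TopologyType.two_links else 0]
```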
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/testbed_model/topology.py | 85 +++++++++++++------------
1 file changed, 45 insertions(+), 40 deletions(-)
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 814c3f3fe4..cf5c2c28ba 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -9,10 +9,12 @@
"""
from collections.abc import Iterator
+from dataclasses import dataclass
from enum import Enum
from typing import NamedTuple
-from framework.config.node import PortConfig
+from typing_extensions import Self
+
from framework.exception import ConfigurationError
from .port import Port
@@ -43,35 +45,32 @@ class PortLink(NamedTuple):
tg_port: Port
+@dataclass(frozen=True)
class Topology:
"""Testbed topology.
The topology contains ports processed into ingress and egress ports.
- If there are no ports on a node, dummy ports (ports with no actual values) are stored.
- If there is only one link available, the ports of this link are stored
+ If there are no ports on a node, accesses to :attr:`~Topology.tg_port_egress` and alike will
+ raise an exception. If there is only one link available, the ports of this link are stored
as both ingress and egress ports.
- The dummy ports shouldn't be used. It's up to :class:`~framework.runner.DTSRunner`
- to ensure no test case or suite requiring actual links is executed
- when the topology prohibits it and up to the developers to make sure that test cases
- not requiring any links don't use any ports. Otherwise, the underlying methods
- using the ports will fail.
+ It's up to :class:`~framework.test_run.TestRun` to ensure no test case or suite requiring actual
+ links is executed when the topology prohibits it and up to the developers to make sure that test
+ cases not requiring any links don't use any ports. Otherwise, the underlying methods using the
+ ports will fail.
Attributes:
type: The type of the topology.
- tg_port_egress: The egress port of the TG node.
- sut_port_ingress: The ingress port of the SUT node.
- sut_port_egress: The egress port of the SUT node.
- tg_port_ingress: The ingress port of the TG node.
+ sut_ports: The SUT ports.
+ tg_ports: The TG ports.
"""
type: TopologyType
- tg_port_egress: Port
- sut_port_ingress: Port
- sut_port_egress: Port
- tg_port_ingress: Port
+ sut_ports: list[Port]
+ tg_ports: list[Port]
- def __init__(self, port_links: Iterator[PortLink]):
+ @classmethod
+ def from_port_links(cls, port_links: Iterator[PortLink]) -> Self:
"""Create the topology from `port_links`.
Args:
@@ -80,34 +79,40 @@ def __init__(self, port_links: Iterator[PortLink]):
Raises:
ConfigurationError: If an unsupported link topology is supplied.
"""
- dummy_port = Port(
- "",
- PortConfig(
- name="dummy",
- pci="0000:00:00.0",
- os_driver_for_dpdk="",
- os_driver="",
- ),
- )
-
- self.type = TopologyType.no_link
- self.tg_port_egress = dummy_port
- self.sut_port_ingress = dummy_port
- self.sut_port_egress = dummy_port
- self.tg_port_ingress = dummy_port
+ type = TopologyType.no_link
if port_link := next(port_links, None):
- self.type = TopologyType.one_link
- self.tg_port_egress = port_link.tg_port
- self.sut_port_ingress = port_link.sut_port
- self.sut_port_egress = self.sut_port_ingress
- self.tg_port_ingress = self.tg_port_egress
+ type = TopologyType.one_link
+ sut_ports = [port_link.sut_port]
+ tg_ports = [port_link.tg_port]
if port_link := next(port_links, None):
- self.type = TopologyType.two_links
- self.sut_port_egress = port_link.sut_port
- self.tg_port_ingress = port_link.tg_port
+ type = TopologyType.two_links
+ sut_ports.append(port_link.sut_port)
+ tg_ports.append(port_link.tg_port)
if next(port_links, None) is not None:
msg = "More than two links in a topology are not supported."
raise ConfigurationError(msg)
+
+ return cls(type, sut_ports, tg_ports)
+
+ @property
+ def tg_port_egress(self) -> Port:
+ """The egress port of the TG node."""
+ return self.tg_ports[0]
+
+ @property
+ def sut_port_ingress(self) -> Port:
+ """The ingress port of the SUT node."""
+ return self.sut_ports[0]
+
+ @property
+ def sut_port_egress(self) -> Port:
+ """The egress port of the SUT node."""
+ return self.sut_ports[1 if self.type is TopologyType.two_links else 0]
+
+ @property
+ def tg_port_ingress(self) -> Port:
+ """The ingress port of the TG node."""
+ return self.tg_ports[1 if self.type is TopologyType.two_links else 0]
--
2.43.0
* [RFC PATCH 4/7] dts: improve Port model
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (2 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-11 18:56 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
` (4 subsequent siblings)
8 siblings, 1 reply; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Make the Port model standalone, so that ports can be directly manipulated
by the test suites as needed. Moreover, let each port manage itself and
retrieve its own logical name and MAC address autonomously.
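A standalone sketch of a port that queries its node's session for discovery at construction time. `FakeSession` is a stand-in for the framework's `OSSession`; the returned values are placeholders:

```python
class FakeSession:
    """Stand-in for an OS session that can report port details."""

    def get_port_info(self, pci_address: str) -> tuple[str, str]:
        # A real session would parse `lshw` output on the node.
        return "enp7s0", "52:54:00:12:34:56"


class Port:
    """A port that discovers its own attributes at creation."""

    def __init__(self, session: FakeSession, pci: str):
        self.pci = pci
        # The port retrieves its logical name and MAC address itself,
        # rather than being updated in bulk by the node.
        self.logical_name, self.mac_address = session.get_port_info(pci)
        self.bound_for_dpdk = False
```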
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
dts/framework/testbed_model/linux_session.py | 47 +++++++++-------
dts/framework/testbed_model/node.py | 25 ++++-----
dts/framework/testbed_model/os_session.py | 16 +++---
dts/framework/testbed_model/port.py | 57 ++++++++++++--------
dts/framework/testbed_model/sut_node.py | 48 +++++++++--------
dts/framework/testbed_model/tg_node.py | 24 +++++++--
6 files changed, 127 insertions(+), 90 deletions(-)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 99c29b9b1e..7c2b110c99 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -10,6 +10,8 @@
"""
import json
+from collections.abc import Iterable
+from functools import cached_property
from typing import TypedDict
from typing_extensions import NotRequired
@@ -149,31 +151,40 @@ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: boo
self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
- def update_ports(self, ports: list[Port]) -> None:
- """Overrides :meth:`~.os_session.OSSession.update_ports`."""
- self._logger.debug("Gathering port info.")
- for port in ports:
- assert port.node == self.name, "Attempted to gather port info on the wrong node"
+ def get_port_info(self, pci_address: str) -> tuple[str, str]:
+ """Overrides :meth:`~.os_session.OSSession.get_port_info`.
- port_info_list = self._get_lshw_info()
- for port in ports:
- for port_info in port_info_list:
- if f"pci@{port.pci}" == port_info.get("businfo"):
- self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
- self._update_port_attr(port, port_info.get("serial"), "mac_address")
- port_info_list.remove(port_info)
- break
- else:
- self._logger.warning(f"No port at pci address {port.pci} found.")
-
- def bring_up_link(self, ports: list[Port]) -> None:
+ Raises:
+ ConfigurationError: If the port could not be found.
+ """
+ self._logger.debug(f"Gathering info for port {pci_address}.")
+
+ bus_info = f"pci@{pci_address}"
+ port = next((port for port in self._lshw_net_info if port.get("businfo") == bus_info), None)
+ if port is None:
+ raise ConfigurationError(f"Port {pci_address} could not be found on the node.")
+
+ logical_name = port.get("logicalname") or ""
+ if not logical_name:
+ self._logger.warning(f"Port {pci_address} does not have a valid logical name.")
+ # raise ConfigurationError(f"Port {pci_address} does not have a valid logical name.")
+
+ mac_address = port.get("serial") or ""
+ if not mac_address:
+ self._logger.warning(f"Port {pci_address} does not have a valid mac address.")
+ # raise ConfigurationError(f"Port {pci_address} does not have a valid mac address.")
+
+ return logical_name, mac_address
+
+ def bring_up_link(self, ports: Iterable[Port]) -> None:
"""Overrides :meth:`~.os_session.OSSession.bring_up_link`."""
for port in ports:
self.send_command(
f"ip link set dev {port.logical_name} up", privileged=True, verify=True
)
- def _get_lshw_info(self) -> list[LshwOutput]:
+ @cached_property
+ def _lshw_net_info(self) -> list[LshwOutput]:
output = self.send_command("lshw -quiet -json -C network", verify=True)
return json.loads(output.stdout)
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 0acd746073..1a4c825ed2 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -14,16 +14,14 @@
"""
from abc import ABC
+from collections.abc import Iterable
from functools import cached_property
from framework.config.node import (
OS,
NodeConfiguration,
)
-from framework.config.test_run import (
- DPDKBuildConfiguration,
- TestRunConfiguration,
-)
+from framework.config.test_run import TestRunConfiguration
from framework.exception import ConfigurationError
from framework.logger import DTSLogger, get_dts_logger
@@ -81,22 +79,14 @@ def __init__(self, node_config: NodeConfiguration):
self._logger.info(f"Connected to node: {self.name}")
self._get_remote_cpus()
self._other_sessions = []
- self._init_ports()
-
- def _init_ports(self) -> None:
- self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
- self.main_session.update_ports(self.ports)
+ self.ports = [Port(self, port_config) for port_config in self.config.ports]
@cached_property
def ports_by_name(self) -> dict[str, Port]:
"""Ports mapped by the name assigned at configuration."""
return {port.name: port for port in self.ports}
- def set_up_test_run(
- self,
- test_run_config: TestRunConfiguration,
- dpdk_build_config: DPDKBuildConfiguration,
- ) -> None:
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
"""Test run setup steps.
Configure hugepages on all DTS node types. Additional steps can be added by
@@ -105,15 +95,18 @@ def set_up_test_run(
Args:
test_run_config: A test run configuration according to which
the setup steps will be taken.
- dpdk_build_config: The build configuration of DPDK.
+ ports: The ports to set up for the test run.
"""
self._setup_hugepages()
- def tear_down_test_run(self) -> None:
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
"""Test run teardown steps.
There are currently no common execution teardown steps common to all DTS node types.
Additional steps can be added by extending the method in subclasses with the use of super().
+
+ Args:
+ ports: The ports to tear down for the test run.
"""
def create_session(self, name: str) -> OSSession:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index f3789fcf75..3c7b2a4f47 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -516,20 +516,18 @@ def get_arch_info(self) -> str:
"""
@abstractmethod
- def update_ports(self, ports: list[Port]) -> None:
- """Get additional information about ports from the operating system and update them.
+ def get_port_info(self, pci_address: str) -> tuple[str, str]:
+ """Get port information.
- The additional information is:
-
- * Logical name (e.g. ``enp7s0``) if applicable,
- * Mac address.
+ Returns:
+ A tuple containing the logical name and MAC address respectively.
- Args:
- ports: The ports to update.
+ Raises:
+ ConfigurationError: If the port could not be found.
"""
@abstractmethod
- def bring_up_link(self, ports: list[Port]) -> None:
+ def bring_up_link(self, ports: Iterable[Port]) -> None:
"""Send operating system specific command for bringing up link on node interfaces.
Args:
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 8014d4a100..f638120eeb 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -9,45 +9,42 @@
drivers and address.
"""
-from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Final
from framework.config.node import PortConfig
+if TYPE_CHECKING:
+ from .node import Node
+
-@dataclass(slots=True)
class Port:
"""Physical port on a node.
- The ports are identified by the node they're on and their PCI addresses. The port on the other
- side of the connection is also captured here.
- Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
- and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
-
Attributes:
+ node: The port's node.
config: The port's configuration.
mac_address: The MAC address of the port.
- logical_name: The logical name of the port. Must be discovered.
+ logical_name: The logical name of the port.
+ bound_for_dpdk: :data:`True` if the port is bound to the driver for DPDK.
"""
- _node: str
- config: PortConfig
- mac_address: str = ""
- logical_name: str = ""
+ node: Final["Node"]
+ config: Final[PortConfig]
+ mac_address: Final[str]
+ logical_name: Final[str]
+ bound_for_dpdk: bool
- def __init__(self, node_name: str, config: PortConfig):
- """Initialize the port from `node_name` and `config`.
+ def __init__(self, node: "Node", config: PortConfig):
+ """Initialize the port from `node` and `config`.
Args:
- node_name: The name of the port's node.
+ node: The port's node.
config: The test run configuration of the port.
"""
- self._node = node_name
+ self.node = node
self.config = config
-
- @property
- def node(self) -> str:
- """The node where the port resides."""
- return self._node
+ self.logical_name, self.mac_address = node.main_session.get_port_info(config.pci)
+ self.bound_for_dpdk = False
@property
def name(self) -> str:
@@ -58,3 +55,21 @@ def name(self) -> str:
def pci(self) -> str:
"""The PCI address of the port."""
return self.config.pci
+
+ def configure_mtu(self, mtu: int):
+ """Configure the port's MTU value.
+
+ Args:
+ mtu: Desired MTU value.
+ """
+ return self.node.main_session.configure_port_mtu(mtu, self)
+
+ def to_dict(self) -> dict[str, Any]:
+ """Convert to a dictionary."""
+ return {
+ "node_name": self.node.name,
+ "name": self.name,
+ "pci": self.pci,
+ "mac_address": self.mac_address,
+ "logical_name": self.logical_name,
+ }
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 440b5a059b..9007d89b1c 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -13,6 +13,7 @@
import os
import time
+from collections.abc import Iterable
from dataclasses import dataclass
from pathlib import Path, PurePath
@@ -33,6 +34,7 @@
from framework.exception import ConfigurationError, RemoteFileNotFoundError
from framework.params.eal import EalParams
from framework.remote_session.remote_session import CommandResult
+from framework.testbed_model.port import Port
from framework.utils import MesonArgs, TarCompressionFormat
from .cpu import LogicalCore, LogicalCoreList
@@ -86,7 +88,6 @@ class SutNode(Node):
_node_info: OSSessionInfo | None
_compiler_version: str | None
_path_to_devbind_script: PurePath | None
- _ports_bound_to_dpdk: bool
def __init__(self, node_config: SutNodeConfiguration):
"""Extend the constructor with SUT node specifics.
@@ -196,11 +197,7 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
"""
return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
- def set_up_test_run(
- self,
- test_run_config: TestRunConfiguration,
- dpdk_build_config: DPDKBuildConfiguration,
- ) -> None:
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
"""Extend the test run setup with vdev config and DPDK build set up.
This method extends the setup process by configuring virtual devices and preparing the DPDK
@@ -209,22 +206,25 @@ def set_up_test_run(
Args:
test_run_config: A test run configuration according to which
the setup steps will be taken.
- dpdk_build_config: The build configuration of DPDK.
+ ports: The ports to set up for the test run.
"""
- super().set_up_test_run(test_run_config, dpdk_build_config)
+ super().set_up_test_run(test_run_config, ports)
for vdev in test_run_config.vdevs:
self.virtual_devices.append(VirtualDevice(vdev))
- self._set_up_dpdk(dpdk_build_config)
+ self._set_up_dpdk(test_run_config.dpdk_config, ports)
+
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
+ """Extend the test run teardown with virtual device teardown and DPDK teardown.
- def tear_down_test_run(self) -> None:
- """Extend the test run teardown with virtual device teardown and DPDK teardown."""
- super().tear_down_test_run()
+ Args:
+ ports: The ports to tear down for the test run.
+ """
+ super().tear_down_test_run(ports)
self.virtual_devices = []
- self._tear_down_dpdk()
+ self._tear_down_dpdk(ports)
def _set_up_dpdk(
- self,
- dpdk_build_config: DPDKBuildConfiguration,
+ self, dpdk_build_config: DPDKBuildConfiguration, ports: Iterable[Port]
) -> None:
"""Set up DPDK the SUT node and bind ports.
@@ -234,6 +234,7 @@ def _set_up_dpdk(
Args:
dpdk_build_config: A DPDK build configuration to test.
+ ports: The ports to use for DPDK.
"""
match dpdk_build_config.dpdk_location:
case RemoteDPDKTreeLocation(dpdk_tree=dpdk_tree):
@@ -254,16 +255,16 @@ def _set_up_dpdk(
self._configure_dpdk_build(build_options)
self._build_dpdk()
- self.bind_ports_to_driver()
+ self.bind_ports_to_driver(ports)
- def _tear_down_dpdk(self) -> None:
+ def _tear_down_dpdk(self, ports: Iterable[Port]) -> None:
"""Reset DPDK variables and bind port driver to the OS driver."""
self._env_vars = {}
self.__remote_dpdk_tree_path = None
self._remote_dpdk_build_dir = None
self._dpdk_version = None
self.compiler_version = None
- self.bind_ports_to_driver(for_dpdk=False)
+ self.bind_ports_to_driver(ports, for_dpdk=False)
def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
"""Set the path to the remote DPDK source tree based on the provided DPDK location.
@@ -504,21 +505,22 @@ def run_dpdk_app(
f"{app_path} {eal_params}", timeout, privileged=True, verify=True
)
- def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
+ def bind_ports_to_driver(self, ports: Iterable[Port], for_dpdk: bool = True) -> None:
"""Bind all ports on the SUT to a driver.
Args:
+ ports: The ports to act on.
for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
If :data:`False`, binds to os_driver.
"""
- if self._ports_bound_to_dpdk == for_dpdk:
- return
+ for port in ports:
+ if port.bound_for_dpdk == for_dpdk:
+ continue
- for port in self.ports:
driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
self.main_session.send_command(
f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
privileged=True,
verify=True,
)
- self._ports_bound_to_dpdk = for_dpdk
+ port.bound_for_dpdk = for_dpdk
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 8ab9ccb438..595836a664 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -9,9 +9,12 @@
A TG node is where the TG runs.
"""
+from collections.abc import Iterable
+
from scapy.packet import Packet
from framework.config.node import TGNodeConfiguration
+from framework.config.test_run import TestRunConfiguration
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
PacketFilteringConfig,
)
@@ -51,9 +54,24 @@ def __init__(self, node_config: TGNodeConfiguration):
self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
self._logger.info(f"Created node: {self.name}")
- def _init_ports(self) -> None:
- super()._init_ports()
- self.main_session.bring_up_link(self.ports)
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
+ """Extend the test run setup with the setup of the traffic generator.
+
+ Args:
+ test_run_config: A test run configuration according to which
+ the setup steps will be taken.
+ ports: The ports to set up for the test run.
+ """
+ super().set_up_test_run(test_run_config, ports)
+ self.main_session.bring_up_link(ports)
+
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
+ """Extend the test run teardown with the teardown of the traffic generator.
+
+ Args:
+ ports: The ports to tear down for the test run.
+ """
+ super().tear_down_test_run(ports)
def send_packets_and_capture(
self,
--
2.43.0
* [RFC PATCH 5/7] dts: add runtime status
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (3 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 4/7] dts: improve Port model Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-11 19:45 ` Dean Marx
2025-02-12 18:50 ` Nicholas Pratte
2025-02-03 15:16 ` [RFC PATCH 6/7] dts: add global runtime context Luca Vizzarro
` (3 subsequent siblings)
8 siblings, 2 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Add a new module which defines the global runtime status of DTS and the
distinct execution stages and steps.
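The stages could be modelled roughly as below: a plain enum for the stage plus a named tuple carrying a label for log records. This is a sketch only — the actual module in the diff uses the framework's own `StrEnum` and its own state definitions:

```python
from enum import Enum, auto
from typing import NamedTuple


class Stage(Enum):
    """Coarse execution stage of a DTS run."""

    PRE_RUN = auto()
    TEST_RUN = auto()
    TEST_SUITE = auto()
    TEST_CASE = auto()
    POST_RUN = auto()


class State(NamedTuple):
    """A stage paired with a human-readable label, e.g. for log records."""

    stage: Stage
    description: str

    def __str__(self) -> str:
        return f"{self.stage.name.lower()}: {self.description}"


# Example state constant, analogous to the PRE_RUN used by the logger.
PRE_RUN = State(Stage.PRE_RUN, "setup")
```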
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
doc/api/dts/framework.status.rst | 8 ++++
doc/api/dts/index.rst | 1 +
dts/framework/logger.py | 36 ++++--------------
dts/framework/status.py | 64 ++++++++++++++++++++++++++++++++
4 files changed, 81 insertions(+), 28 deletions(-)
create mode 100644 doc/api/dts/framework.status.rst
create mode 100644 dts/framework/status.py
diff --git a/doc/api/dts/framework.status.rst b/doc/api/dts/framework.status.rst
new file mode 100644
index 0000000000..07277b5301
--- /dev/null
+++ b/doc/api/dts/framework.status.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+status - DTS status definitions
+===========================================================
+
+.. automodule:: framework.status
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index 534512dc17..cde603576c 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -29,6 +29,7 @@ Modules
framework.test_suite
framework.test_result
framework.settings
+ framework.status
framework.logger
framework.parser
framework.utils
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index d2b8e37da4..7b1c8e6637 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -13,37 +13,17 @@
"""
import logging
-from enum import auto
from logging import FileHandler, StreamHandler
from pathlib import Path
from typing import ClassVar
-from .utils import StrEnum
+from framework.status import PRE_RUN, State
date_fmt = "%Y/%m/%d %H:%M:%S"
stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s"
dts_root_logger_name = "dts"
-class DtsStage(StrEnum):
- """The DTS execution stage."""
-
- #:
- pre_run = auto()
- #:
- test_run_setup = auto()
- #:
- test_suite_setup = auto()
- #:
- test_suite = auto()
- #:
- test_suite_teardown = auto()
- #:
- test_run_teardown = auto()
- #:
- post_run = auto()
-
-
class DTSLogger(logging.Logger):
"""The DTS logger class.
@@ -55,7 +35,7 @@ class DTSLogger(logging.Logger):
a new stage switch occurs. This is useful mainly for logging per test suite.
"""
- _stage: ClassVar[DtsStage] = DtsStage.pre_run
+ _stage: ClassVar[State] = PRE_RUN
_extra_file_handlers: list[FileHandler] = []
def __init__(self, *args, **kwargs):
@@ -75,7 +55,7 @@ def makeRecord(self, *args, **kwargs) -> logging.LogRecord:
record: The generated record with the stage information.
"""
record = super().makeRecord(*args, **kwargs)
- record.stage = DTSLogger._stage
+ record.stage = str(DTSLogger._stage)
return record
def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
@@ -110,7 +90,7 @@ def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
self._add_file_handlers(Path(output_dir, self.name))
- def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
+ def set_stage(self, state: State, log_file_path: Path | None = None) -> None:
"""Set the DTS execution stage and optionally log to files.
Set the DTS execution stage of the DTSLog class and optionally add
@@ -120,15 +100,15 @@ def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
the other one is a machine-readable log file with extra debug information.
Args:
- stage: The DTS stage to set.
+ state: The DTS execution state to set.
log_file_path: An optional path of the log file to use. This should be a full path
(either relative or absolute) without suffix (which will be appended).
"""
self._remove_extra_file_handlers()
- if DTSLogger._stage != stage:
- self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{stage}'.")
- DTSLogger._stage = stage
+ if DTSLogger._stage != state:
+ self.info(f"Moving from stage '{DTSLogger._stage}' " f"to stage '{state}'.")
+ DTSLogger._stage = state
if log_file_path:
self._extra_file_handlers.extend(self._add_file_handlers(log_file_path))
diff --git a/dts/framework/status.py b/dts/framework/status.py
new file mode 100644
index 0000000000..4a59aa50e6
--- /dev/null
+++ b/dts/framework/status.py
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+"""Running status of DTS.
+
+This module contains the definitions that represent the different states of execution within DTS.
+"""
+
+from enum import auto
+from typing import NamedTuple
+
+from .utils import StrEnum
+
+
+class Stage(StrEnum):
+ """Execution stage."""
+
+ #:
+ PRE_RUN = auto()
+ #:
+ TEST_RUN = auto()
+ #:
+ TEST_SUITE = auto()
+ #:
+ TEST_CASE = auto()
+ #:
+ POST_RUN = auto()
+
+
+class InternalState(StrEnum):
+ """Internal state of the current execution stage."""
+
+ #:
+ BEGIN = auto()
+ #:
+ SETUP = auto()
+ #:
+ RUN = auto()
+ #:
+ TEARDOWN = auto()
+ #:
+ END = auto()
+
+
+class State(NamedTuple):
+ """Representation of the DTS execution state."""
+
+ #:
+ stage: Stage
+ #:
+ state: InternalState
+
+ def __str__(self) -> str:
+ """A formatted name."""
+ name = self.stage.value.lower()
+ if self.state is not InternalState.RUN:
+ return f"{name}_{self.state.value.lower()}"
+ return name
+
+
+#: A ready-made pre-run DTS state.
+PRE_RUN = State(Stage.PRE_RUN, InternalState.RUN)
+#: A ready-made post-run DTS state.
+POST_RUN = State(Stage.POST_RUN, InternalState.RUN)
--
2.43.0
* [RFC PATCH 6/7] dts: add global runtime context
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (4 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-11 20:26 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 7/7] dts: revamp runtime internals Luca Vizzarro
` (2 subsequent siblings)
8 siblings, 1 reply; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Add a new context module which holds the runtime context. The new
context will describe the current scenario and aid underlying classes
used by the test suites to automatically infer their parameters. This
further simplifies the test writing process, as the test writer no
longer needs to be concerned with the nodes and can directly use
context-aware tools, e.g. TestPmdShell, as needed.
Bugzilla ID: 1461
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
doc/api/dts/framework.context.rst | 8 ++
doc/api/dts/index.rst | 1 +
dts/framework/context.py | 107 ++++++++++++++++++
dts/framework/remote_session/dpdk_shell.py | 53 +++------
.../single_active_interactive_shell.py | 14 +--
dts/framework/remote_session/testpmd_shell.py | 27 ++---
dts/framework/test_suite.py | 73 ++++++------
dts/tests/TestSuite_blocklist.py | 6 +-
dts/tests/TestSuite_checksum_offload.py | 14 +--
dts/tests/TestSuite_dual_vlan.py | 6 +-
dts/tests/TestSuite_dynamic_config.py | 8 +-
dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
dts/tests/TestSuite_hello_world.py | 2 +-
dts/tests/TestSuite_l2fwd.py | 9 +-
dts/tests/TestSuite_mac_filter.py | 10 +-
dts/tests/TestSuite_mtu.py | 17 +--
dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
...stSuite_port_restart_config_persistency.py | 8 +-
dts/tests/TestSuite_promisc_support.py | 8 +-
dts/tests/TestSuite_smoke_tests.py | 3 +-
dts/tests/TestSuite_softnic.py | 4 +-
dts/tests/TestSuite_uni_pkt.py | 14 +--
dts/tests/TestSuite_vlan.py | 8 +-
23 files changed, 237 insertions(+), 173 deletions(-)
create mode 100644 doc/api/dts/framework.context.rst
create mode 100644 dts/framework/context.py
diff --git a/doc/api/dts/framework.context.rst b/doc/api/dts/framework.context.rst
new file mode 100644
index 0000000000..a8c8b5022e
--- /dev/null
+++ b/doc/api/dts/framework.context.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+context - DTS execution context
+===========================================================
+
+.. automodule:: framework.context
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index cde603576c..b211571430 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -29,6 +29,7 @@ Modules
framework.test_suite
framework.test_result
framework.settings
+ framework.context
framework.status
framework.logger
framework.parser
diff --git a/dts/framework/context.py b/dts/framework/context.py
new file mode 100644
index 0000000000..fc7fb6719b
--- /dev/null
+++ b/dts/framework/context.py
@@ -0,0 +1,107 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+"""Runtime contexts."""
+
+import functools
+from dataclasses import MISSING, dataclass, field, fields
+from typing import TYPE_CHECKING, ParamSpec
+
+from framework.exception import InternalError
+from framework.settings import SETTINGS
+from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.topology import Topology
+
+if TYPE_CHECKING:
+ from framework.testbed_model.sut_node import SutNode
+ from framework.testbed_model.tg_node import TGNode
+
+P = ParamSpec("P")
+
+
+@dataclass
+class LocalContext:
+ """Updatable context local to test suites and cases.
+
+ Attributes:
+ lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
+ use. The default will select one lcore for each of two cores on one socket, in ascending
+ order of core ids.
+ ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+ sort in descending order.
+ append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
+ timeout: The timeout used for the SSH channel that is dedicated to this interactive
+ shell. This timeout is for collecting output, so if reading from the buffer
+ and no output is gathered within the timeout, an exception is thrown.
+ """
+
+ lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = field(
+ default_factory=LogicalCoreCount
+ )
+ ascending_cores: bool = True
+ append_prefix_timestamp: bool = True
+ timeout: float = SETTINGS.timeout
+
+ def reset(self) -> None:
+ """Reset the local context to the default values."""
+ for _field in fields(LocalContext):
+ default = (
+ _field.default_factory()
+ if _field.default_factory is not MISSING
+ else _field.default
+ )
+
+ assert (
+ default is not MISSING
+ ), f"{LocalContext.__name__} must have defaults on all fields!"
+
+ setattr(self, _field.name, default)
+
+
+@dataclass(frozen=True)
+class Context:
+ """Runtime context."""
+
+ sut_node: "SutNode"
+ tg_node: "TGNode"
+ topology: Topology
+ local: LocalContext = field(default_factory=LocalContext)
+
+
+__current_ctx: Context | None = None
+
+
+def get_ctx() -> Context:
+ """Retrieve the current runtime context.
+
+ Raises:
+ InternalError: If there is no context.
+ """
+ if __current_ctx:
+ return __current_ctx
+
+ raise InternalError("Attempted to retrieve context that has not been initialized yet.")
+
+
+def init_ctx(ctx: Context) -> None:
+ """Initialize context."""
+ global __current_ctx
+ __current_ctx = ctx
+
+
+def filter_cores(specifier: LogicalCoreCount | LogicalCoreList):
+ """Decorates functions that require a temporary update to the lcore specifier."""
+
+ def decorator(func):
+ @functools.wraps(func)
+ def wrapper(*args: P.args, **kwargs: P.kwargs):
+ local_ctx = get_ctx().local
+ old_specifier = local_ctx.lcore_filter_specifier
+ local_ctx.lcore_filter_specifier = specifier
+ result = func(*args, **kwargs)
+ local_ctx.lcore_filter_specifier = old_specifier
+ return result
+
+ return wrapper
+
+ return decorator
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index c11d9ab81c..b55deb7fa0 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -9,54 +9,45 @@
from abc import ABC
from pathlib import PurePath
+from framework.context import get_ctx
from framework.params.eal import EalParams
from framework.remote_session.single_active_interactive_shell import (
SingleActiveInteractiveShell,
)
-from framework.settings import SETTINGS
-from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.cpu import LogicalCoreList
from framework.testbed_model.sut_node import SutNode
def compute_eal_params(
- sut_node: SutNode,
params: EalParams | None = None,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
) -> EalParams:
"""Compute EAL parameters based on the node's specifications.
Args:
- sut_node: The SUT node to compute the values for.
params: If :data:`None`, a new object is created and returned. Otherwise `params.lcore_list`
is modified according to `lcore_filter_specifier`. A DPDK file prefix is also added. If
`params.ports` is :data:`None`, then `sut_node.ports` is assigned to it.
- lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
- use. The default will select one lcore for each of two cores on one socket, in ascending
- order of core ids.
- ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
- sort in descending order.
- append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
"""
+ ctx = get_ctx()
+
if params is None:
params = EalParams()
if params.lcore_list is None:
params.lcore_list = LogicalCoreList(
- sut_node.filter_lcores(lcore_filter_specifier, ascending_cores)
+ ctx.sut_node.filter_lcores(ctx.local.lcore_filter_specifier, ctx.local.ascending_cores)
)
prefix = params.prefix
- if append_prefix_timestamp:
- prefix = f"{prefix}_{sut_node.dpdk_timestamp}"
- prefix = sut_node.main_session.get_dpdk_file_prefix(prefix)
+ if ctx.local.append_prefix_timestamp:
+ prefix = f"{prefix}_{ctx.sut_node.dpdk_timestamp}"
+ prefix = ctx.sut_node.main_session.get_dpdk_file_prefix(prefix)
if prefix:
- sut_node.dpdk_prefix_list.append(prefix)
+ ctx.sut_node.dpdk_prefix_list.append(prefix)
params.prefix = prefix
if params.allowed_ports is None:
- params.allowed_ports = sut_node.ports
+ params.allowed_ports = ctx.topology.sut_ports
return params
@@ -74,29 +65,15 @@ class DPDKShell(SingleActiveInteractiveShell, ABC):
def __init__(
self,
- node: SutNode,
+ name: str | None = None,
privileged: bool = True,
- timeout: float = SETTINGS.timeout,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
app_params: EalParams = EalParams(),
- name: str | None = None,
) -> None:
- """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`.
-
- Adds the `lcore_filter_specifier`, `ascending_cores` and `append_prefix_timestamp` arguments
- which are then used to compute the EAL parameters based on the node's configuration.
- """
- app_params = compute_eal_params(
- node,
- app_params,
- lcore_filter_specifier,
- ascending_cores,
- append_prefix_timestamp,
- )
+ """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`."""
+ app_params = compute_eal_params(app_params)
+ node = get_ctx().sut_node
- super().__init__(node, privileged, timeout, app_params, name)
+ super().__init__(node, name, privileged, app_params)
def _update_real_path(self, path: PurePath) -> None:
"""Extends :meth:`~.interactive_shell.InteractiveShell._update_real_path`.
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index cfe5baec14..2eec2f698a 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -27,6 +27,7 @@
from paramiko import Channel, channel
from typing_extensions import Self
+from framework.context import get_ctx
from framework.exception import (
InteractiveCommandExecutionError,
InteractiveSSHSessionDeadError,
@@ -34,7 +35,6 @@
)
from framework.logger import DTSLogger, get_dts_logger
from framework.params import Params
-from framework.settings import SETTINGS
from framework.testbed_model.node import Node
from framework.utils import MultiInheritanceBaseClass
@@ -90,10 +90,9 @@ class SingleActiveInteractiveShell(MultiInheritanceBaseClass, ABC):
def __init__(
self,
node: Node,
+ name: str | None = None,
privileged: bool = False,
- timeout: float = SETTINGS.timeout,
app_params: Params = Params(),
- name: str | None = None,
**kwargs,
) -> None:
"""Create an SSH channel during initialization.
@@ -103,13 +102,10 @@ def __init__(
Args:
node: The node on which to start the interactive shell.
- privileged: Enables the shell to run as superuser.
- timeout: The timeout used for the SSH channel that is dedicated to this interactive
- shell. This timeout is for collecting output, so if reading from the buffer
- and no output is gathered within the timeout, an exception is thrown.
- app_params: The command line parameters to be passed to the application on startup.
name: Name for the interactive shell to use for logging. This name will be appended to
the name of the underlying node which it is running on.
+ privileged: Enables the shell to run as superuser.
+ app_params: The command line parameters to be passed to the application on startup.
**kwargs: Any additional arguments if any.
"""
self._node = node
@@ -118,7 +114,7 @@ def __init__(
self._logger = get_dts_logger(f"{node.name}.{name}")
self._app_params = app_params
self._privileged = privileged
- self._timeout = timeout
+ self._timeout = get_ctx().local.timeout
# Ensure path is properly formatted for the host
self._update_real_path(self.path)
super().__init__(**kwargs)
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 9f07696aa2..c63d532e16 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -24,6 +24,9 @@
from pathlib import PurePath
from typing import TYPE_CHECKING, Any, ClassVar, Concatenate, ParamSpec, TypeAlias
+from framework.context import get_ctx
+from framework.testbed_model.topology import TopologyType
+
if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
from enum import Enum as NoAliasEnum
else:
@@ -32,13 +35,11 @@
from typing_extensions import Self, Unpack
from framework.exception import InteractiveCommandExecutionError, InternalError
-from framework.params.testpmd import SimpleForwardingModes, TestPmdParams
+from framework.params.testpmd import PortTopology, SimpleForwardingModes, TestPmdParams
from framework.params.types import TestPmdParamsDict
from framework.parser import ParserFn, TextParser
from framework.remote_session.dpdk_shell import DPDKShell
from framework.settings import SETTINGS
-from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
-from framework.testbed_model.sut_node import SutNode
from framework.utils import REGEX_FOR_MAC_ADDRESS, StrEnum
P = ParamSpec("P")
@@ -1507,26 +1508,14 @@ class TestPmdShell(DPDKShell):
def __init__(
self,
- node: SutNode,
- privileged: bool = True,
- timeout: float = SETTINGS.timeout,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
name: str | None = None,
+ privileged: bool = True,
**app_params: Unpack[TestPmdParamsDict],
) -> None:
"""Overrides :meth:`~.dpdk_shell.DPDKShell.__init__`. Changes app_params to kwargs."""
- super().__init__(
- node,
- privileged,
- timeout,
- lcore_filter_specifier,
- ascending_cores,
- append_prefix_timestamp,
- TestPmdParams(**app_params),
- name,
- )
+ if "port_topology" not in app_params and get_ctx().topology.type is TopologyType.one_link:
+ app_params["port_topology"] = PortTopology.loop
+ super().__init__(name, privileged, TestPmdParams(**app_params))
self.ports_started = not self._app_params.disable_device_start
self.currently_forwarding = not self._app_params.auto_start
self._ports = None
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3d168d522b..b9b527e40d 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -24,7 +24,7 @@
from ipaddress import IPv4Interface, IPv6Interface, ip_interface
from pkgutil import iter_modules
from types import ModuleType
-from typing import ClassVar, Protocol, TypeVar, Union, cast
+from typing import TYPE_CHECKING, ClassVar, Protocol, TypeVar, Union, cast
from scapy.layers.inet import IP
from scapy.layers.l2 import Ether
@@ -32,9 +32,6 @@
from typing_extensions import Self
from framework.testbed_model.capability import TestProtocol
-from framework.testbed_model.port import Port
-from framework.testbed_model.sut_node import SutNode
-from framework.testbed_model.tg_node import TGNode
from framework.testbed_model.topology import Topology
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
PacketFilteringConfig,
@@ -44,6 +41,9 @@
from .logger import DTSLogger, get_dts_logger
from .utils import get_packet_summaries, to_pascal_case
+if TYPE_CHECKING:
+ from framework.context import Context
+
class TestSuite(TestProtocol):
"""The base class with building blocks needed by most test cases.
@@ -69,33 +69,19 @@ class TestSuite(TestProtocol):
The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
properly choose the IP addresses and other configuration that must be tailored to the testbed.
-
- Attributes:
- sut_node: The SUT node where the test suite is running.
- tg_node: The TG node where the test suite is running.
"""
- sut_node: SutNode
- tg_node: TGNode
#: Whether the test suite is blocking. A failure of a blocking test suite
#: will block the execution of all subsequent test suites in the current test run.
is_blocking: ClassVar[bool] = False
+ _ctx: "Context"
_logger: DTSLogger
- _sut_port_ingress: Port
- _sut_port_egress: Port
_sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
_sut_ip_address_egress: Union[IPv4Interface, IPv6Interface]
- _tg_port_ingress: Port
- _tg_port_egress: Port
_tg_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
_tg_ip_address_egress: Union[IPv4Interface, IPv6Interface]
- def __init__(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- ):
+ def __init__(self):
"""Initialize the test suite testbed information and basic configuration.
Find links between ports and set up default IP addresses to be used when
@@ -106,18 +92,25 @@ def __init__(
tg_node: The TG node where the test suite will run.
topology: The topology where the test suite will run.
"""
- self.sut_node = sut_node
- self.tg_node = tg_node
+ from framework.context import get_ctx
+
+ self._ctx = get_ctx()
self._logger = get_dts_logger(self.__class__.__name__)
- self._tg_port_egress = topology.tg_port_egress
- self._sut_port_ingress = topology.sut_port_ingress
- self._sut_port_egress = topology.sut_port_egress
- self._tg_port_ingress = topology.tg_port_ingress
self._sut_ip_address_ingress = ip_interface("192.168.100.2/24")
self._sut_ip_address_egress = ip_interface("192.168.101.2/24")
self._tg_ip_address_egress = ip_interface("192.168.100.3/24")
self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
+ @property
+ def name(self) -> str:
+ """The name of the test suite class."""
+ return type(self).__name__
+
+ @property
+ def topology(self) -> Topology:
+ """The current topology in use."""
+ return self._ctx.topology
+
@classmethod
def get_test_cases(cls) -> list[type["TestCase"]]:
"""A list of all the available test cases."""
@@ -254,10 +247,10 @@ def send_packets_and_capture(
A list of received packets.
"""
packets = self._adjust_addresses(packets)
- return self.tg_node.send_packets_and_capture(
+ return self._ctx.tg_node.send_packets_and_capture(
packets,
- self._tg_port_egress,
- self._tg_port_ingress,
+ self._ctx.topology.tg_port_egress,
+ self._ctx.topology.tg_port_ingress,
filter_config,
duration,
)
@@ -272,7 +265,7 @@ def send_packets(
packets: Packets to send.
"""
packets = self._adjust_addresses(packets)
- self.tg_node.send_packets(packets, self._tg_port_egress)
+ self._ctx.tg_node.send_packets(packets, self._ctx.topology.tg_port_egress)
def get_expected_packets(
self,
@@ -352,15 +345,15 @@ def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> li
# only be the Ether src/dst.
if "src" not in packet.fields:
packet.src = (
- self._sut_port_egress.mac_address
+ self.topology.sut_port_egress.mac_address
if expected
- else self._tg_port_egress.mac_address
+ else self.topology.tg_port_egress.mac_address
)
if "dst" not in packet.fields:
packet.dst = (
- self._tg_port_ingress.mac_address
+ self.topology.tg_port_ingress.mac_address
if expected
- else self._sut_port_ingress.mac_address
+ else self.topology.sut_port_ingress.mac_address
)
# update l3 addresses
@@ -400,10 +393,10 @@ def verify(self, condition: bool, failure_description: str) -> None:
def _fail_test_case_verify(self, failure_description: str) -> None:
self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
- for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ for command_res in self._ctx.sut_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
- for command_res in self.tg_node.main_session.remote_session.history[-10:]:
+ for command_res in self._ctx.tg_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
raise TestCaseVerifyError(failure_description)
@@ -517,14 +510,14 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
self._logger.debug("Looking at the Ether layer.")
self._logger.debug(
f"Comparing received dst mac '{received_packet.dst}' "
- f"with expected '{self._tg_port_ingress.mac_address}'."
+ f"with expected '{self.topology.tg_port_ingress.mac_address}'."
)
- if received_packet.dst != self._tg_port_ingress.mac_address:
+ if received_packet.dst != self.topology.tg_port_ingress.mac_address:
return False
- expected_src_mac = self._tg_port_egress.mac_address
+ expected_src_mac = self.topology.tg_port_egress.mac_address
if l3:
- expected_src_mac = self._sut_port_egress.mac_address
+ expected_src_mac = self.topology.sut_port_egress.mac_address
self._logger.debug(
f"Comparing received src mac '{received_packet.src}' "
f"with expected '{expected_src_mac}'."
diff --git a/dts/tests/TestSuite_blocklist.py b/dts/tests/TestSuite_blocklist.py
index b9e9cd1d1a..ce7da1cc8f 100644
--- a/dts/tests/TestSuite_blocklist.py
+++ b/dts/tests/TestSuite_blocklist.py
@@ -18,7 +18,7 @@ class TestBlocklist(TestSuite):
def verify_blocklisted_ports(self, ports_to_block: list[Port]):
"""Runs testpmd with the given ports blocklisted and verifies the ports."""
- with TestPmdShell(self.sut_node, allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
+ with TestPmdShell(allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
allowlisted_ports = {port.device_name for port in testpmd.show_port_info_all()}
blocklisted_ports = {port.pci for port in ports_to_block}
@@ -49,7 +49,7 @@ def one_port_blocklisted(self):
Verify:
That the port was successfully blocklisted.
"""
- self.verify_blocklisted_ports(self.sut_node.ports[:1])
+ self.verify_blocklisted_ports(self.topology.sut_ports[:1])
@func_test
def all_but_one_port_blocklisted(self):
@@ -60,4 +60,4 @@ def all_but_one_port_blocklisted(self):
Verify:
That all specified ports were successfully blocklisted.
"""
- self.verify_blocklisted_ports(self.sut_node.ports[:-1])
+ self.verify_blocklisted_ports(self.topology.sut_ports[:-1])
diff --git a/dts/tests/TestSuite_checksum_offload.py b/dts/tests/TestSuite_checksum_offload.py
index a8bb6a71f7..b38d73421b 100644
--- a/dts/tests/TestSuite_checksum_offload.py
+++ b/dts/tests/TestSuite_checksum_offload.py
@@ -128,7 +128,7 @@ def test_insert_checksums(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -160,7 +160,7 @@ def test_no_insert_checksums(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
testpmd.start()
@@ -190,7 +190,7 @@ def test_l4_rx_checksum(self) -> None:
Ether(dst=mac_id) / IP() / UDP(chksum=0xF),
Ether(dst=mac_id) / IP() / TCP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -223,7 +223,7 @@ def test_l3_rx_checksum(self) -> None:
Ether(dst=mac_id) / IP(chksum=0xF) / UDP(),
Ether(dst=mac_id) / IP(chksum=0xF) / TCP(),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -260,7 +260,7 @@ def test_validate_rx_checksum(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP(chksum=0xF),
Ether(dst=mac_id) / IPv6(src="::1") / TCP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -299,7 +299,7 @@ def test_vlan_checksum(self) -> None:
Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / UDP(chksum=0xF) / Raw(payload),
Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / TCP(chksum=0xF) / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -333,7 +333,7 @@ def test_validate_sctp_checksum(self) -> None:
Ether(dst=mac_id) / IP() / SCTP(),
Ether(dst=mac_id) / IP() / SCTP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
testpmd.csum_set_hw(layers=ChecksumOffloadOptions.sctp)
diff --git a/dts/tests/TestSuite_dual_vlan.py b/dts/tests/TestSuite_dual_vlan.py
index bdbee7e8d1..6af503528d 100644
--- a/dts/tests/TestSuite_dual_vlan.py
+++ b/dts/tests/TestSuite_dual_vlan.py
@@ -193,7 +193,7 @@ def insert_second_vlan(self) -> None:
Packets are received.
Packet contains two VLAN tags.
"""
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.tx_vlan_set(port=self.tx_port, enable=True, vlan=self.vlan_insert_tag)
testpmd.start()
recv = self.send_packet_and_capture(
@@ -229,7 +229,7 @@ def all_vlan_functions(self) -> None:
/ Dot1Q(vlan=self.inner_vlan_tag)
/ Raw(b"X" * 20)
)
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.start()
recv = self.send_packet_and_capture(send_pkt)
self.verify(len(recv) > 0, "Unmodified packet was not received.")
@@ -269,7 +269,7 @@ def maintains_priority(self) -> None:
/ Dot1Q(vlan=self.inner_vlan_tag, prio=2)
/ Raw(b"X" * 20)
)
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.start()
recv = self.send_packet_and_capture(pkt)
self.verify(len(recv) > 0, "Did not receive any packets when testing VLAN priority.")
diff --git a/dts/tests/TestSuite_dynamic_config.py b/dts/tests/TestSuite_dynamic_config.py
index 5a33f6f3c2..a4bee2e90b 100644
--- a/dts/tests/TestSuite_dynamic_config.py
+++ b/dts/tests/TestSuite_dynamic_config.py
@@ -88,7 +88,7 @@ def test_default_mode(self) -> None:
and sends two packets; one matching source MAC address and one unknown.
Verifies that both are received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
is_promisc = testpmd.show_port_info(0).is_promiscuous_mode_enabled
self.verify(is_promisc, "Promiscuous mode was not enabled by default.")
testpmd.start()
@@ -106,7 +106,7 @@ def test_disable_promisc(self) -> None:
and sends two packets; one matching source MAC address and one unknown.
Verifies that only the matching address packet is received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
mac = testpmd.show_port_info(0).mac_address
self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
@@ -120,7 +120,7 @@ def test_disable_promisc_broadcast(self) -> None:
and sends two packets; one matching source MAC address and one broadcast.
Verifies that both packets are received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
mac = testpmd.show_port_info(0).mac_address
self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
@@ -134,7 +134,7 @@ def test_disable_promisc_multicast(self) -> None:
and sends two packets; one matching source MAC address and one multicast.
Verifies that the multicast packet is only received once allmulticast mode is enabled.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
testpmd.set_multicast_all(on=False)
# 01:00:5E:00:00:01 is the first of the multicast MAC range of addresses
diff --git a/dts/tests/TestSuite_dynamic_queue_conf.py b/dts/tests/TestSuite_dynamic_queue_conf.py
index e55716f545..344dd540eb 100644
--- a/dts/tests/TestSuite_dynamic_queue_conf.py
+++ b/dts/tests/TestSuite_dynamic_queue_conf.py
@@ -84,7 +84,6 @@ def wrap(self: "TestDynamicQueueConf", is_rx_testing: bool) -> None:
queues_to_config.add(random.randint(1, self.number_of_queues - 1))
unchanged_queues = set(range(self.number_of_queues)) - queues_to_config
with TestPmdShell(
- self.sut_node,
port_topology=PortTopology.chained,
rx_queues=self.number_of_queues,
tx_queues=self.number_of_queues,
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 031b94de4d..141f2bc4c9 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -23,6 +23,6 @@ def test_hello_world(self) -> None:
Verify:
The testpmd session throws no errors.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
self.log("Hello World!")
diff --git a/dts/tests/TestSuite_l2fwd.py b/dts/tests/TestSuite_l2fwd.py
index 0f6ff18907..0555d75ed8 100644
--- a/dts/tests/TestSuite_l2fwd.py
+++ b/dts/tests/TestSuite_l2fwd.py
@@ -7,6 +7,7 @@
The forwarding test is performed with several packets being sent at once.
"""
+from framework.context import filter_cores
from framework.params.testpmd import EthPeer, SimpleForwardingModes
from framework.remote_session.testpmd_shell import TestPmdShell
from framework.test_suite import TestSuite, func_test
@@ -33,6 +34,7 @@ def set_up_suite(self) -> None:
"""
self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
+ @filter_cores(LogicalCoreCount(cores_per_socket=4))
@func_test
def l2fwd_integrity(self) -> None:
"""Test the L2 forwarding integrity.
@@ -44,11 +46,12 @@ def l2fwd_integrity(self) -> None:
"""
queues = [1, 2, 4, 8]
+ self.topology.sut_ports[0]
+ self.topology.tg_ports[0]
+
with TestPmdShell(
- self.sut_node,
- lcore_filter_specifier=LogicalCoreCount(cores_per_socket=4),
forward_mode=SimpleForwardingModes.mac,
- eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
+ eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
disable_device_start=True,
) as shell:
for queues_num in queues:
diff --git a/dts/tests/TestSuite_mac_filter.py b/dts/tests/TestSuite_mac_filter.py
index 11e4b595c7..e6c55d3ec6 100644
--- a/dts/tests/TestSuite_mac_filter.py
+++ b/dts/tests/TestSuite_mac_filter.py
@@ -101,10 +101,10 @@ def test_add_remove_mac_addresses(self) -> None:
Remove the fake mac address from the PMD's address pool.
Send a packet with the fake mac address to the PMD. (Should not receive)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.set_promisc(0, enable=False)
testpmd.start()
- mac_address = self._sut_port_ingress.mac_address
+ mac_address = self.topology.sut_port_ingress.mac_address
# Send a packet with NIC default mac address
self.send_packet_and_verify(mac_address=mac_address, should_receive=True)
@@ -137,9 +137,9 @@ def test_invalid_address(self) -> None:
Determine the device's mac address pool size, and fill the pool with fake addresses.
Attempt to add another fake mac address, overloading the address pool. (Should fail)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
- mac_address = self._sut_port_ingress.mac_address
+ mac_address = self.topology.sut_port_ingress.mac_address
try:
testpmd.set_mac_addr(0, "00:00:00:00:00:00", add=True)
self.verify(False, "Invalid mac address added.")
@@ -191,7 +191,7 @@ def test_multicast_filter(self) -> None:
Remove the fake multicast address from the PMDs multicast address filter.
Send a packet with the fake multicast address to the PMD. (Should not receive)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
testpmd.set_promisc(0, enable=False)
multicast_address = "01:00:5E:00:00:00"
diff --git a/dts/tests/TestSuite_mtu.py b/dts/tests/TestSuite_mtu.py
index 3c96a36fc9..b445948091 100644
--- a/dts/tests/TestSuite_mtu.py
+++ b/dts/tests/TestSuite_mtu.py
@@ -51,8 +51,8 @@ def set_up_suite(self) -> None:
Set traffic generator MTU lengths to a size greater than scope of all
test cases.
"""
- self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(JUMBO_MTU + 200)
+ self.topology.tg_port_ingress.configure_mtu(JUMBO_MTU + 200)
def send_packet_and_verify(self, pkt_size: int, should_receive: bool) -> None:
"""Generate, send a packet, and assess its behavior based on a given packet size.
@@ -156,11 +156,7 @@ def test_runtime_mtu_updating_and_forwarding(self) -> None:
Verify that standard MTU packets forward, in addition to packets within the limits of
an MTU size set during runtime.
"""
- with TestPmdShell(
- self.sut_node,
- tx_offloads=0x8000,
- mbuf_size=[JUMBO_MTU + 200],
- ) as testpmd:
+ with TestPmdShell(tx_offloads=0x8000, mbuf_size=[JUMBO_MTU + 200]) as testpmd:
# Configure the new MTU.
# Start packet capturing.
@@ -198,7 +194,6 @@ def test_cli_mtu_forwarding_for_std_packets(self) -> None:
MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -227,7 +222,6 @@ def test_cli_jumbo_forwarding_for_jumbo_mtu(self) -> None:
Verify that all packets are forwarded after pre-runtime MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -256,7 +250,6 @@ def test_cli_mtu_std_packets_for_jumbo_mtu(self) -> None:
MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -274,5 +267,5 @@ def tear_down_suite(self) -> None:
Teardown:
Set the MTU size of the traffic generator back to the standard 1518 byte size.
"""
- self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(STANDARD_MTU)
+ self.topology.tg_port_ingress.configure_mtu(STANDARD_MTU)
diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py
index a8c111eea7..5e23f28bc6 100644
--- a/dts/tests/TestSuite_pmd_buffer_scatter.py
+++ b/dts/tests/TestSuite_pmd_buffer_scatter.py
@@ -58,8 +58,8 @@ def set_up_suite(self) -> None:
Increase the MTU of both ports on the traffic generator to 9000
to support larger packet sizes.
"""
- self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(9000)
+ self.topology.tg_port_ingress.configure_mtu(9000)
def scatter_pktgen_send_packet(self, pkt_size: int) -> list[Packet]:
"""Generate and send a packet to the SUT then capture what is forwarded back.
@@ -110,7 +110,6 @@ def pmd_scatter(self, mb_size: int, enable_offload: bool = False) -> None:
Start testpmd and run functional test with preset `mb_size`.
"""
with TestPmdShell(
- self.sut_node,
forward_mode=SimpleForwardingModes.mac,
mbcache=200,
mbuf_size=[mb_size],
@@ -147,5 +146,5 @@ def tear_down_suite(self) -> None:
Teardown:
Set the MTU of the tg_node back to a more standard size of 1500.
"""
- self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(1500)
+ self.topology.tg_port_ingress.configure_mtu(1500)
diff --git a/dts/tests/TestSuite_port_restart_config_persistency.py b/dts/tests/TestSuite_port_restart_config_persistency.py
index ad42c6c2e6..42ea221586 100644
--- a/dts/tests/TestSuite_port_restart_config_persistency.py
+++ b/dts/tests/TestSuite_port_restart_config_persistency.py
@@ -61,8 +61,8 @@ def port_configuration_persistence(self) -> None:
Verify:
The configuration persists after the port is restarted.
"""
- with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell(disable_device_start=True) as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_port_mtu(port_id=port_id, mtu=STANDARD_MTU, verify=True)
self.restart_port_and_verify(port_id, testpmd, "MTU")
@@ -90,8 +90,8 @@ def flow_ctrl_port_configuration_persistence(self) -> None:
Verify:
The configuration persists after the port is restarted.
"""
- with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell(disable_device_start=True) as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
flow_ctrl = TestPmdPortFlowCtrl(rx=True)
testpmd.set_flow_control(port=port_id, flow_ctrl=flow_ctrl)
self.restart_port_and_verify(port_id, testpmd, "flow_ctrl")
diff --git a/dts/tests/TestSuite_promisc_support.py b/dts/tests/TestSuite_promisc_support.py
index a3ea2461f0..445f6e1d69 100644
--- a/dts/tests/TestSuite_promisc_support.py
+++ b/dts/tests/TestSuite_promisc_support.py
@@ -38,10 +38,8 @@ def test_promisc_packets(self) -> None:
"""
packet = [Ether(dst=self.ALTERNATIVE_MAC_ADDRESS) / IP() / Raw(load=b"\x00" * 64)]
- with TestPmdShell(
- self.sut_node,
- ) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell() as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_promisc(port=port_id, enable=True, verify=True)
testpmd.start()
@@ -51,7 +49,7 @@ def test_promisc_packets(self) -> None:
testpmd.stop()
- for port_id in range(len(self.sut_node.ports)):
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_promisc(port=port_id, enable=False, verify=True)
testpmd.start()
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 7ed266dac0..8a5799c684 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -46,6 +46,7 @@ def set_up_suite(self) -> None:
Setup:
Set the build directory path and a list of NICs in the SUT node.
"""
+ self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
self.nics_in_node = self.sut_node.config.ports
@@ -104,7 +105,7 @@ def test_devices_listed_in_testpmd(self) -> None:
Test:
List all devices found in testpmd and verify the configured devices are among them.
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
dev_list = [str(x) for x in testpmd.get_devices()]
for nic in self.nics_in_node:
self.verify(
diff --git a/dts/tests/TestSuite_softnic.py b/dts/tests/TestSuite_softnic.py
index 07480db392..370fd6b419 100644
--- a/dts/tests/TestSuite_softnic.py
+++ b/dts/tests/TestSuite_softnic.py
@@ -32,6 +32,7 @@ def set_up_suite(self) -> None:
Setup:
Generate the random packets that will be sent and create the softnic config files.
"""
+ self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
self.cli_file = self.prepare_softnic_files()
@@ -105,9 +106,8 @@ def softnic(self) -> None:
"""
with TestPmdShell(
- self.sut_node,
vdevs=[VirtualDevice(f"net_softnic0,firmware={self.cli_file},cpu_id=1,conn_port=8086")],
- eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
+ eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
port_topology=None,
) as shell:
shell.start()
diff --git a/dts/tests/TestSuite_uni_pkt.py b/dts/tests/TestSuite_uni_pkt.py
index 0898187675..656a69b0f1 100644
--- a/dts/tests/TestSuite_uni_pkt.py
+++ b/dts/tests/TestSuite_uni_pkt.py
@@ -85,7 +85,7 @@ def test_l2_packet_detect(self) -> None:
mac_id = "00:00:00:00:00:01"
packet_list = [Ether(dst=mac_id, type=0x88F7) / Raw(), Ether(dst=mac_id) / ARP() / Raw()]
flag_list = [RtePTypes.L2_ETHER_TIMESYNC, RtePTypes.L2_ETHER_ARP]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -118,7 +118,7 @@ def test_l3_l4_packet_detect(self) -> None:
RtePTypes.L4_ICMP,
RtePTypes.L4_FRAG | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L2_ETHER,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -147,7 +147,7 @@ def test_ipv6_l4_packet_detect(self) -> None:
RtePTypes.L4_TCP,
RtePTypes.L3_IPV6_EXT_UNKNOWN,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -182,7 +182,7 @@ def test_l3_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_IP | RtePTypes.INNER_L4_ICMP,
RtePTypes.TUNNEL_IP | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -215,7 +215,7 @@ def test_gre_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_SCTP,
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -250,7 +250,7 @@ def test_nsh_packet_detect(self) -> None:
RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L4_SCTP,
RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV6_EXT_UNKNOWN | RtePTypes.L4_NONFRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -295,6 +295,6 @@ def test_vxlan_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.rx_vxlan(4789, 0, True)
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index c67520baef..d2a9e614d4 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -124,7 +124,7 @@ def test_vlan_receipt_no_stripping(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
self.send_vlan_packet_and_verify(True, strip=False, vlan_id=1)
@@ -137,7 +137,7 @@ def test_vlan_receipt_stripping(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.set_vlan_strip(port=0, enable=True)
testpmd.start()
@@ -150,7 +150,7 @@ def test_vlan_no_receipt(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
@@ -162,7 +162,7 @@ def test_vlan_header_insertion(self) -> None:
Test:
Create an interactive testpmd shell and verify a non-VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.mac)
testpmd.set_promisc(port=0, enable=False)
testpmd.stop_all_ports()
--
2.43.0
* [RFC PATCH 7/7] dts: revamp runtime internals
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (5 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 6/7] dts: add global runtime context Luca Vizzarro
@ 2025-02-03 15:16 ` Luca Vizzarro
2025-02-11 20:50 ` Dean Marx
2025-02-04 21:08 ` [RFC PATCH 0/7] dts: revamp framework Dean Marx
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
8 siblings, 1 reply; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-03 15:16 UTC (permalink / raw)
To: dev; +Cc: Luca Vizzarro, Patrick Robb, Paul Szczepanek
Enforce separation of concerns by isolating test runs in a new TestRun
class and its respective module. This also means that any actions taken
on the nodes must be handled exclusively by the test run, one example
being the creation and destruction of the traffic generator.
TestSuiteWithCases is now redundant, as the configuration can provide
all the details about the test run's own test suites. Any other runtime
state which concerns the test runs now belongs to their class.
Finally, as test run execution is now isolated, all the runtime
internals are held in the new class. These internals have been
completely reworked into a finite state machine (FSM) to simplify the
use and understanding of the different execution states, while making
error handling less repetitive and easier.
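The FSM approach can be sketched roughly as follows. This is a minimal
illustration only, not the actual implementation: the `State` names and
the `step` method are hypothetical, and only `spin` loosely mirrors the
`TestRun.spin()` call visible in the diff below.

```python
from enum import Enum, auto


class State(Enum):
    """Hypothetical execution states of a test run."""

    SETUP = auto()
    RUNNING = auto()
    TEARDOWN = auto()
    DONE = auto()


class TestRunFSM:
    """Sketch: each state handler returns the next state, and errors in
    any state funnel into teardown once, instead of repeating
    try/except blocks in every phase of the runner."""

    def __init__(self) -> None:
        self.state = State.SETUP
        self.errors: list[Exception] = []

    def step(self) -> None:
        handlers = {
            State.SETUP: self._setup,
            State.RUNNING: self._run,
            State.TEARDOWN: self._teardown,
        }
        try:
            self.state = handlers[self.state]()
        except Exception as e:
            # Centralised error handling: record the error and jump to
            # teardown; if teardown itself failed, finish the run.
            self.errors.append(e)
            self.state = (
                State.DONE if self.state is State.TEARDOWN else State.TEARDOWN
            )

    def spin(self) -> None:
        """Advance through the states until the run is complete."""
        while self.state is not State.DONE:
            self.step()

    def _setup(self) -> State:
        return State.RUNNING

    def _run(self) -> State:
        return State.TEARDOWN

    def _teardown(self) -> State:
        return State.DONE
```

A failure in any handler is recorded once and still drives the run
through teardown, which is the repetition the commit message refers to
removing.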
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
---
doc/api/dts/framework.test_run.rst | 8 +
doc/api/dts/index.rst | 1 +
doc/guides/conf.py | 3 +-
dts/framework/exception.py | 33 +-
dts/framework/runner.py | 492 +---------------------
dts/framework/test_result.py | 143 +------
dts/framework/test_run.py | 443 +++++++++++++++++++
dts/framework/testbed_model/capability.py | 24 +-
dts/framework/testbed_model/tg_node.py | 6 +-
9 files changed, 524 insertions(+), 629 deletions(-)
create mode 100644 doc/api/dts/framework.test_run.rst
create mode 100644 dts/framework/test_run.py
diff --git a/doc/api/dts/framework.test_run.rst b/doc/api/dts/framework.test_run.rst
new file mode 100644
index 0000000000..8147320ed9
--- /dev/null
+++ b/doc/api/dts/framework.test_run.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+test_run - Test Run Execution
+===========================================================
+
+.. automodule:: framework.test_run
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index b211571430..c76725eb75 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -26,6 +26,7 @@ Modules
:maxdepth: 1
framework.runner
+ framework.test_run
framework.test_suite
framework.test_result
framework.settings
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index e7508ea1d5..9ccd7d0c84 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -59,7 +59,8 @@
# DTS API docs additional configuration
if environ.get('DTS_DOC_BUILD'):
- extensions = ['sphinx.ext.napoleon', 'sphinx.ext.autodoc']
+ extensions = ['sphinx.ext.napoleon', 'sphinx.ext.autodoc', 'sphinx.ext.graphviz']
+ graphviz_output_format = "svg"
# Pydantic models require autodoc_pydantic for the right formatting
try:
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index d967ede09b..47e3fac05c 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -205,28 +205,27 @@ class TestCaseVerifyError(DTSError):
severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
-class BlockingTestSuiteError(DTSError):
- """A failure in a blocking test suite."""
+class InternalError(DTSError):
+ """An internal error or bug has occurred in DTS."""
#:
- severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
- _suite_name: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.INTERNAL_ERR
- def __init__(self, suite_name: str) -> None:
- """Define the meaning of the first argument.
- Args:
- suite_name: The blocking test suite.
- """
- self._suite_name = suite_name
+class SkippedTestException(DTSError):
+ """An exception raised when a test suite or case has been skipped."""
- def __str__(self) -> str:
- """Add some context to the string representation."""
- return f"Blocking suite {self._suite_name} failed."
+ #:
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.NO_ERR
+ def __init__(self, reason: str) -> None:
+ """Constructor.
-class InternalError(DTSError):
- """An internal error or bug has occurred in DTS."""
+ Args:
+ reason: The reason for the test being skipped.
+ """
+ self._reason = reason
- #:
- severity: ClassVar[ErrorSeverity] = ErrorSeverity.INTERNAL_ERR
+ def __str__(self) -> str:
+ """Stringify the exception."""
+ return self._reason
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 60a885d8e6..8f5bf716a3 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -19,14 +19,12 @@
"""
import os
-import random
import sys
-from pathlib import Path
-from types import MethodType
-from typing import Iterable
from framework.config.common import ValidationContext
-from framework.testbed_model.capability import Capability, get_supported_capabilities
+from framework.status import POST_RUN
+from framework.test_run import TestRun
+from framework.testbed_model.node import Node
from framework.testbed_model.sut_node import SutNode
from framework.testbed_model.tg_node import TGNode
@@ -38,23 +36,12 @@
SutNodeConfiguration,
TGNodeConfiguration,
)
-from .config.test_run import (
- TestRunConfiguration,
- TestSuiteConfig,
-)
-from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError
-from .logger import DTSLogger, DtsStage, get_dts_logger
+from .logger import DTSLogger, get_dts_logger
from .settings import SETTINGS
from .test_result import (
DTSResult,
Result,
- TestCaseResult,
- TestRunResult,
- TestSuiteResult,
- TestSuiteWithCases,
)
-from .test_suite import TestCase, TestSuite
-from .testbed_model.topology import PortLink, Topology
class DTSRunner:
@@ -79,10 +66,6 @@ class DTSRunner:
_configuration: Configuration
_logger: DTSLogger
_result: DTSResult
- _test_suite_class_prefix: str
- _test_suite_module_prefix: str
- _func_test_case_regex: str
- _perf_test_case_regex: str
def __init__(self):
"""Initialize the instance with configuration, logger, result and string constants."""
@@ -92,10 +75,6 @@ def __init__(self):
os.makedirs(SETTINGS.output_dir)
self._logger.add_dts_root_logger_handlers(SETTINGS.verbose, SETTINGS.output_dir)
self._result = DTSResult(SETTINGS.output_dir, self._logger)
- self._test_suite_class_prefix = "Test"
- self._test_suite_module_prefix = "tests.TestSuite_"
- self._func_test_case_regex = r"test_(?!perf_)"
- self._perf_test_case_regex = r"test_perf_"
def run(self) -> None:
"""Run all test runs from the test run configuration.
@@ -131,45 +110,28 @@ def run(self) -> None:
the :option:`--test-suite` command line argument or
the :envvar:`DTS_TESTCASES` environment variable.
"""
- sut_nodes: dict[str, SutNode] = {}
- tg_nodes: dict[str, TGNode] = {}
+ nodes: list[Node] = []
try:
# check the python version of the server that runs dts
self._check_dts_python_version()
self._result.update_setup(Result.PASS)
+ for node_config in self._configuration.nodes:
+ node: Node
+
+ match node_config:
+ case SutNodeConfiguration():
+ node = SutNode(node_config)
+ case TGNodeConfiguration():
+ node = TGNode(node_config)
+
+ nodes.append(node)
+
# for all test run sections
- for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
- test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
- self._logger.set_stage(DtsStage.test_run_setup)
- self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
- self._init_random_seed(test_run_config)
+ for test_run_config in self._configuration.test_runs:
test_run_result = self._result.add_test_run(test_run_config)
- # we don't want to modify the original config, so create a copy
- test_run_test_suites = test_run_config.test_suites
- if not test_run_config.skip_smoke_tests:
- test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
- try:
- test_suites_with_cases = self._get_test_suites_with_cases(
- test_run_test_suites, test_run_config.func, test_run_config.perf
- )
- test_run_result.test_suites_with_cases = test_suites_with_cases
- except Exception as e:
- self._logger.exception(
- f"Invalid test suite configuration found: " f"{test_run_test_suites}."
- )
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- self._connect_nodes_and_run_test_run(
- sut_nodes,
- tg_nodes,
- sut_node_config,
- tg_node_config,
- test_run_config,
- test_run_result,
- test_suites_with_cases,
- )
+ test_run = TestRun(test_run_config, nodes, test_run_result)
+ test_run.spin()
except Exception as e:
self._logger.exception("An unexpected error has occurred.")
@@ -178,8 +140,8 @@ def run(self) -> None:
finally:
try:
- self._logger.set_stage(DtsStage.post_run)
- for node in (sut_nodes | tg_nodes).values():
+ self._logger.set_stage(POST_RUN)
+ for node in nodes:
node.close()
self._result.update_teardown(Result.PASS)
except Exception as e:
@@ -205,412 +167,6 @@ def _check_dts_python_version(self) -> None:
)
self._logger.warning("Please use Python >= 3.10 instead.")
- def _get_test_suites_with_cases(
- self,
- test_suite_configs: list[TestSuiteConfig],
- func: bool,
- perf: bool,
- ) -> list[TestSuiteWithCases]:
- """Get test suites with selected cases.
-
- The test suites with test cases defined in the user configuration are selected
- and the corresponding functions and classes are gathered.
-
- Args:
- test_suite_configs: Test suite configurations.
- func: Whether to include functional test cases in the final list.
- perf: Whether to include performance test cases in the final list.
-
- Returns:
- The test suites, each with test cases.
- """
- test_suites_with_cases = []
-
- for test_suite_config in test_suite_configs:
- test_suite_class = test_suite_config.test_suite_spec.class_obj
- test_cases: list[type[TestCase]] = []
- func_test_cases, perf_test_cases = test_suite_class.filter_test_cases(
- test_suite_config.test_cases_names
- )
- if func:
- test_cases.extend(func_test_cases)
- if perf:
- test_cases.extend(perf_test_cases)
-
- test_suites_with_cases.append(
- TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
- )
- return test_suites_with_cases
-
- def _connect_nodes_and_run_test_run(
- self,
- sut_nodes: dict[str, SutNode],
- tg_nodes: dict[str, TGNode],
- sut_node_config: SutNodeConfiguration,
- tg_node_config: TGNodeConfiguration,
- test_run_config: TestRunConfiguration,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Connect nodes, then continue to run the given test run.
-
- Connect the :class:`SutNode` and the :class:`TGNode` of this `test_run_config`.
- If either has already been connected, it's going to be in either `sut_nodes` or `tg_nodes`,
- respectively.
- If not, connect and add the node to the respective `sut_nodes` or `tg_nodes` :class:`dict`.
-
- Args:
- sut_nodes: A dictionary storing connected/to be connected SUT nodes.
- tg_nodes: A dictionary storing connected/to be connected TG nodes.
- sut_node_config: The test run's SUT node configuration.
- tg_node_config: The test run's TG node configuration.
- test_run_config: A test run configuration.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
- """
- sut_node = sut_nodes.get(sut_node_config.name)
- tg_node = tg_nodes.get(tg_node_config.name)
-
- try:
- if not sut_node:
- sut_node = SutNode(sut_node_config)
- sut_nodes[sut_node.name] = sut_node
- if not tg_node:
- tg_node = TGNode(tg_node_config)
- tg_nodes[tg_node.name] = tg_node
- except Exception as e:
- failed_node = test_run_config.system_under_test_node
- if sut_node:
- failed_node = test_run_config.traffic_generator_node
- self._logger.exception(f"The Creation of node {failed_node} failed.")
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- self._run_test_run(
- sut_node,
- tg_node,
- test_run_config,
- test_run_result,
- test_suites_with_cases,
- )
-
- def _run_test_run(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- test_run_config: TestRunConfiguration,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Run the given test run.
-
- This involves running the test run setup as well as running all test suites
- in the given test run. After that, the test run teardown is run.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- test_run_config: A test run configuration.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
-
- Raises:
- ConfigurationError: If the DPDK sources or build is not set up from config or settings.
- """
- self._logger.info(f"Running test run with SUT '{test_run_config.system_under_test_node}'.")
- test_run_result.ports = sut_node.ports
- test_run_result.sut_info = sut_node.node_info
- try:
- dpdk_build_config = test_run_config.dpdk_config
- sut_node.set_up_test_run(test_run_config, dpdk_build_config)
- test_run_result.dpdk_build_info = sut_node.get_dpdk_build_info()
- tg_node.set_up_test_run(test_run_config, dpdk_build_config)
- test_run_result.update_setup(Result.PASS)
- except Exception as e:
- self._logger.exception("Test run setup failed.")
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- topology = Topology(
- PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
- for link in test_run_config.port_topology
- )
- self._run_test_suites(
- sut_node, tg_node, topology, test_run_result, test_suites_with_cases
- )
-
- finally:
- try:
- self._logger.set_stage(DtsStage.test_run_teardown)
- sut_node.tear_down_test_run()
- tg_node.tear_down_test_run()
- test_run_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception("Test run teardown failed.")
- test_run_result.update_teardown(Result.FAIL, e)
-
- def _get_supported_capabilities(
- self,
- sut_node: SutNode,
- topology_config: Topology,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> set[Capability]:
- capabilities_to_check = set()
- for test_suite_with_cases in test_suites_with_cases:
- capabilities_to_check.update(test_suite_with_cases.required_capabilities)
-
- self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
-
- return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
-
- def _run_test_suites(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Run `test_suites_with_cases` with the current test run.
-
- The method assumes the DPDK we're testing has already been built on the SUT node.
-
- Before running any suites, the method determines whether they should be skipped
- by inspecting any required capabilities the test suite needs and comparing those
- to capabilities supported by the tested environment. If all capabilities are supported,
- the suite is run. If all test cases in a test suite would be skipped, the whole test suite
- is skipped (the setup and teardown is not run).
-
- If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites
- in the current test run won't be executed.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- topology: The test run's port topology.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
- """
- end_dpdk_build = False
- supported_capabilities = self._get_supported_capabilities(
- sut_node, topology, test_suites_with_cases
- )
- for test_suite_with_cases in test_suites_with_cases:
- test_suite_with_cases.mark_skip_unsupported(supported_capabilities)
- test_suite_result = test_run_result.add_test_suite(test_suite_with_cases)
- try:
- if not test_suite_with_cases.skip:
- self._run_test_suite(
- sut_node,
- tg_node,
- topology,
- test_suite_result,
- test_suite_with_cases,
- )
- else:
- self._logger.info(
- f"Test suite execution SKIPPED: "
- f"'{test_suite_with_cases.test_suite_class.__name__}'. Reason: "
- f"{test_suite_with_cases.test_suite_class.skip_reason}"
- )
- test_suite_result.update_setup(Result.SKIP)
- except BlockingTestSuiteError as e:
- self._logger.exception(
- f"An error occurred within {test_suite_with_cases.test_suite_class.__name__}. "
- "Skipping the rest of the test suites in this test run."
- )
- self._result.add_error(e)
- end_dpdk_build = True
- # if a blocking test failed and we need to bail out of suite executions
- if end_dpdk_build:
- break
-
- def _run_test_suite(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- test_suite_result: TestSuiteResult,
- test_suite_with_cases: TestSuiteWithCases,
- ) -> None:
- """Set up, execute and tear down `test_suite_with_cases`.
-
- The method assumes the DPDK we're testing has already been built on the SUT node.
-
- Test suite execution consists of running the discovered test cases.
- A test case run consists of setup, execution and teardown of said test case.
-
- Record the setup and the teardown and handle failures.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- topology: The port topology of the nodes.
- test_suite_result: The test suite level result object associated
- with the current test suite.
- test_suite_with_cases: The test suite with test cases to run.
-
- Raises:
- BlockingTestSuiteError: If a blocking test suite fails.
- """
- test_suite_name = test_suite_with_cases.test_suite_class.__name__
- self._logger.set_stage(
- DtsStage.test_suite_setup, Path(SETTINGS.output_dir, test_suite_name)
- )
- test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node, topology)
- try:
- self._logger.info(f"Starting test suite setup: {test_suite_name}")
- test_suite.set_up_suite()
- test_suite_result.update_setup(Result.PASS)
- self._logger.info(f"Test suite setup successful: {test_suite_name}")
- except Exception as e:
- self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- test_suite_result.update_setup(Result.ERROR, e)
-
- else:
- self._execute_test_suite(
- test_suite,
- test_suite_with_cases.test_cases,
- test_suite_result,
- )
- finally:
- try:
- self._logger.set_stage(DtsStage.test_suite_teardown)
- test_suite.tear_down_suite()
- sut_node.kill_cleanup_dpdk_apps()
- test_suite_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
- self._logger.warning(
- f"Test suite '{test_suite_name}' teardown failed, "
- "the next test suite may be affected."
- )
- test_suite_result.update_setup(Result.ERROR, e)
- if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking:
- raise BlockingTestSuiteError(test_suite_name)
-
- def _execute_test_suite(
- self,
- test_suite: TestSuite,
- test_cases: Iterable[type[TestCase]],
- test_suite_result: TestSuiteResult,
- ) -> None:
- """Execute all `test_cases` in `test_suite`.
-
- If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment
- variable is set, in case of a test case failure, the test case will be executed again
- until it passes or it fails that many times in addition of the first failure.
-
- Args:
- test_suite: The test suite object.
- test_cases: The list of test case functions.
- test_suite_result: The test suite level result object associated
- with the current test suite.
- """
- self._logger.set_stage(DtsStage.test_suite)
- for test_case in test_cases:
- test_case_name = test_case.__name__
- test_case_result = test_suite_result.add_test_case(test_case_name)
- all_attempts = SETTINGS.re_run + 1
- attempt_nr = 1
- if not test_case.skip:
- self._run_test_case(test_suite, test_case, test_case_result)
- while not test_case_result and attempt_nr < all_attempts:
- attempt_nr += 1
- self._logger.info(
- f"Re-running FAILED test case '{test_case_name}'. "
- f"Attempt number {attempt_nr} out of {all_attempts}."
- )
- self._run_test_case(test_suite, test_case, test_case_result)
- else:
- self._logger.info(
- f"Test case execution SKIPPED: {test_case_name}. Reason: "
- f"{test_case.skip_reason}"
- )
- test_case_result.update_setup(Result.SKIP)
-
- def _run_test_case(
- self,
- test_suite: TestSuite,
- test_case: type[TestCase],
- test_case_result: TestCaseResult,
- ) -> None:
- """Setup, execute and teardown `test_case_method` from `test_suite`.
-
- Record the result of the setup and the teardown and handle failures.
-
- Args:
- test_suite: The test suite object.
- test_case: The test case function.
- test_case_result: The test case level result object associated
- with the current test case.
- """
- test_case_name = test_case.__name__
-
- try:
- # run set_up function for each case
- test_suite.set_up_test_case()
- test_case_result.update_setup(Result.PASS)
- except SSHTimeoutError as e:
- self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- test_case_result.update_setup(Result.FAIL, e)
- except Exception as e:
- self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- test_case_result.update_setup(Result.ERROR, e)
-
- else:
- # run test case if setup was successful
- self._execute_test_case(test_suite, test_case, test_case_result)
-
- finally:
- try:
- test_suite.tear_down_test_case()
- test_case_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
- self._logger.warning(
- f"Test case '{test_case_name}' teardown failed, "
- f"the next test case may be affected."
- )
- test_case_result.update_teardown(Result.ERROR, e)
- test_case_result.update(Result.ERROR)
-
- def _execute_test_case(
- self,
- test_suite: TestSuite,
- test_case: type[TestCase],
- test_case_result: TestCaseResult,
- ) -> None:
- """Execute `test_case_method` from `test_suite`, record the result and handle failures.
-
- Args:
- test_suite: The test suite object.
- test_case: The test case function.
- test_case_result: The test case level result object associated
- with the current test case.
-
- Raises:
- KeyboardInterrupt: If DTS has been interrupted by the user.
- """
- test_case_name = test_case.__name__
- try:
- self._logger.info(f"Starting test case execution: {test_case_name}")
- # Explicit method binding is required, otherwise mypy complains
- MethodType(test_case, test_suite)()
- test_case_result.update(Result.PASS)
- self._logger.info(f"Test case execution PASSED: {test_case_name}")
-
- except TestCaseVerifyError as e:
- self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- test_case_result.update(Result.FAIL, e)
- except Exception as e:
- self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- test_case_result.update(Result.ERROR, e)
- except KeyboardInterrupt:
- self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
- test_case_result.update(Result.SKIP)
- raise KeyboardInterrupt("Stop DTS")
-
def _exit_dts(self) -> None:
"""Process all errors and exit with the proper exit code."""
self._result.process()
@@ -619,9 +175,3 @@ def _exit_dts(self) -> None:
self._logger.info("DTS execution has ended.")
sys.exit(self._result.get_return_code())
-
- def _init_random_seed(self, conf: TestRunConfiguration) -> None:
- """Initialize the random seed to use for the test run."""
- seed = conf.random_seed or random.randrange(0xFFFF_FFFF)
- self._logger.info(f"Initializing test run with random seed {seed}.")
- random.seed(seed)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 1acb526b64..a59bac71bb 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -25,98 +25,18 @@
import json
from collections.abc import MutableSequence
-from dataclasses import asdict, dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Any, Callable, TypedDict, cast
+from typing import Any, Callable, TypedDict
-from framework.config.node import PortConfig
-from framework.testbed_model.capability import Capability
-
-from .config.test_run import TestRunConfiguration, TestSuiteConfig
+from .config.test_run import TestRunConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLogger
-from .test_suite import TestCase, TestSuite
from .testbed_model.os_session import OSSessionInfo
from .testbed_model.port import Port
from .testbed_model.sut_node import DPDKBuildInfo
-@dataclass(slots=True, frozen=True)
-class TestSuiteWithCases:
- """A test suite class with test case methods.
-
- An auxiliary class holding a test case class with test case methods. The intended use of this
- class is to hold a subset of test cases (which could be all test cases) because we don't have
- all the data to instantiate the class at the point of inspection. The knowledge of this subset
- is needed in case an error occurs before the class is instantiated and we need to record
- which test cases were blocked by the error.
-
- Attributes:
- test_suite_class: The test suite class.
- test_cases: The test case methods.
- required_capabilities: The combined required capabilities of both the test suite
- and the subset of test cases.
- """
-
- test_suite_class: type[TestSuite]
- test_cases: list[type[TestCase]]
- required_capabilities: set[Capability] = field(default_factory=set, init=False)
-
- def __post_init__(self):
- """Gather the required capabilities of the test suite and all test cases."""
- for test_object in [self.test_suite_class] + self.test_cases:
- self.required_capabilities.update(test_object.required_capabilities)
-
- def create_config(self) -> TestSuiteConfig:
- """Generate a :class:`TestSuiteConfig` from the stored test suite with test cases.
-
- Returns:
- The :class:`TestSuiteConfig` representation.
- """
- return TestSuiteConfig(
- test_suite=self.test_suite_class.__name__,
- test_cases=[test_case.__name__ for test_case in self.test_cases],
- )
-
- def mark_skip_unsupported(self, supported_capabilities: set[Capability]) -> None:
- """Mark the test suite and test cases to be skipped.
-
- The mark is applied if object to be skipped requires any capabilities and at least one of
- them is not among `supported_capabilities`.
-
- Args:
- supported_capabilities: The supported capabilities.
- """
- for test_object in [self.test_suite_class, *self.test_cases]:
- capabilities_not_supported = test_object.required_capabilities - supported_capabilities
- if capabilities_not_supported:
- test_object.skip = True
- capability_str = (
- "capability" if len(capabilities_not_supported) == 1 else "capabilities"
- )
- test_object.skip_reason = (
- f"Required {capability_str} '{capabilities_not_supported}' not found."
- )
- if not self.test_suite_class.skip:
- if all(test_case.skip for test_case in self.test_cases):
- self.test_suite_class.skip = True
-
- self.test_suite_class.skip_reason = (
- "All test cases are marked to be skipped with reasons: "
- f"{' '.join(test_case.skip_reason for test_case in self.test_cases)}"
- )
-
- @property
- def skip(self) -> bool:
- """Skip the test suite if all test cases or the suite itself are to be skipped.
-
- Returns:
- :data:`True` if the test suite should be skipped, :data:`False` otherwise.
- """
- return all(test_case.skip for test_case in self.test_cases) or self.test_suite_class.skip
-
-
class Result(Enum):
"""The possible states that a setup, a teardown or a test case may end up in."""
@@ -463,7 +383,6 @@ class TestRunResult(BaseResult):
"""
_config: TestRunConfiguration
- _test_suites_with_cases: list[TestSuiteWithCases]
_ports: list[Port]
_sut_info: OSSessionInfo | None
_dpdk_build_info: DPDKBuildInfo | None
@@ -476,49 +395,23 @@ def __init__(self, test_run_config: TestRunConfiguration):
"""
super().__init__()
self._config = test_run_config
- self._test_suites_with_cases = []
self._ports = []
self._sut_info = None
self._dpdk_build_info = None
- def add_test_suite(
- self,
- test_suite_with_cases: TestSuiteWithCases,
- ) -> "TestSuiteResult":
+ def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult":
"""Add and return the child result (test suite).
Args:
- test_suite_with_cases: The test suite with test cases.
+ test_suite_name: The test suite name.
Returns:
The test suite's result.
"""
- result = TestSuiteResult(test_suite_with_cases)
+ result = TestSuiteResult(test_suite_name)
self.child_results.append(result)
return result
- @property
- def test_suites_with_cases(self) -> list[TestSuiteWithCases]:
- """The test suites with test cases to be executed in this test run.
-
- The test suites can only be assigned once.
-
- Returns:
- The list of test suites with test cases. If an error occurs between
- the initialization of :class:`TestRunResult` and assigning test cases to the instance,
- return an empty list, representing that we don't know what to execute.
- """
- return self._test_suites_with_cases
-
- @test_suites_with_cases.setter
- def test_suites_with_cases(self, test_suites_with_cases: list[TestSuiteWithCases]) -> None:
- if self._test_suites_with_cases:
- raise ValueError(
- "Attempted to assign test suites to a test run result "
- "which already has test suites."
- )
- self._test_suites_with_cases = test_suites_with_cases
-
@property
def ports(self) -> list[Port]:
"""Get the list of ports associated with this test run."""
@@ -602,24 +495,14 @@ def to_dict(self) -> TestRunResultDict:
compiler_version = self.dpdk_build_info.compiler_version
dpdk_version = self.dpdk_build_info.dpdk_version
- ports = [asdict(port) for port in self.ports]
- for port in ports:
- port["config"] = cast(PortConfig, port["config"]).model_dump()
-
return {
"compiler_version": compiler_version,
"dpdk_version": dpdk_version,
- "ports": ports,
+ "ports": [port.to_dict() for port in self.ports],
"test_suites": [child.to_dict() for child in self.child_results],
"summary": results | self.generate_pass_rate_dict(results),
}
- def _mark_results(self, result) -> None:
- """Mark the test suite results as `result`."""
- for test_suite_with_cases in self._test_suites_with_cases:
- child_result = self.add_test_suite(test_suite_with_cases)
- child_result.update_setup(result)
-
class TestSuiteResult(BaseResult):
"""The test suite specific result.
@@ -631,18 +514,16 @@ class TestSuiteResult(BaseResult):
"""
test_suite_name: str
- _test_suite_with_cases: TestSuiteWithCases
_child_configs: list[str]
- def __init__(self, test_suite_with_cases: TestSuiteWithCases):
+ def __init__(self, test_suite_name: str):
"""Extend the constructor with test suite's config.
Args:
- test_suite_with_cases: The test suite with test cases.
+ test_suite_name: The test suite name.
"""
super().__init__()
- self.test_suite_name = test_suite_with_cases.test_suite_class.__name__
- self._test_suite_with_cases = test_suite_with_cases
+ self.test_suite_name = test_suite_name
def add_test_case(self, test_case_name: str) -> "TestCaseResult":
"""Add and return the child result (test case).
@@ -667,12 +548,6 @@ def to_dict(self) -> TestSuiteResultDict:
"test_cases": [child.to_dict() for child in self.child_results],
}
- def _mark_results(self, result) -> None:
- """Mark the test case results as `result`."""
- for test_case_method in self._test_suite_with_cases.test_cases:
- child_result = self.add_test_case(test_case_method.__name__)
- child_result.update_setup(result)
-
class TestCaseResult(BaseResult, FixtureResult):
r"""The test case specific result.
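[Editor's note: after this refactor the result tree is built from plain names (`add_test_suite(name)` / `add_test_case(name)`) instead of `TestSuiteWithCases` objects. A toy sketch of that parent/child aggregation pattern, with a deliberately simplified stand-in class (the real `BaseResult` also tracks setup/teardown fixtures and serializes to dicts):]

```python
from enum import Enum, auto


class Result(Enum):
    PASS = auto()
    FAIL = auto()


class BaseResult:
    """Simplified stand-in for the framework's result hierarchy."""

    def __init__(self, name: str = "") -> None:
        self.name = name
        self.child_results: list["BaseResult"] = []
        self.result: Result | None = None

    def add_child(self, name: str) -> "BaseResult":
        """Add and return a child result, keyed by name only."""
        child = BaseResult(name)
        self.child_results.append(child)
        return child

    def update(self, result: Result) -> None:
        self.result = result

    def get_errors(self) -> list[str]:
        """Collect the names of failed results, recursively."""
        errors = [self.name] if self.result is Result.FAIL else []
        for child in self.child_results:
            errors.extend(child.get_errors())
        return errors


# A run holds suites, which hold cases -- mirroring
# TestRunResult.add_test_suite() / TestSuiteResult.add_test_case().
run = BaseResult("run")
suite = run.add_child("hello_world")
case = suite.add_child("test_hello_world_single_core")
case.update(Result.FAIL)
assert run.get_errors() == ["test_hello_world_single_core"]
```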
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
new file mode 100644
index 0000000000..add0a62eb9
--- /dev/null
+++ b/dts/framework/test_run.py
@@ -0,0 +1,443 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+r"""Test run module.
+
+The test run is implemented as a finite state machine which maintains a globally accessible
+:class:`~.context.Context` and holds all the execution stages and state as defined in
+:class:`~.status.State`.
+
+The state machine is implemented in :meth:`~TestRun._runner` which can be run by calling
+:meth:`~TestRun.spin`.
+
+The following graph represents all the states and steps of the state machine. Each node represents a
+state labelled with its initials, e.g. ``TR.B`` stands for :attr:`~.status.Stage.TEST_RUN`
+and :attr:`~.status.InternalState.BEGIN`. States represented by a double green circle are looping
+states. These states are only exited through:
+
+ * **next** which progresses to the next test suite/case.
+ * **end** which indicates that no more test suites/cases are available and
+ the loop is terminated.
+
+Red dashed links represent the path taken when an exception is raised in the origin state. If a
+state has no such link, execution progresses as usual.
+When :class:`~.exception.InternalError` is raised in any state, the state machine execution is
+immediately terminated.
+Orange dashed links represent exceptional conditions. Test suites and cases can be ``blocked`` or
+``skipped`` in the following conditions:
+
+ * If a *blocking* test suite fails, the ``blocked`` flag is raised.
+ * If the user sends a ``SIGINT`` signal, the ``blocked`` flag is raised.
+ * If a test suite and/or test case requires a capability unsupported by the test run, then it is
+   ``skipped`` and the state restarts from the beginning.
+
+Finally, test cases **retry** when they fail and DTS is configured to re-run.
+
+.. digraph:: test_run_fsm
+
+ bgcolor=transparent
+ nodesep=0.5
+ ranksep=0.3
+
+ node [fontname="sans-serif" fixedsize="true" width="0.7"]
+ edge [fontname="monospace" color="gray30" fontsize=12]
+ node [shape="circle"] "TR.S" "TR.T" "TS.S" "TS.T" "TC.S" "TC.T"
+
+ node [shape="doublecircle" style="bold" color="darkgreen"] "TR.R" "TS.R" "TC.R"
+
+ node [shape="box" style="filled" color="gray90"] "TR.B" "TR.E"
+ node [style="solid"] "TS.E" "TC.E"
+
+ node [shape="plaintext" fontname="monospace" fontsize=12 fixedsize="false"] "exit"
+
+ "TR.B" -> "TR.S" -> "TR.R"
+ "TR.R":e -> "TR.T":w [taillabel="end" labeldistance=1.5 labelangle=45]
+ "TR.T" -> "TR.E"
+ "TR.E" -> "exit" [style="solid" color="gray30"]
+
+ "TR.R" -> "TS.S" [headlabel="next" labeldistance=3 labelangle=320]
+ "TS.S" -> "TS.R"
+ "TS.R" -> "TS.T" [label="end"]
+ "TS.T" -> "TS.E" -> "TR.R"
+
+ "TS.R" -> "TC.S" [headlabel="next" labeldistance=3 labelangle=320]
+ "TC.S" -> "TC.R" -> "TC.T" -> "TC.E" -> "TS.R":se
+
+
+ edge [fontcolor="orange", color="orange" style="dashed"]
+ "TR.R":sw -> "TS.R":nw [taillabel="next\n(blocked)" labeldistance=13]
+ "TS.R":ne -> "TR.R" [taillabel="end\n(blocked)" labeldistance=7.5 labelangle=345]
+ "TR.R":w -> "TR.R":nw [headlabel="next\n(skipped)" labeldistance=4]
+ "TS.R":e -> "TS.R":e [taillabel="next\n(blocked)\n(skipped)" labelangle=325 labeldistance=7.5]
+ "TC.R":e -> "TC.R":e [taillabel="retry" labelangle=5 labeldistance=2.5]
+
+ edge [fontcolor="crimson" color="crimson"]
+ "TR.S" -> "TR.T"
+ "TS.S":w -> "TS.T":n
+ "TC.S" -> "TC.T"
+
+ node [fontcolor="crimson" color="crimson"]
+ "InternalError" -> "exit":w
+"""
+
+import random
+from collections import deque
+from collections.abc import Generator, Iterable
+from functools import cached_property
+from pathlib import Path
+from types import MethodType
+from typing import cast
+
+from framework.config.test_run import TestRunConfiguration
+from framework.context import Context, init_ctx
+from framework.exception import (
+ InternalError,
+ SkippedTestException,
+ TestCaseVerifyError,
+)
+from framework.logger import DTSLogger, get_dts_logger
+from framework.settings import SETTINGS
+from framework.status import InternalState, Stage, State
+from framework.test_result import BaseResult, Result, TestCaseResult, TestRunResult, TestSuiteResult
+from framework.test_suite import TestCase, TestSuite
+from framework.testbed_model.capability import (
+ Capability,
+ get_supported_capabilities,
+ test_if_supported,
+)
+from framework.testbed_model.node import Node
+from framework.testbed_model.sut_node import SutNode
+from framework.testbed_model.tg_node import TGNode
+from framework.testbed_model.topology import PortLink, Topology
+
+TestScenario = tuple[type[TestSuite], deque[type[TestCase]]]
+
+
+class TestRun:
+ """Spins a test run."""
+
+ config: TestRunConfiguration
+ logger: DTSLogger
+
+ ctx: Context
+ result: TestRunResult
+ selected_tests: list[TestScenario]
+
+ _state: State
+ _remaining_tests: deque[TestScenario]
+ _max_retries: int
+
+ def __init__(self, config: TestRunConfiguration, nodes: Iterable[Node], result: TestRunResult):
+ """Test run constructor."""
+ self.config = config
+ self.logger = get_dts_logger()
+
+ sut_node = next(n for n in nodes if n.name == config.system_under_test_node)
+ sut_node = cast(SutNode, sut_node) # Config validation must render this valid.
+
+ tg_node = next(n for n in nodes if n.name == config.traffic_generator_node)
+ tg_node = cast(TGNode, tg_node) # Config validation must render this valid.
+
+ topology = Topology.from_port_links(
+ PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
+ for link in self.config.port_topology
+ )
+
+ self.ctx = Context(sut_node, tg_node, topology)
+ init_ctx(self.ctx)
+
+ self.result = result
+ self.selected_tests = list(self.config.filter_tests())
+
+ self._state = State(Stage.TEST_RUN, InternalState.BEGIN)
+ self._max_retries = SETTINGS.re_run
+
+ @cached_property
+ def required_capabilities(self) -> set[Capability]:
+ """The set of capabilities required by this test run in its entirety."""
+ caps = set()
+
+ for test_suite, test_cases in self.selected_tests:
+ caps.update(test_suite.required_capabilities)
+ for test_case in test_cases:
+ caps.update(test_case.required_capabilities)
+
+ return caps
+
+ def spin(self):
+ """Spin the internal state machine that executes the test run."""
+ self.logger.info(f"Running test run with SUT '{self.ctx.sut_node.name}'.")
+
+ runner = self._runner()
+ while next_state := next(runner, False):
+ previous_state = self._state
+ stage, internal_state = next_state
+ self._state = State(stage, internal_state)
+ self.logger.debug(f"FSM - moving from '{previous_state}' to '{self._state}'")
+
+ def _runner(self) -> Generator[tuple[Stage, InternalState], None, None]: # noqa: C901
+ """Process the current state.
+
+ Yields:
+ The next state.
+
+ Raises:
+ InternalError: If the test run has entered an illegal state or a critical error has
+ occurred.
+ """
+ running = True
+ blocked = False
+
+ remaining_attempts: int
+ remaining_test_cases: deque[type[TestCase]]
+ test_suite: TestSuite
+ test_suite_result: TestSuiteResult
+ test_case: type[TestCase]
+ test_case_result: TestCaseResult
+
+ while running:
+ state = self._state
+ try:
+ match state:
+ case Stage.TEST_RUN, InternalState.BEGIN:
+ yield state[0], InternalState.SETUP
+
+ case Stage.TEST_RUN, InternalState.SETUP:
+ self.update_logger_stage()
+ self.setup()
+ yield state[0], InternalState.RUN
+
+ case Stage.TEST_RUN, InternalState.RUN:
+ self.update_logger_stage()
+ try:
+ test_suite_class, remaining_test_cases = self._remaining_tests.popleft()
+ test_suite = test_suite_class()
+ test_suite_result = self.result.add_test_suite(test_suite.name)
+
+ if blocked:
+ test_suite_result.update_setup(Result.BLOCK)
+ self.logger.error(f"Test suite '{test_suite.name}' was BLOCKED.")
+ # Continue to let the remaining suites be marked as blocked; no setup needed.
+ yield Stage.TEST_SUITE, InternalState.RUN
+ continue
+
+ test_if_supported(test_suite_class, self.supported_capabilities)
+ self.ctx.local.reset()
+ yield Stage.TEST_SUITE, InternalState.SETUP
+ except IndexError:
+ # No more test suites. We are done here.
+ yield state[0], InternalState.TEARDOWN
+
+ case Stage.TEST_SUITE, InternalState.SETUP:
+ self.update_logger_stage(test_suite.name)
+ test_suite.set_up_suite()
+
+ test_suite_result.update_setup(Result.PASS)
+ yield state[0], InternalState.RUN
+
+ case Stage.TEST_SUITE, InternalState.RUN:
+ if not blocked:
+ self.update_logger_stage(test_suite.name)
+ try:
+ test_case = remaining_test_cases.popleft()
+ test_case_result = test_suite_result.add_test_case(test_case.name)
+
+ if blocked:
+ test_case_result.update_setup(Result.BLOCK)
+ continue
+
+ test_if_supported(test_case, self.supported_capabilities)
+ yield Stage.TEST_CASE, InternalState.SETUP
+ except IndexError:
+ if blocked and test_suite_result.setup_result.result is Result.BLOCK:
+ # Skip teardown if the test case AND suite were blocked.
+ yield state[0], InternalState.END
+ else:
+ # No more test cases. We are done here.
+ yield state[0], InternalState.TEARDOWN
+
+ case Stage.TEST_CASE, InternalState.SETUP:
+ test_suite.set_up_test_case()
+ remaining_attempts = self._max_retries
+
+ test_case_result.update_setup(Result.PASS)
+ yield state[0], InternalState.RUN
+
+ case Stage.TEST_CASE, InternalState.RUN:
+ self.logger.info(f"Running test case '{test_case.name}'.")
+ run_test_case = MethodType(test_case, test_suite)
+ run_test_case()
+
+ test_case_result.update(Result.PASS)
+ self.logger.info(f"Test case '{test_case.name}' execution PASSED.")
+ yield state[0], InternalState.TEARDOWN
+
+ case Stage.TEST_CASE, InternalState.TEARDOWN:
+ test_suite.tear_down_test_case()
+
+ test_case_result.update_teardown(Result.PASS)
+ yield state[0], InternalState.END
+
+ case Stage.TEST_CASE, InternalState.END:
+ yield Stage.TEST_SUITE, InternalState.RUN
+
+ case Stage.TEST_SUITE, InternalState.TEARDOWN:
+ self.update_logger_stage(test_suite.name)
+ test_suite.tear_down_suite()
+ self.ctx.sut_node.kill_cleanup_dpdk_apps()
+
+ test_suite_result.update_teardown(Result.PASS)
+ yield state[0], InternalState.END
+
+ case Stage.TEST_SUITE, InternalState.END:
+ if test_suite_result.get_errors() and test_suite.is_blocking:
+ self.logger.error(
+ f"An error occurred within the blocking test suite '{test_suite.name}'. "
+ "The remaining test suites will be skipped."
+ )
+ blocked = True
+ yield Stage.TEST_RUN, InternalState.RUN
+
+ case Stage.TEST_RUN, InternalState.TEARDOWN:
+ self.update_logger_stage()
+ self.teardown()
+ yield Stage.TEST_RUN, InternalState.END
+
+ case Stage.TEST_RUN, InternalState.END:
+ running = False
+
+ case _:
+ raise InternalError("Illegal state entered. How did I get here?")
+
+ except TestCaseVerifyError as e:
+ self.logger.error(f"Test case '{test_case.name}' execution FAILED: {e}")
+
+ remaining_attempts -= 1
+ if remaining_attempts > 0:
+ self.logger.info(f"Re-attempting. {remaining_attempts} attempts left.")
+ else:
+ test_case_result.update(Result.FAIL, e)
+ yield state[0], InternalState.TEARDOWN
+
+ except SkippedTestException as e:
+ if state[0] is Stage.TEST_RUN:
+ who = "suite"
+ name = test_suite.name
+ result_handler: BaseResult = test_suite_result
+ else:
+ who = "case"
+ name = test_case.name
+ result_handler = test_case_result
+ self.logger.info(f"Test {who} '{name}' execution SKIPPED with reason: {e}")
+ result_handler.update_setup(Result.SKIP)
+
+ except InternalError as e:
+ self.logger.error(
+ "A critical error has occurred. Unrecoverable state reached, shutting down."
+ )
+ # TODO: Handle final test suite result!
+ raise e
+
+ except (KeyboardInterrupt, Exception) as e:
+ match state[0]:
+ case Stage.TEST_RUN:
+ stage_str = "run"
+ case Stage.TEST_SUITE:
+ stage_str = f"suite '{test_suite.name}'"
+ case Stage.TEST_CASE:
+ stage_str = f"case '{test_case.name}'"
+
+ match state[1]:
+ case InternalState.SETUP:
+ state_str = "setup"
+ next_state = InternalState.TEARDOWN
+ case InternalState.RUN:
+ state_str = "execution"
+ next_state = InternalState.TEARDOWN
+ case InternalState.TEARDOWN:
+ state_str = "teardown"
+ next_state = InternalState.END
+
+ if isinstance(e, KeyboardInterrupt):
+ msg = (
+ f"Test {stage_str} {state_str} INTERRUPTED by user! "
+ "Shutting down gracefully."
+ )
+ result = Result.BLOCK
+ ex: Exception | None = None
+ blocked = True
+ else:
+ msg = (
+ "An unexpected error has occurred "
+ f"while running test {stage_str} {state_str}."
+ )
+ result = Result.ERROR
+ ex = e
+
+ match state:
+ case Stage.TEST_RUN, InternalState.SETUP:
+ self.result.update_setup(result, ex)
+ case Stage.TEST_RUN, InternalState.TEARDOWN:
+ self.result.update_teardown(result, ex)
+ case Stage.TEST_SUITE, InternalState.SETUP:
+ test_suite_result.update_setup(result, ex)
+ case Stage.TEST_SUITE, InternalState.TEARDOWN:
+ test_suite_result.update_teardown(result, ex)
+ case Stage.TEST_CASE, InternalState.SETUP:
+ test_case_result.update_setup(result, ex)
+ case Stage.TEST_CASE, InternalState.RUN:
+ test_case_result.update(result, ex)
+ case Stage.TEST_CASE, InternalState.TEARDOWN:
+ test_case_result.update_teardown(result, ex)
+ case _:
+ if ex:
+ raise InternalError(
+ "An error was raised in an uncontrolled state."
+ ) from ex
+
+ self.logger.error(msg)
+
+ if ex:
+ self.logger.exception(ex)
+
+ if state[1] is InternalState.TEARDOWN:
+ self.logger.warning(
+ "The environment may have not been cleaned up correctly. "
+ "The subsequent tests could be affected!"
+ )
+
+ yield state[0], next_state
+
+ def setup(self) -> None:
+ """Set up the test run."""
+ self.logger.info(f"Running on SUT node '{self.ctx.sut_node.name}'.")
+ self.init_random_seed()
+ self._remaining_tests = deque(self.selected_tests)
+
+ self.ctx.sut_node.set_up_test_run(self.config, self.ctx.topology.sut_ports)
+ self.ctx.tg_node.set_up_test_run(self.config, self.ctx.topology.tg_ports)
+
+ self.result.ports = self.ctx.topology.sut_ports + self.ctx.topology.tg_ports
+ self.result.sut_info = self.ctx.sut_node.node_info
+ self.result.dpdk_build_info = self.ctx.sut_node.get_dpdk_build_info()
+
+ self.logger.debug(f"Found capabilities to check: {self.required_capabilities}")
+ self.supported_capabilities = get_supported_capabilities(
+ self.ctx.sut_node, self.ctx.topology, self.required_capabilities
+ )
+
+ def teardown(self) -> None:
+ """Tear down the test run."""
+ self.ctx.sut_node.tear_down_test_run(self.ctx.topology.sut_ports)
+ self.ctx.tg_node.tear_down_test_run(self.ctx.topology.tg_ports)
+
+ def update_logger_stage(self, file_name: str | None = None) -> None:
+ """Update the current stage of the logger."""
+ log_file_path = Path(SETTINGS.output_dir, file_name) if file_name is not None else None
+ self.logger.set_stage(self._state, log_file_path)
+
+ def init_random_seed(self) -> None:
+ """Initialize the random seed to use for the test run."""
+ seed = self.config.random_seed or random.randrange(0xFFFF_FFFF)
+ self.logger.info(f"Initializing with random seed {seed}.")
+ random.seed(seed)
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 7b06ecd715..463fd69cd9 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2024 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""Testbed capabilities.
@@ -53,7 +54,7 @@ def test_scatter_mbuf_2048(self):
from typing_extensions import Self
-from framework.exception import ConfigurationError
+from framework.exception import ConfigurationError, SkippedTestException
from framework.logger import get_dts_logger
from framework.remote_session.testpmd_shell import (
NicCapability,
@@ -221,9 +222,7 @@ def get_supported_capabilities(
)
if cls.capabilities_to_check:
capabilities_to_check_map = cls._get_decorated_capabilities_map()
- with TestPmdShell(
- sut_node, privileged=True, disable_device_start=True
- ) as testpmd_shell:
+ with TestPmdShell() as testpmd_shell:
for (
conditional_capability_fn,
capabilities,
@@ -510,3 +509,20 @@ def get_supported_capabilities(
supported_capabilities.update(callback(sut_node, topology_config))
return supported_capabilities
+
+
+def test_if_supported(test: type[TestProtocol], supported_caps: set[Capability]) -> None:
+ """Test if the given test suite or test case is supported.
+
+ Args:
+ test: The test suite or case.
+ supported_caps: The capabilities that need to be checked against the test.
+
+ Raises:
+ SkippedTestException: If the test hasn't met the requirements.
+ """
+ unsupported_caps = test.required_capabilities - supported_caps
+ if unsupported_caps:
+ capability_str = "capabilities" if len(unsupported_caps) > 1 else "capability"
+ msg = f"Required {capability_str} '{unsupported_caps}' not found."
+ raise SkippedTestException(msg)
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 595836a664..290a3fbd74 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -37,9 +37,11 @@ class TGNode(Node):
must be a way to send traffic without that.
Attributes:
+ config: The traffic generator node configuration.
traffic_generator: The traffic generator running on the node.
"""
+ config: TGNodeConfiguration
traffic_generator: CapturingTrafficGenerator
def __init__(self, node_config: TGNodeConfiguration):
@@ -51,7 +53,6 @@ def __init__(self, node_config: TGNodeConfiguration):
node_config: The TG node's test run configuration.
"""
super().__init__(node_config)
- self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
self._logger.info(f"Created node: {self.name}")
def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
@@ -64,6 +65,7 @@ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable
"""
super().set_up_test_run(test_run_config, ports)
self.main_session.bring_up_link(ports)
+ self.traffic_generator = create_traffic_generator(self, self.config.traffic_generator)
def tear_down_test_run(self, ports: Iterable[Port]) -> None:
"""Extend the test run teardown with the teardown of the traffic generator.
@@ -72,6 +74,7 @@ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
ports: The ports to tear down for the test run.
"""
super().tear_down_test_run(ports)
+ self.traffic_generator.close()
def send_packets_and_capture(
self,
@@ -119,5 +122,4 @@ def close(self) -> None:
This extends the superclass method with TG cleanup.
"""
- self.traffic_generator.close()
super().close()
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [RFC PATCH 0/7] dts: revamp framework
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (6 preceding siblings ...)
2025-02-03 15:16 ` [RFC PATCH 7/7] dts: revamp runtime internals Luca Vizzarro
@ 2025-02-04 21:08 ` Dean Marx
2025-02-12 16:52 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
8 siblings, 1 reply; 33+ messages in thread
From: Dean Marx @ 2025-02-04 21:08 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
Hi Luca,
I saw in the meeting minutes that the main purpose of this series is
to implement the separation of concerns principle better in DTS. Just
wondering, which parts of the current framework did you think needed to
be separated, and why? I'm taking an OOP class this semester and we
just started talking in depth about separation of concerns, so if you
wouldn't mind I'd be interested in your thought process. I'm working on
a review for the series as well; it should be done relatively soon.
Thanks,
Dean
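As a rough illustration of the principle under discussion (using hypothetical names, not the actual DTS classes), the idea is to move from one object that both owns configuration and drives execution to separate pieces, each with a single responsibility:

```python
# Hypothetical sketch of separating configuration from execution.
# These names are illustrative only; they are not the real DTS classes.
from dataclasses import dataclass


@dataclass(frozen=True)
class RunConfig:
    """Knows *what* to run; holds no execution state."""

    test_suites: tuple[str, ...]


class Runner:
    """Knows *how* to run; receives a ready-made config instead of parsing it."""

    def __init__(self, config: RunConfig) -> None:
        self.config = config

    def run(self) -> list[str]:
        # Execution logic only; validation and parsing live with the config.
        return [f"ran {suite}" for suite in self.config.test_suites]


results = Runner(RunConfig(test_suites=("smoke_tests",))).run()
```

With that split, the config layer can be validated and tested without ever starting a run, which is roughly what the series does by moving test-suite processing into the configuration classes.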
* Re: [RFC PATCH 1/7] dts: add port topology configuration
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
@ 2025-02-07 18:25 ` Nicholas Pratte
2025-02-12 16:47 ` Luca Vizzarro
2025-02-11 18:00 ` Dean Marx
1 sibling, 1 reply; 33+ messages in thread
From: Nicholas Pratte @ 2025-02-07 18:25 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
Hi Luca, nice work! See comments below.
<snip>
>
>
> +class LinkPortIdentifier(NamedTuple):
> + """A tuple linking test run node type to port name."""
> +
> + node_type: Literal["sut", "tg"]
> + port_name: str
> +
> +
> +class PortLinkConfig(FrozenModel):
> + """A link between the ports of the nodes.
> +
> + Can be represented as a string with the following notation:
> +
> + .. code::
> +
> + sut.PORT_0 <-> tg.PORT_0 # explicit node nomination
> + PORT_0 <-> PORT_0 # implicit node nomination. Left is SUT, right is TG.
There are some additional comments below that relate to this piece of
documentation. If a user is looking at this, it might make more sense
for them to read something like "sut.{string name}" to better indicate
the flexibility they have in naming ports. Otherwise, someone could get
confused when reading the exception messages as they are thrown.
> + """
> +
> + #: The port at the left side of the link.
> + left: LinkPortIdentifier
> + #: The port at the right side of the link.
> + right: LinkPortIdentifier
> +
> + @cached_property
> + def sut_port(self) -> str:
> + """Port name of the SUT node.
> +
> + Raises:
> + InternalError: If a misconfiguration has been allowed to happen.
> + """
> + if self.left.node_type == "sut":
> + return self.left.port_name
> + if self.right.node_type == "sut":
> + return self.right.port_name
> +
> + raise InternalError("Unreachable state reached.")
> +
> + @cached_property
> + def tg_port(self) -> str:
> + """Port name of the TG node.
> +
> + Raises:
> + InternalError: If a misconfiguration has been allowed to happen.
> + """
> + if self.left.node_type == "tg":
> + return self.left.port_name
> + if self.right.node_type == "tg":
> + return self.right.port_name
> +
> + raise InternalError("Unreachable state reached.")
> +
> + @model_validator(mode="before")
> + @classmethod
> + def convert_from_string(cls, data: Any) -> Any:
> + """Convert the string representation of the model into a valid mapping."""
> + if isinstance(data, str):
> + m = re.match(REGEX_FOR_PORT_LINK, data, re.I)
> + assert m is not None, (
> + "The provided link is malformed. Please use the following "
> + "notation: sut.PORT_0 <-> tg.PORT_0"
Per the comment above, this is the exception message I am referring
to. If the example test_run.conf is going to have the default port
names be "Port 0" and "Port 1", then for consistency it might make
more sense to remove the underscores from the exception message. I
could see myself misreading this small inconsistency while quickly
fixing my configuration files; it's happened many times before :D
> + )
> +
> + left = (m.group(1) or "sut").lower(), m.group(2)
> + right = (m.group(3) or "tg").lower(), m.group(4)
> +
> + return {"left": left, "right": right}
> + return data
> +
> + @model_validator(mode="after")
> + def verify_distinct_nodes(self) -> Self:
> + """Verify that each side of the link has distinct nodes."""
> + assert (
> + self.left.node_type != self.right.node_type
> + ), "Linking ports of the same node is unsupported."
> + return self
> +
> +
> class TestRunConfiguration(FrozenModel):
> """The configuration of a test run.
>
> @@ -298,6 +377,8 @@ class TestRunConfiguration(FrozenModel):
> vdevs: list[str] = Field(default_factory=list)
> #: The seed to use for pseudo-random generation.
> random_seed: int | None = None
> + #: The port links between the specified nodes to use.
> + port_topology: list[PortLinkConfig] = Field(max_length=2)
>
> fields_from_settings = model_validator(mode="before")(
> load_fields_from_settings("test_suites", "random_seed")
> diff --git a/dts/framework/runner.py b/dts/framework/runner.py
> index 9f9789cf49..60a885d8e6 100644
> --- a/dts/framework/runner.py
> +++ b/dts/framework/runner.py
> @@ -54,7 +54,7 @@
> TestSuiteWithCases,
> )
> from .test_suite import TestCase, TestSuite
> -from .testbed_model.topology import Topology
> +from .testbed_model.topology import PortLink, Topology
>
>
> class DTSRunner:
> @@ -331,7 +331,13 @@ def _run_test_run(
> test_run_result.update_setup(Result.FAIL, e)
>
> else:
> - self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
> + topology = Topology(
> + PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
> + for link in test_run_config.port_topology
> + )
> + self._run_test_suites(
> + sut_node, tg_node, topology, test_run_result, test_suites_with_cases
> + )
>
> finally:
> try:
> @@ -361,6 +367,7 @@ def _run_test_suites(
> self,
> sut_node: SutNode,
> tg_node: TGNode,
> + topology: Topology,
> test_run_result: TestRunResult,
> test_suites_with_cases: Iterable[TestSuiteWithCases],
> ) -> None:
> @@ -380,11 +387,11 @@ def _run_test_suites(
> Args:
> sut_node: The test run's SUT node.
> tg_node: The test run's TG node.
> + topology: The test run's port topology.
> test_run_result: The test run's result.
> test_suites_with_cases: The test suites with test cases to run.
> """
> end_dpdk_build = False
> - topology = Topology(sut_node.ports, tg_node.ports)
> supported_capabilities = self._get_supported_capabilities(
> sut_node, topology, test_suites_with_cases
> )
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index bffbc52505..1acb526b64 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -28,8 +28,9 @@
> from dataclasses import asdict, dataclass, field
> from enum import Enum, auto
> from pathlib import Path
> -from typing import Any, Callable, TypedDict
> +from typing import Any, Callable, TypedDict, cast
>
> +from framework.config.node import PortConfig
> from framework.testbed_model.capability import Capability
>
> from .config.test_run import TestRunConfiguration, TestSuiteConfig
> @@ -601,10 +602,14 @@ def to_dict(self) -> TestRunResultDict:
> compiler_version = self.dpdk_build_info.compiler_version
> dpdk_version = self.dpdk_build_info.dpdk_version
>
> + ports = [asdict(port) for port in self.ports]
> + for port in ports:
> + port["config"] = cast(PortConfig, port["config"]).model_dump()
> +
> return {
> "compiler_version": compiler_version,
> "dpdk_version": dpdk_version,
> - "ports": [asdict(port) for port in self.ports],
> + "ports": ports,
> "test_suites": [child.to_dict() for child in self.child_results],
> "summary": results | self.generate_pass_rate_dict(results),
> }
> diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
> index 6a7a1f5b6c..7b06ecd715 100644
> --- a/dts/framework/testbed_model/capability.py
> +++ b/dts/framework/testbed_model/capability.py
You might want to look at the docstring for 'TopologyCapability':
given your changes to 'TopologyType', the following might need to be
reworded:
"Each test case must be assigned a topology. It could be done explicitly;
the implicit default is :attr:`~.topology.TopologyType.default`, which
this class defines
as equal to :attr:`~.topology.TopologyType.two_links`."
'default' is no longer technically an attribute.
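For context, the reason `default` can no longer stay a plain enum member: with the stdlib `Enum` (which this patch switches to from `aenum.NoAliasEnum`), a second member with the same value silently becomes an alias of the first rather than a distinct attribute. A minimal sketch of both behaviors:

```python
from enum import Enum


class TopologyType(int, Enum):
    """Stdlib-Enum version, as in the patch: default is a classmethod."""

    no_link = 0
    one_link = 1
    two_links = 2

    @classmethod
    def default(cls) -> "TopologyType":
        return cls.two_links


class Aliased(int, Enum):
    """What would happen if `default = 2` stayed a member on a stdlib Enum."""

    two_links = 2
    default = 2  # stdlib Enum silently makes this an alias of two_links


# The classmethod keeps a distinct, unambiguous default:
default_topology = TopologyType.default()
# ...whereas the member form collapses into the existing member:
alias_collapses = Aliased.default is Aliased.two_links
```

This is why the series replaces the member with a `default()` classmethod and drops the `aenum` dependency altogether.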
> @@ -362,10 +362,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
> the test suite's.
> """
> if inspect.isclass(test_case_or_suite):
> - if self.topology_type is not TopologyType.default:
> + if self.topology_type is not TopologyType.default():
> self.add_to_required(test_case_or_suite)
> for test_case in test_case_or_suite.get_test_cases():
> - if test_case.topology_type.topology_type is TopologyType.default:
> + if test_case.topology_type.topology_type is TopologyType.default():
> # test case topology has not been set, use the one set by the test suite
> self.add_to_required(test_case)
> elif test_case.topology_type > test_case_or_suite.topology_type:
> @@ -428,14 +428,8 @@ def __hash__(self):
> return self.topology_type.__hash__()
>
> def __str__(self):
> - """Easy to read string of class and name of :attr:`topology_type`.
> -
> - Converts :attr:`TopologyType.default` to the actual value.
> - """
> - name = self.topology_type.name
> - if self.topology_type is TopologyType.default:
> - name = TopologyType.get_from_value(self.topology_type.value).name
> - return f"{type(self.topology_type).__name__}.{name}"
> + """Easy to read string of class and name of :attr:`topology_type`."""
> + return f"{type(self.topology_type).__name__}.{self.topology_type.name}"
>
> def __repr__(self):
> """Easy to read string of class and name of :attr:`topology_type`."""
> @@ -450,7 +444,7 @@ class TestProtocol(Protocol):
> #: The reason for skipping the test case or suite.
> skip_reason: ClassVar[str] = ""
> #: The topology type of the test case or suite.
> - topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
> + topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default())
> #: The capabilities the test case or suite requires in order to be executed.
> required_capabilities: ClassVar[set[Capability]] = set()
>
> @@ -466,7 +460,7 @@ def get_test_cases(cls) -> list[type["TestCase"]]:
>
> def requires(
> *nic_capabilities: NicCapability,
> - topology_type: TopologyType = TopologyType.default,
> + topology_type: TopologyType = TopologyType.default(),
> ) -> Callable[[type[TestProtocol]], type[TestProtocol]]:
> """A decorator that adds the required capabilities to a test case or test suite.
>
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index e53a321499..0acd746073 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -14,6 +14,7 @@
> """
>
> from abc import ABC
> +from functools import cached_property
>
> from framework.config.node import (
> OS,
> @@ -86,6 +87,11 @@ def _init_ports(self) -> None:
> self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
> self.main_session.update_ports(self.ports)
>
> + @cached_property
> + def ports_by_name(self) -> dict[str, Port]:
> + """Ports mapped by the name assigned at configuration."""
> + return {port.name: port for port in self.ports}
> +
> def set_up_test_run(
> self,
> test_run_config: TestRunConfiguration,
> diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
> index 7177da3371..8014d4a100 100644
> --- a/dts/framework/testbed_model/port.py
> +++ b/dts/framework/testbed_model/port.py
> @@ -1,6 +1,7 @@
> # SPDX-License-Identifier: BSD-3-Clause
> # Copyright(c) 2022 University of New Hampshire
> # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2025 Arm Limited
>
> """NIC port model.
>
> @@ -13,19 +14,6 @@
> from framework.config.node import PortConfig
>
>
> -@dataclass(slots=True, frozen=True)
> -class PortIdentifier:
> - """The port identifier.
> -
> - Attributes:
> - node: The node where the port resides.
> - pci: The PCI address of the port on `node`.
> - """
> -
> - node: str
> - pci: str
> -
> -
> @dataclass(slots=True)
> class Port:
> """Physical port on a node.
> @@ -36,20 +24,13 @@ class Port:
> and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
>
> Attributes:
> - identifier: The PCI address of the port on a node.
> - os_driver: The operating system driver name when the operating system controls the port,
> - e.g.: ``i40e``.
> - os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
> - peer: The identifier of a port this port is connected with.
> - The `peer` is on a different node.
> + config: The port's configuration.
> mac_address: The MAC address of the port.
> logical_name: The logical name of the port. Must be discovered.
> """
>
> - identifier: PortIdentifier
> - os_driver: str
> - os_driver_for_dpdk: str
> - peer: PortIdentifier
> + _node: str
> + config: PortConfig
> mac_address: str = ""
> logical_name: str = ""
>
> @@ -60,33 +41,20 @@ def __init__(self, node_name: str, config: PortConfig):
> node_name: The name of the port's node.
> config: The test run configuration of the port.
> """
> - self.identifier = PortIdentifier(
> - node=node_name,
> - pci=config.pci,
> - )
> - self.os_driver = config.os_driver
> - self.os_driver_for_dpdk = config.os_driver_for_dpdk
> - self.peer = PortIdentifier(node=config.peer_node, pci=config.peer_pci)
> + self._node = node_name
> + self.config = config
>
> @property
> def node(self) -> str:
> """The node where the port resides."""
> - return self.identifier.node
> + return self._node
> +
> + @property
> + def name(self) -> str:
> + """The name of the port."""
> + return self.config.name
>
> @property
> def pci(self) -> str:
> """The PCI address of the port."""
> - return self.identifier.pci
> -
> -
> -@dataclass(slots=True, frozen=True)
> -class PortLink:
> - """The physical, cabled connection between the ports.
> -
> - Attributes:
> - sut_port: The port on the SUT node connected to `tg_port`.
> - tg_port: The port on the TG node connected to `sut_port`.
> - """
> -
> - sut_port: Port
> - tg_port: Port
> + return self.config.pci
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 483733cede..440b5a059b 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -515,7 +515,7 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
> return
>
> for port in self.ports:
> - driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
> + driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
> self.main_session.send_command(
> f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
> privileged=True,
> diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
> index caee9b22ea..814c3f3fe4 100644
> --- a/dts/framework/testbed_model/topology.py
> +++ b/dts/framework/testbed_model/topology.py
> @@ -1,5 +1,6 @@
> # SPDX-License-Identifier: BSD-3-Clause
> # Copyright(c) 2024 PANTHEON.tech s.r.o.
> +# Copyright(c) 2025 Arm Limited
>
> """Testbed topology representation.
>
> @@ -7,14 +8,9 @@
> The link information then implies what type of topology is available.
> """
>
> -from dataclasses import dataclass
> -from os import environ
> -from typing import TYPE_CHECKING, Iterable
> -
> -if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
> - from enum import Enum as NoAliasEnum
> -else:
> - from aenum import NoAliasEnum
> +from collections.abc import Iterator
> +from enum import Enum
> +from typing import NamedTuple
>
> from framework.config.node import PortConfig
> from framework.exception import ConfigurationError
> @@ -22,7 +18,7 @@
> from .port import Port
>
>
> -class TopologyType(int, NoAliasEnum):
> +class TopologyType(int, Enum):
> """Supported topology types."""
>
> #: A topology with no Traffic Generator.
> @@ -31,34 +27,20 @@ class TopologyType(int, NoAliasEnum):
> one_link = 1
> #: A topology with two physical links between the Sut node and the TG node.
> two_links = 2
> - #: The default topology required by test cases if not specified otherwise.
> - default = 2
>
> @classmethod
> - def get_from_value(cls, value: int) -> "TopologyType":
> - r"""Get the corresponding instance from value.
> + def default(cls) -> "TopologyType":
> + """The default topology required by test cases if not specified otherwise."""
> + return cls.two_links
>
> - :class:`~enum.Enum`\s that don't allow aliases don't know which instance should be returned
> - as there could be multiple valid instances. Except for the :attr:`default` value,
> - :class:`TopologyType` is a regular :class:`~enum.Enum`.
> - When getting an instance from value, we're not interested in the default,
> - since we already know the value, allowing us to remove the ambiguity.
>
> - Args:
> - value: The value of the requested enum.
> +class PortLink(NamedTuple):
> + """The physical, cabled connection between the ports."""
>
> - Raises:
> - ConfigurationError: If an unsupported link topology is supplied.
> - """
> - match value:
> - case 0:
> - return TopologyType.no_link
> - case 1:
> - return TopologyType.one_link
> - case 2:
> - return TopologyType.two_links
> - case _:
> - raise ConfigurationError("More than two links in a topology are not supported.")
> + #: The port on the SUT node connected to `tg_port`.
> + sut_port: Port
> + #: The port on the TG node connected to `sut_port`.
> + tg_port: Port
>
>
> class Topology:
> @@ -89,55 +71,43 @@ class Topology:
> sut_port_egress: Port
> tg_port_ingress: Port
>
> - def __init__(self, sut_ports: Iterable[Port], tg_ports: Iterable[Port]):
> - """Create the topology from `sut_ports` and `tg_ports`.
> + def __init__(self, port_links: Iterator[PortLink]):
> + """Create the topology from `port_links`.
>
> Args:
> - sut_ports: The SUT node's ports.
> - tg_ports: The TG node's ports.
> + port_links: The test run's required port links.
> +
> + Raises:
> + ConfigurationError: If an unsupported link topology is supplied.
> """
> - port_links = []
> - for sut_port in sut_ports:
> - for tg_port in tg_ports:
> - if (sut_port.identifier, sut_port.peer) == (
> - tg_port.peer,
> - tg_port.identifier,
> - ):
> - port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
> -
> - self.type = TopologyType.get_from_value(len(port_links))
> dummy_port = Port(
> "",
> PortConfig(
> + name="dummy",
> pci="0000:00:00.0",
> os_driver_for_dpdk="",
> os_driver="",
> - peer_node="",
> - peer_pci="0000:00:00.0",
> ),
> )
> +
> + self.type = TopologyType.no_link
> self.tg_port_egress = dummy_port
> self.sut_port_ingress = dummy_port
> self.sut_port_egress = dummy_port
> self.tg_port_ingress = dummy_port
> - if self.type > TopologyType.no_link:
> - self.tg_port_egress = port_links[0].tg_port
> - self.sut_port_ingress = port_links[0].sut_port
> +
> + if port_link := next(port_links, None):
> + self.type = TopologyType.one_link
> + self.tg_port_egress = port_link.tg_port
> + self.sut_port_ingress = port_link.sut_port
> self.sut_port_egress = self.sut_port_ingress
> self.tg_port_ingress = self.tg_port_egress
> - if self.type > TopologyType.one_link:
> - self.sut_port_egress = port_links[1].sut_port
> - self.tg_port_ingress = port_links[1].tg_port
>
> + if port_link := next(port_links, None):
> + self.type = TopologyType.two_links
> + self.sut_port_egress = port_link.sut_port
> + self.tg_port_ingress = port_link.tg_port
>
> -@dataclass(slots=True, frozen=True)
> -class PortLink:
> - """The physical, cabled connection between the ports.
> -
> - Attributes:
> - sut_port: The port on the SUT node connected to `tg_port`.
> - tg_port: The port on the TG node connected to `sut_port`.
> - """
> -
> - sut_port: Port
> - tg_port: Port
> + if next(port_links, None) is not None:
> + msg = "More than two links in a topology are not supported."
> + raise ConfigurationError(msg)
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index 66f37a8813..d6f4c11d58 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -32,7 +32,13 @@
> _REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
> _REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
> REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
> -REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
> +REGEX_FOR_BASE64_ENCODING: str = r"[-a-zA-Z0-9+\\/]*={0,3}"
> +REGEX_FOR_IDENTIFIER: str = r"\w+(?:[\w -]*\w+)?"
> +REGEX_FOR_PORT_LINK: str = (
> + rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # left side
> + r"\s+<->\s+"
> + rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # right side
> +)
>
>
> def expand_range(range_str: str) -> list[int]:
> diff --git a/dts/nodes.example.yaml b/dts/nodes.example.yaml
> index 454d97ab5d..6140dd9b7e 100644
> --- a/dts/nodes.example.yaml
> +++ b/dts/nodes.example.yaml
> @@ -9,18 +9,14 @@
> user: dtsuser
> os: linux
> ports:
> - # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
> - - pci: "0000:00:08.0"
> + - name: Port 0
> + pci: "0000:00:08.0"
> os_driver_for_dpdk: vfio-pci # OS driver that DPDK will use
> os_driver: i40e # OS driver to bind when the tests are not running
> - peer_node: "TG 1"
> - peer_pci: "0000:00:08.0"
> - # sets up the physical link between "SUT 1"@0000:00:08.1 and "TG 1"@0000:00:08.1
> - - pci: "0000:00:08.1"
> + - name: Port 1
> + pci: "0000:00:08.1"
> os_driver_for_dpdk: vfio-pci
> os_driver: i40e
> - peer_node: "TG 1"
> - peer_pci: "0000:00:08.1"
> hugepages_2mb: # optional; if removed, will use system hugepage configuration
> number_of: 256
> force_first_numa: false
> @@ -34,18 +30,14 @@
> user: dtsuser
> os: linux
> ports:
> - # sets up the physical link between "TG 1"@0000:00:08.0 and "SUT 1"@0000:00:08.0
> - - pci: "0000:00:08.0"
> + - name: Port 0
> + pci: "0000:00:08.0"
> os_driver_for_dpdk: rdma
> os_driver: rdma
> - peer_node: "SUT 1"
> - peer_pci: "0000:00:08.0"
> - # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
> - - pci: "0000:00:08.1"
> + - name: Port 1
> + pci: "0000:00:08.1"
> os_driver_for_dpdk: rdma
> os_driver: rdma
> - peer_node: "SUT 1"
> - peer_pci: "0000:00:08.1"
> hugepages_2mb: # optional; if removed, will use system hugepage configuration
> number_of: 256
> force_first_numa: false
> diff --git a/dts/test_runs.example.yaml b/dts/test_runs.example.yaml
> index 5cc167ebe1..821d6918d0 100644
> --- a/dts/test_runs.example.yaml
> +++ b/dts/test_runs.example.yaml
> @@ -32,3 +32,6 @@
> system_under_test_node: "SUT 1"
> # Traffic generator node to use for this execution environment
> traffic_generator_node: "TG 1"
> + port_topology:
> + - sut.Port 0 <-> tg.Port 0 # explicit link
> + - Port 1 <-> Port 1 # implicit link, left side is always SUT, right side is always TG.
This suggestion might be a bit strange, but perhaps you would want to
add underscores to 'Port 0' and 'Port 1', given that some of the
framework documentation you wrote in 'LinkPortIdentifier' uses
underscores. Not sure how relevant that is, but maybe it would
mitigate confusion if either of the 'sut_port' or 'tg_port' cached
properties throws an exception to the user.
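To double-check the naming concern above: the `REGEX_FOR_IDENTIFIER` pattern in the patch allows spaces as well as underscores, so both spellings should parse. A quick sanity check, reproducing the patterns from the patch's dts/framework/utils.py:

```python
import re

# Patterns copied from the patch's dts/framework/utils.py.
REGEX_FOR_IDENTIFIER = r"\w+(?:[\w -]*\w+)?"
REGEX_FOR_PORT_LINK = (
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # left side
    r"\s+<->\s+"
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # right side
)

# Space-separated names with explicit node nomination:
explicit = re.match(REGEX_FOR_PORT_LINK, "sut.Port 0 <-> tg.Port 0", re.I)
# Underscored names with implicit nomination (left=SUT, right=TG):
implicit = re.match(REGEX_FOR_PORT_LINK, "PORT_0 <-> PORT_0", re.I)
```

So the regex itself is fine with either form; the inconsistency is only between the example config ("Port 0") and the exception message ("PORT_0").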
> --
> 2.43.0
>
* Re: [RFC PATCH 2/7] dts: isolate test specification to config
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
@ 2025-02-10 19:09 ` Nicholas Pratte
2025-02-11 18:11 ` Dean Marx
1 sibling, 0 replies; 33+ messages in thread
From: Nicholas Pratte @ 2025-02-10 19:09 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
This makes sense to me!
Reviewed-by: Nicholas Pratte <npratte@iol.unh.edu>
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> In an effort to improve separation of concerns, make the TestRunConfig
> class responsible for processing the configured test suites. Moreover,
> give TestSuiteConfig a facility to yield references to the selected test
> cases.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
> ---
> dts/framework/config/__init__.py | 84 +++++++++++---------------------
> dts/framework/config/test_run.py | 59 +++++++++++++++++-----
> 2 files changed, 76 insertions(+), 67 deletions(-)
>
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index f8ac2c0d18..7761a8b56f 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -27,9 +27,8 @@
> and makes it thread safe should we ever want to move in that direction.
> """
>
> -from functools import cached_property
> from pathlib import Path
> -from typing import Annotated, Any, Literal, NamedTuple, TypeVar, cast
> +from typing import Annotated, Any, Literal, TypeVar, cast
>
> import yaml
> from pydantic import Field, TypeAdapter, ValidationError, field_validator, model_validator
> @@ -46,18 +45,6 @@
> )
> from .test_run import TestRunConfiguration
>
> -
> -class TestRunWithNodesConfiguration(NamedTuple):
> - """Tuple containing the configuration of the test run and its associated nodes."""
> -
> - #:
> - test_run_config: TestRunConfiguration
> - #:
> - sut_node_config: SutNodeConfiguration
> - #:
> - tg_node_config: TGNodeConfiguration
> -
> -
> TestRunsConfig = Annotated[list[TestRunConfiguration], Field(min_length=1)]
>
> NodesConfig = Annotated[list[NodeConfigurationTypes], Field(min_length=1)]
> @@ -71,40 +58,6 @@ class Configuration(FrozenModel):
> #: Node configurations.
> nodes: NodesConfig
>
> - @cached_property
> - def test_runs_with_nodes(self) -> list[TestRunWithNodesConfiguration]:
> - """List of test runs with the associated nodes."""
> - test_runs_with_nodes = []
> -
> - for test_run_no, test_run in enumerate(self.test_runs):
> - sut_node_name = test_run.system_under_test_node
> - sut_node = next(filter(lambda n: n.name == sut_node_name, self.nodes), None)
> -
> - assert sut_node is not None, (
> - f"test_runs.{test_run_no}.sut_node_config.node_name "
> - f"({test_run.system_under_test_node}) is not a valid node name"
> - )
> - assert isinstance(sut_node, SutNodeConfiguration), (
> - f"test_runs.{test_run_no}.sut_node_config.node_name is a valid node name, "
> - "but it is not a valid SUT node"
> - )
> -
> - tg_node_name = test_run.traffic_generator_node
> - tg_node = next(filter(lambda n: n.name == tg_node_name, self.nodes), None)
> -
> - assert tg_node is not None, (
> - f"test_runs.{test_run_no}.tg_node_name "
> - f"({test_run.traffic_generator_node}) is not a valid node name"
> - )
> - assert isinstance(tg_node, TGNodeConfiguration), (
> - f"test_runs.{test_run_no}.tg_node_name is a valid node name, "
> - "but it is not a valid TG node"
> - )
> -
> - test_runs_with_nodes.append(TestRunWithNodesConfiguration(test_run, sut_node, tg_node))
> -
> - return test_runs_with_nodes
> -
> @field_validator("nodes")
> @classmethod
> def validate_node_names(cls, nodes: list[NodeConfiguration]) -> list[NodeConfiguration]:
> @@ -164,14 +117,33 @@ def validate_port_links(self) -> Self:
> return self
>
> @model_validator(mode="after")
> - def validate_test_runs_with_nodes(self) -> Self:
> - """Validate the test runs to nodes associations.
> -
> - This validator relies on the cached property `test_runs_with_nodes` to run for the first
> - time in this call, therefore triggering the assertions if needed.
> - """
> - if self.test_runs_with_nodes:
> - pass
> + def validate_test_runs_against_nodes(self) -> Self:
> + """Validate the test runs to nodes associations."""
> + for test_run_no, test_run in enumerate(self.test_runs):
> + sut_node_name = test_run.system_under_test_node
> + sut_node = next((n for n in self.nodes if n.name == sut_node_name), None)
> +
> + assert sut_node is not None, (
> + f"Test run {test_run_no}.system_under_test_node "
> + f"({sut_node_name}) is not a valid node name."
> + )
> + assert isinstance(sut_node, SutNodeConfiguration), (
> + f"Test run {test_run_no}.system_under_test_node is a valid node name, "
> + "but it is not a valid SUT node."
> + )
> +
> + tg_node_name = test_run.traffic_generator_node
> + tg_node = next((n for n in self.nodes if n.name == tg_node_name), None)
> +
> + assert tg_node is not None, (
> + f"Test run {test_run_no}.traffic_generator_name "
> + f"({tg_node_name}) is not a valid node name."
> + )
> + assert isinstance(tg_node, TGNodeConfiguration), (
> + f"Test run {test_run_no}.traffic_generator_name is a valid node name, "
> + "but it is not a valid TG node."
> + )
> +
> return self
>
>
> diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
> index 2092da725e..9ea898b15c 100644
> --- a/dts/framework/config/test_run.py
> +++ b/dts/framework/config/test_run.py
> @@ -11,6 +11,8 @@
>
> import re
> import tarfile
> +from collections import deque
> +from collections.abc import Iterable
> from enum import auto, unique
> from functools import cached_property
> from pathlib import Path, PurePath
> @@ -25,7 +27,7 @@
> from .common import FrozenModel, load_fields_from_settings
>
> if TYPE_CHECKING:
> - from framework.test_suite import TestSuiteSpec
> + from framework.test_suite import TestCase, TestSuite, TestSuiteSpec
>
>
> @unique
> @@ -233,6 +235,21 @@ def test_suite_spec(self) -> "TestSuiteSpec":
> ), f"{self.test_suite_name} is not a valid test suite module name."
> return test_suite_spec
>
> + @cached_property
> + def test_cases(self) -> list[type["TestCase"]]:
> + """The objects of the selected test cases."""
> + available_test_cases = {t.name: t for t in self.test_suite_spec.class_obj.get_test_cases()}
> + selected_test_cases = []
> +
> + for requested_test_case in self.test_cases_names:
> + assert requested_test_case in available_test_cases, (
> + f"{requested_test_case} is not a valid test case "
> + f"of test suite {self.test_suite_name}."
> + )
> + selected_test_cases.append(available_test_cases[requested_test_case])
> +
> + return selected_test_cases or list(available_test_cases.values())
> +
> @model_validator(mode="before")
> @classmethod
> def convert_from_string(cls, data: Any) -> Any:
> @@ -246,17 +263,11 @@ def convert_from_string(cls, data: Any) -> Any:
> def validate_names(self) -> Self:
> """Validate the supplied test suite and test cases names.
>
> - This validator relies on the cached property `test_suite_spec` to run for the first
> - time in this call, therefore triggering the assertions if needed.
> + This validator relies on the cached properties `test_suite_spec` and `test_cases` to run for
> + the first time in this call, therefore triggering the assertions if needed.
> """
> - available_test_cases = map(
> - lambda t: t.name, self.test_suite_spec.class_obj.get_test_cases()
> - )
> - for requested_test_case in self.test_cases_names:
> - assert requested_test_case in available_test_cases, (
> - f"{requested_test_case} is not a valid test case "
> - f"of test suite {self.test_suite_name}."
> - )
> + if self.test_cases:
> + pass
>
> return self
>
> @@ -383,3 +394,29 @@ class TestRunConfiguration(FrozenModel):
> fields_from_settings = model_validator(mode="before")(
> load_fields_from_settings("test_suites", "random_seed")
> )
> +
> + def filter_tests(
> + self,
> + ) -> Iterable[tuple[type["TestSuite"], deque[type["TestCase"]]]]:
> + """Filter test suites and cases selected for execution."""
> + from framework.test_suite import TestCaseType
> +
> + test_suites = [TestSuiteConfig(test_suite="smoke_tests")]
> +
> + if self.skip_smoke_tests:
> + test_suites = self.test_suites
> + else:
> + test_suites += self.test_suites
> +
> + return (
> + (
> + t.test_suite_spec.class_obj,
> + deque(
> + tt
> + for tt in t.test_cases
> + if (tt.test_type is TestCaseType.FUNCTIONAL and self.func)
> + or (tt.test_type is TestCaseType.PERFORMANCE and self.perf)
> + ),
> + )
> + for t in test_suites
> + )
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 33+ messages in thread
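The `filter_tests` method in the patch above prepends the smoke test suite unless it is skipped, then keeps only the case types enabled by the func/perf settings. A rough standalone sketch of that selection logic — `Case` and `SuiteConfig` are simplified stand-ins for the real `TestCase` and `TestSuiteConfig` models, not the actual framework classes:

```python
from collections import deque
from dataclasses import dataclass, field
from enum import Enum, auto


class TestCaseType(Enum):
    FUNCTIONAL = auto()
    PERFORMANCE = auto()


@dataclass
class Case:
    """Stand-in for a framework test case class."""

    name: str
    test_type: TestCaseType


@dataclass
class SuiteConfig:
    """Stand-in for TestSuiteConfig: a suite name plus its selected cases."""

    name: str
    cases: list[Case] = field(default_factory=list)


SMOKE = SuiteConfig("smoke_tests", [Case("device_bound", TestCaseType.FUNCTIONAL)])


def filter_tests(suites, *, skip_smoke, func, perf):
    """Prepend the smoke suite unless skipped, then keep only enabled case types."""
    selected = list(suites) if skip_smoke else [SMOKE, *suites]
    return [
        (
            s.name,
            deque(
                c.name
                for c in s.cases
                if (c.test_type is TestCaseType.FUNCTIONAL and func)
                or (c.test_type is TestCaseType.PERFORMANCE and perf)
            ),
        )
        for s in selected
    ]
```

The deques mirror the patch's return type, where cases are consumed in order by the runner.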
* Re: [RFC PATCH 3/7] dts: revamp Topology model
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
@ 2025-02-10 19:42 ` Nicholas Pratte
2025-02-11 18:18 ` Dean Marx
1 sibling, 0 replies; 33+ messages in thread
From: Nicholas Pratte @ 2025-02-10 19:42 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
Reviewed-by: Nicholas Pratte <npratte@iol.unh.edu>
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Change the Topology model to add further flexibility in its usage as a
> standalone entry point to test suites.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
> ---
> dts/framework/testbed_model/topology.py | 85 +++++++++++++------------
> 1 file changed, 45 insertions(+), 40 deletions(-)
>
> diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
> index 814c3f3fe4..cf5c2c28ba 100644
> --- a/dts/framework/testbed_model/topology.py
> +++ b/dts/framework/testbed_model/topology.py
> @@ -9,10 +9,12 @@
> """
>
> from collections.abc import Iterator
> +from dataclasses import dataclass
> from enum import Enum
> from typing import NamedTuple
>
> -from framework.config.node import PortConfig
> +from typing_extensions import Self
> +
> from framework.exception import ConfigurationError
>
> from .port import Port
> @@ -43,35 +45,32 @@ class PortLink(NamedTuple):
> tg_port: Port
>
>
> +@dataclass(frozen=True)
> class Topology:
> """Testbed topology.
>
> The topology contains ports processed into ingress and egress ports.
> - If there are no ports on a node, dummy ports (ports with no actual values) are stored.
> - If there is only one link available, the ports of this link are stored
> + If there are no ports on a node, accesses to :attr:`~Topology.tg_port_egress` and the like will
> + raise an exception. If there is only one link available, the ports of this link are stored
> as both ingress and egress ports.
>
> - The dummy ports shouldn't be used. It's up to :class:`~framework.runner.DTSRunner`
> - to ensure no test case or suite requiring actual links is executed
> - when the topology prohibits it and up to the developers to make sure that test cases
> - not requiring any links don't use any ports. Otherwise, the underlying methods
> - using the ports will fail.
> + It's up to :class:`~framework.test_run.TestRun` to ensure no test case or suite requiring actual
> + links is executed when the topology prohibits it and up to the developers to make sure that test
> + cases not requiring any links don't use any ports. Otherwise, the underlying methods using the
> + ports will fail.
>
> Attributes:
> type: The type of the topology.
> - tg_port_egress: The egress port of the TG node.
> - sut_port_ingress: The ingress port of the SUT node.
> - sut_port_egress: The egress port of the SUT node.
> - tg_port_ingress: The ingress port of the TG node.
> + sut_ports: The SUT ports.
> + tg_ports: The TG ports.
> """
>
> type: TopologyType
> - tg_port_egress: Port
> - sut_port_ingress: Port
> - sut_port_egress: Port
> - tg_port_ingress: Port
> + sut_ports: list[Port]
> + tg_ports: list[Port]
>
> - def __init__(self, port_links: Iterator[PortLink]):
> + @classmethod
> + def from_port_links(cls, port_links: Iterator[PortLink]) -> Self:
> """Create the topology from `port_links`.
>
> Args:
> @@ -80,34 +79,40 @@ def __init__(self, port_links: Iterator[PortLink]):
> Raises:
> ConfigurationError: If an unsupported link topology is supplied.
> """
> - dummy_port = Port(
> - "",
> - PortConfig(
> - name="dummy",
> - pci="0000:00:00.0",
> - os_driver_for_dpdk="",
> - os_driver="",
> - ),
> - )
> -
> - self.type = TopologyType.no_link
> - self.tg_port_egress = dummy_port
> - self.sut_port_ingress = dummy_port
> - self.sut_port_egress = dummy_port
> - self.tg_port_ingress = dummy_port
> + type = TopologyType.no_link
>
> if port_link := next(port_links, None):
> - self.type = TopologyType.one_link
> - self.tg_port_egress = port_link.tg_port
> - self.sut_port_ingress = port_link.sut_port
> - self.sut_port_egress = self.sut_port_ingress
> - self.tg_port_ingress = self.tg_port_egress
> + type = TopologyType.one_link
> + sut_ports = [port_link.sut_port]
> + tg_ports = [port_link.tg_port]
>
> if port_link := next(port_links, None):
> - self.type = TopologyType.two_links
> - self.sut_port_egress = port_link.sut_port
> - self.tg_port_ingress = port_link.tg_port
> + type = TopologyType.two_links
> + sut_ports.append(port_link.sut_port)
> + tg_ports.append(port_link.tg_port)
>
> if next(port_links, None) is not None:
> msg = "More than two links in a topology are not supported."
> raise ConfigurationError(msg)
> +
> + return cls(type, sut_ports, tg_ports)
> +
> + @property
> + def tg_port_egress(self) -> Port:
> + """The egress port of the TG node."""
> + return self.tg_ports[0]
> +
> + @property
> + def sut_port_ingress(self) -> Port:
> + """The ingress port of the SUT node."""
> + return self.sut_ports[0]
> +
> + @property
> + def sut_port_egress(self) -> Port:
> + """The egress port of the SUT node."""
> + return self.sut_ports[1 if self.type is TopologyType.two_links else 0]
> +
> + @property
> + def tg_port_ingress(self) -> Port:
> + """The ingress port of the TG node."""
> + return self.tg_ports[1 if self.type is TopologyType.two_links else 0]
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 33+ messages in thread
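The reworked model above stores raw port lists and resolves the ingress/egress properties from them, falling back to the single shared port when only one link exists. A minimal sketch of that behavior, using plain strings in place of the real `Port` objects:

```python
from dataclasses import dataclass
from enum import Enum, auto


class TopologyType(Enum):
    no_link = auto()
    one_link = auto()
    two_links = auto()


@dataclass(frozen=True)
class Topology:
    """Frozen topology holding the raw port lists, as in the patch."""

    type: TopologyType
    sut_ports: list[str]
    tg_ports: list[str]

    @property
    def tg_port_egress(self) -> str:
        return self.tg_ports[0]

    @property
    def sut_port_ingress(self) -> str:
        return self.sut_ports[0]

    @property
    def sut_port_egress(self) -> str:
        # With a single link, the same port serves both directions.
        return self.sut_ports[1 if self.type is TopologyType.two_links else 0]

    @property
    def tg_port_ingress(self) -> str:
        return self.tg_ports[1 if self.type is TopologyType.two_links else 0]
```

With a no-link topology the lists are empty, so the properties raise `IndexError` on access rather than returning dummy ports.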
* Re: [RFC PATCH 1/7] dts: add port topology configuration
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
2025-02-07 18:25 ` Nicholas Pratte
@ 2025-02-11 18:00 ` Dean Marx
2025-02-12 16:47 ` Luca Vizzarro
1 sibling, 1 reply; 33+ messages in thread
From: Dean Marx @ 2025-02-11 18:00 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> The current configuration makes the user re-specify the port links for
> each port in an unintuitive and repetitive way. Moreover, this design
> does not give the user the opportunity to map the port topology as
> desired.
>
> This change adds a port_topology field in the test runs, so that the
> user can map topologies for each run as required. Moreover it
> simplifies the process to link ports by defining a user friendly notation.
>
> Bugzilla ID: 1478
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
<snip>
> system_under_test_node: "SUT 1"
> # Traffic generator node to use for this execution environment
> traffic_generator_node: "TG 1"
> + port_topology:
> + - sut.Port 0 <-> tg.Port 0 # explicit link
> + - Port 1 <-> Port 1 # implicit link, left side is always SUT, right side is always TG.
Looks good to me, one small thing I think might be helpful to the user
is adding a section in the explicit link comment explaining that "sut"
and "tg" are the only options for the port_topology nodes, unlike the
port name which can be changed in the nodes.yaml file. This way the
user doesn't have to learn this through the framework, or trial and
error (especially since the ports have a name section as well.)
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
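The `<->` notation reviewed above is parsed in `PortLinkConfig.convert_from_string` via `REGEX_FOR_PORT_LINK`, whose exact pattern is not shown in this thread. A rough approximation of that parsing, with a hypothetical regex standing in for the real one in `dts/framework/utils.py`:

```python
import re

# Hypothetical approximation of REGEX_FOR_PORT_LINK; the real pattern may differ.
PORT_LINK = re.compile(
    r"^(?:(sut|tg)\.)?([\w\- ]+?)\s*<->\s*(?:(sut|tg)\.)?([\w\- ]+)$", re.I
)


def parse_link(text: str) -> tuple[tuple[str, str], tuple[str, str]]:
    """Return ((node_type, port_name), ...) pairs for the left and right side."""
    m = PORT_LINK.match(text.strip())
    assert m is not None, (
        "The provided link is malformed. Please use the following "
        "notation: sut.{port name} <-> tg.{port name}"
    )
    # Implicit nomination: left defaults to the SUT, right to the TG.
    left = ((m.group(1) or "sut").lower(), m.group(2))
    right = ((m.group(3) or "tg").lower(), m.group(4))
    return left, right
```

This also shows the point raised in the review: only `sut` and `tg` are valid node prefixes, while the port names themselves come from `nodes.yaml`.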
* Re: [RFC PATCH 2/7] dts: isolate test specification to config
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
2025-02-10 19:09 ` Nicholas Pratte
@ 2025-02-11 18:11 ` Dean Marx
1 sibling, 0 replies; 33+ messages in thread
From: Dean Marx @ 2025-02-11 18:11 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> In an effort to improve separation of concerns, make the TestRunConfig
> class responsible for processing the configured test suites. Moreover,
> give TestSuiteConfig a facility to yield references to the selected test
> cases.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [RFC PATCH 3/7] dts: revamp Topology model
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
2025-02-10 19:42 ` Nicholas Pratte
@ 2025-02-11 18:18 ` Dean Marx
1 sibling, 0 replies; 33+ messages in thread
From: Dean Marx @ 2025-02-11 18:18 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Change the Topology model to add further flexibility in its usage as a
> standalone entry point to test suites.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [RFC PATCH 4/7] dts: improve Port model
2025-02-03 15:16 ` [RFC PATCH 4/7] dts: improve Port model Luca Vizzarro
@ 2025-02-11 18:56 ` Dean Marx
0 siblings, 0 replies; 33+ messages in thread
From: Dean Marx @ 2025-02-11 18:56 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Make Port models standalone, so that they can be directly manipulated by
> the test suites as needed. Moreover, let them handle themselves and
> retrieve the logical name and mac address in autonomy.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [RFC PATCH 5/7] dts: add runtime status
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
@ 2025-02-11 19:45 ` Dean Marx
2025-02-12 18:50 ` Nicholas Pratte
1 sibling, 0 replies; 33+ messages in thread
From: Dean Marx @ 2025-02-11 19:45 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Add a new module which defines the global runtime status of DTS and the
> distinct execution stages and steps.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Looks great!
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [RFC PATCH 6/7] dts: add global runtime context
2025-02-03 15:16 ` [RFC PATCH 6/7] dts: add global runtime context Luca Vizzarro
@ 2025-02-11 20:26 ` Dean Marx
0 siblings, 0 replies; 33+ messages in thread
From: Dean Marx @ 2025-02-11 20:26 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:18 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Add a new context module which holds the runtime context. The new
> context will describe the current scenario and aid underlying classes
> used by the test suites to automatically infer their parameters. This
> further simplifies the test writing process as the test writer won't need
> to be concerned about the nodes, and can directly use the tools, e.g.
> TestPmdShell, as needed which implement context awareness.
>
> Bugzilla ID: 1461
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [RFC PATCH 7/7] dts: revamp runtime internals
2025-02-03 15:16 ` [RFC PATCH 7/7] dts: revamp runtime internals Luca Vizzarro
@ 2025-02-11 20:50 ` Dean Marx
0 siblings, 0 replies; 33+ messages in thread
From: Dean Marx @ 2025-02-11 20:50 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
On Mon, Feb 3, 2025 at 10:18 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Enforce separation of concerns by letting test runs be isolated
> through a new TestRun class and respective module. This also means that
> any actions taken on the nodes must be handled exclusively by the test
> run. An example being the creation and destruction of the traffic
> generator. TestSuiteWithCases is now redundant as the configuration is
> able to provide all the details about the test run's own test suites.
> Any other runtime state which concerns the test runs, now belongs to
> their class.
>
> Finally, as the test run execution is isolated, all the runtime
> internals are held in the new class. Internals which have been
> completely reworked into a finite state machine (FSM), to simplify the
> use and understanding of the different execution states, while
> rendering the process of handling errors less repetitive and easier.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 0/7] dts: revamp framework
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
` (7 preceding siblings ...)
2025-02-04 21:08 ` [RFC PATCH 0/7] dts: revamp framework Dean Marx
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 1/7] dts: add port topology configuration Luca Vizzarro
` (7 more replies)
8 siblings, 8 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Patrick Robb, Paul Szczepanek
Hi there,
sending some more final code, hence dropping the RFC.
v2:
- applied suggested changes to the port topology config
- re-implemented FSM from a generator function to a more traditional
approach where each state is self-contained in its own
class.
- dropped the status enums (as a consequence of the above)
- fixed some docstrings
Best,
Luca
Luca Vizzarro (7):
dts: add port topology configuration
dts: isolate test specification to config
dts: revamp Topology model
dts: improve Port model
dts: add global runtime context
dts: revamp runtime internals
dts: remove node distinction
doc/api/dts/framework.context.rst | 8 +
doc/api/dts/framework.remote_session.dpdk.rst | 8 +
doc/api/dts/framework.remote_session.rst | 1 +
doc/api/dts/framework.test_run.rst | 8 +
doc/api/dts/index.rst | 2 +
doc/guides/conf.py | 3 +-
dts/framework/config/__init__.py | 138 ++--
dts/framework/config/node.py | 96 +--
dts/framework/config/test_run.py | 214 +++++-
dts/framework/context.py | 120 ++++
dts/framework/exception.py | 33 +-
dts/framework/logger.py | 26 +-
.../sut_node.py => remote_session/dpdk.py} | 462 ++++++-------
dts/framework/remote_session/dpdk_shell.py | 59 +-
.../single_active_interactive_shell.py | 12 +-
dts/framework/remote_session/testpmd_shell.py | 27 +-
dts/framework/runner.py | 547 +--------------
dts/framework/test_result.py | 138 +---
dts/framework/test_run.py | 641 ++++++++++++++++++
dts/framework/test_suite.py | 80 ++-
dts/framework/testbed_model/capability.py | 70 +-
dts/framework/testbed_model/linux_session.py | 47 +-
dts/framework/testbed_model/node.py | 100 +--
dts/framework/testbed_model/os_session.py | 16 +-
dts/framework/testbed_model/port.py | 97 ++-
dts/framework/testbed_model/tg_node.py | 105 ---
dts/framework/testbed_model/topology.py | 165 ++---
.../traffic_generator/__init__.py | 8 +-
.../testbed_model/traffic_generator/scapy.py | 12 +-
.../traffic_generator/traffic_generator.py | 9 +-
dts/framework/utils.py | 8 +-
dts/nodes.example.yaml | 24 +-
dts/test_runs.example.yaml | 4 +
dts/tests/TestSuite_blocklist.py | 6 +-
dts/tests/TestSuite_checksum_offload.py | 14 +-
dts/tests/TestSuite_dual_vlan.py | 6 +-
dts/tests/TestSuite_dynamic_config.py | 8 +-
dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
dts/tests/TestSuite_hello_world.py | 2 +-
dts/tests/TestSuite_l2fwd.py | 9 +-
dts/tests/TestSuite_mac_filter.py | 10 +-
dts/tests/TestSuite_mtu.py | 17 +-
dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
...stSuite_port_restart_config_persistency.py | 8 +-
dts/tests/TestSuite_promisc_support.py | 8 +-
dts/tests/TestSuite_smoke_tests.py | 9 +-
dts/tests/TestSuite_softnic.py | 6 +-
dts/tests/TestSuite_uni_pkt.py | 14 +-
dts/tests/TestSuite_vlan.py | 8 +-
49 files changed, 1726 insertions(+), 1697 deletions(-)
create mode 100644 doc/api/dts/framework.context.rst
create mode 100644 doc/api/dts/framework.remote_session.dpdk.rst
create mode 100644 doc/api/dts/framework.test_run.rst
create mode 100644 dts/framework/context.py
rename dts/framework/{testbed_model/sut_node.py => remote_session/dpdk.py} (59%)
create mode 100644 dts/framework/test_run.py
delete mode 100644 dts/framework/testbed_model/tg_node.py
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
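The v2 changelog notes the FSM was re-implemented from a generator function into self-contained state classes driven by a loop. A minimal sketch of that pattern — the state names here are illustrative, not the actual states in `dts/framework/test_run.py`:

```python
from abc import ABC, abstractmethod


class State(ABC):
    """One self-contained execution stage of a test run."""

    def __init__(self, log: list[str]):
        self.log = log

    @abstractmethod
    def next(self) -> "State | None":
        """Run this stage and return the next state, or None when finished."""


class Setup(State):
    def next(self) -> "State | None":
        self.log.append("setup")
        return Run(self.log)


class Run(State):
    def next(self) -> "State | None":
        self.log.append("run")
        return Teardown(self.log)


class Teardown(State):
    def next(self) -> "State | None":
        self.log.append("teardown")
        return None


def drive(state: "State | None") -> None:
    """The driver loop: advance states until the run completes."""
    while state is not None:
        state = state.next()
```

Error handling stays local to each state class, which is what makes the approach less repetitive than a single monolithic runner loop.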
* [PATCH v2 1/7] dts: add port topology configuration
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 2/7] dts: isolate test specification to config Luca Vizzarro
` (6 subsequent siblings)
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
The current configuration makes the user re-specify the port links for
each port in an unintuitive and repetitive way. Moreover, this design
does not give the user the opportunity to map the port topology as
desired.
This change adds a port_topology field in the test runs, so that the
user can map topologies for each run as required. Moreover it
simplifies the process to link ports by defining a user friendly notation.
Bugzilla ID: 1478
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
dts/framework/config/__init__.py | 54 ++++++++----
dts/framework/config/node.py | 25 ++++--
dts/framework/config/test_run.py | 85 +++++++++++++++++-
dts/framework/runner.py | 13 ++-
dts/framework/test_result.py | 9 +-
dts/framework/testbed_model/capability.py | 22 ++---
dts/framework/testbed_model/node.py | 6 ++
dts/framework/testbed_model/port.py | 58 +++----------
dts/framework/testbed_model/sut_node.py | 2 +-
dts/framework/testbed_model/topology.py | 100 ++++++++--------------
dts/framework/utils.py | 8 +-
dts/nodes.example.yaml | 24 ++----
dts/test_runs.example.yaml | 4 +
13 files changed, 238 insertions(+), 172 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 08712d2384..79f8141ef6 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -118,24 +118,46 @@ def validate_node_names(self) -> Self:
return self
@model_validator(mode="after")
- def validate_ports(self) -> Self:
- """Validate that the ports are all linked to valid ones."""
- port_links: dict[tuple[str, str], Literal[False] | tuple[int, int]] = {
- (node.name, port.pci): False for node in self.nodes for port in node.ports
+ def validate_port_links(self) -> Self:
+ """Validate that all the test runs' port links are valid."""
+ existing_port_links: dict[tuple[str, str], Literal[False] | tuple[str, str]] = {
+ (node.name, port.name): False for node in self.nodes for port in node.ports
}
- for node_no, node in enumerate(self.nodes):
- for port_no, port in enumerate(node.ports):
- peer_port_identifier = (port.peer_node, port.peer_pci)
- peer_port = port_links.get(peer_port_identifier, None)
- assert (
- peer_port is not None
- ), f"invalid peer port specified for nodes.{node_no}.ports.{port_no}"
- assert peer_port is False, (
- f"the peer port specified for nodes.{node_no}.ports.{port_no} "
- f"is already linked to nodes.{peer_port[0]}.ports.{peer_port[1]}"
- )
- port_links[peer_port_identifier] = (node_no, port_no)
+ defined_port_links = [
+ (test_run_idx, test_run, link_idx, link)
+ for test_run_idx, test_run in enumerate(self.test_runs)
+ for link_idx, link in enumerate(test_run.port_topology)
+ ]
+ for test_run_idx, test_run, link_idx, link in defined_port_links:
+ sut_node_port_peer = existing_port_links.get(
+ (test_run.system_under_test_node, link.sut_port), None
+ )
+ assert sut_node_port_peer is not None, (
+ "Invalid SUT node port specified for link "
+ f"test_runs.{test_run_idx}.port_topology.{link_idx}."
+ )
+
+ assert sut_node_port_peer is False or sut_node_port_peer == link.right, (
+ f"The SUT node port for link test_runs.{test_run_idx}.port_topology.{link_idx} is "
+ f"already linked to port {sut_node_port_peer[0]}.{sut_node_port_peer[1]}."
+ )
+
+ tg_node_port_peer = existing_port_links.get(
+ (test_run.traffic_generator_node, link.tg_port), None
+ )
+ assert tg_node_port_peer is not None, (
+ "Invalid TG node port specified for link "
+ f"test_runs.{test_run_idx}.port_topology.{link_idx}."
+ )
+
> + assert tg_node_port_peer is False or tg_node_port_peer == link.left, (
+ f"The TG node port for link test_runs.{test_run_idx}.port_topology.{link_idx} is "
+ f"already linked to port {tg_node_port_peer[0]}.{tg_node_port_peer[1]}."
+ )
+
+ existing_port_links[link.left] = link.right
+ existing_port_links[link.right] = link.left
return self
diff --git a/dts/framework/config/node.py b/dts/framework/config/node.py
index a7ace514d9..97e0285912 100644
--- a/dts/framework/config/node.py
+++ b/dts/framework/config/node.py
@@ -12,9 +12,10 @@
from enum import Enum, auto, unique
from typing import Annotated, Literal
-from pydantic import Field
+from pydantic import Field, model_validator
+from typing_extensions import Self
-from framework.utils import REGEX_FOR_PCI_ADDRESS, StrEnum
+from framework.utils import REGEX_FOR_IDENTIFIER, REGEX_FOR_PCI_ADDRESS, StrEnum
from .common import FrozenModel
@@ -51,16 +52,14 @@ class HugepageConfiguration(FrozenModel):
class PortConfig(FrozenModel):
r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s."""
+ #: An identifier for the port. May contain letters, digits, underscores, hyphens and spaces.
+ name: str = Field(pattern=REGEX_FOR_IDENTIFIER)
#: The PCI address of the port.
pci: str = Field(pattern=REGEX_FOR_PCI_ADDRESS)
#: The driver that the kernel should bind this device to for DPDK to use it.
os_driver_for_dpdk: str = Field(examples=["vfio-pci", "mlx5_core"])
#: The operating system driver name when the operating system controls the port.
os_driver: str = Field(examples=["i40e", "ice", "mlx5_core"])
- #: The name of the peer node this port is connected to.
- peer_node: str
- #: The PCI address of the peer port connected to this port.
- peer_pci: str = Field(pattern=REGEX_FOR_PCI_ADDRESS)
class TrafficGeneratorConfig(FrozenModel):
@@ -94,7 +93,7 @@ class NodeConfiguration(FrozenModel):
r"""The configuration of :class:`~framework.testbed_model.node.Node`\s."""
#: The name of the :class:`~framework.testbed_model.node.Node`.
- name: str
+ name: str = Field(pattern=REGEX_FOR_IDENTIFIER)
#: The hostname of the :class:`~framework.testbed_model.node.Node`. Can also be an IP address.
hostname: str
#: The name of the user used to connect to the :class:`~framework.testbed_model.node.Node`.
@@ -108,6 +107,18 @@ class NodeConfiguration(FrozenModel):
#: The ports that can be used in testing.
ports: list[PortConfig] = Field(min_length=1)
+ @model_validator(mode="after")
+ def verify_unique_port_names(self) -> Self:
+ """Verify that there are no ports with the same name."""
+ used_port_names: dict[str, int] = {}
+ for idx, port in enumerate(self.ports):
+ assert port.name not in used_port_names, (
+ f"Cannot use port name '{port.name}' for ports.{idx}. "
+ f"This was already used in ports.{used_port_names[port.name]}."
+ )
+ used_port_names[port.name] = idx
+ return self
+
class DPDKConfiguration(FrozenModel):
"""Configuration of the DPDK EAL parameters."""
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index 2fa8d7a782..3d73fb31bb 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -9,16 +9,18 @@
The root model of a test run configuration is :class:`TestRunConfiguration`.
"""
+import re
import tarfile
from enum import auto, unique
from functools import cached_property
from pathlib import Path, PurePath
-from typing import Any, Literal
+from typing import Any, Literal, NamedTuple
from pydantic import Field, field_validator, model_validator
from typing_extensions import TYPE_CHECKING, Self
-from framework.utils import StrEnum
+from framework.exception import InternalError
+from framework.utils import REGEX_FOR_PORT_LINK, StrEnum
from .common import FrozenModel, load_fields_from_settings
@@ -271,6 +273,83 @@ def fetch_all_test_suites() -> list[TestSuiteConfig]:
]
+class LinkPortIdentifier(NamedTuple):
+ """A tuple linking test run node type to port name."""
+
+ node_type: Literal["sut", "tg"]
+ port_name: str
+
+
+class PortLinkConfig(FrozenModel):
+ """A link between the ports of the nodes.
+
+ Can be represented as a string with the following notation:
+
+ .. code::
+
+ sut.{port name} <-> tg.{port name} # explicit node nomination
+ {port name} <-> {port name} # implicit node nomination. Left is SUT, right is TG.
+ """
+
+ #: The port at the left side of the link.
+ left: LinkPortIdentifier
+ #: The port at the right side of the link.
+ right: LinkPortIdentifier
+
+ @cached_property
+ def sut_port(self) -> str:
+ """Port name of the SUT node.
+
+ Raises:
+ InternalError: If a misconfiguration has been allowed to happen.
+ """
+ if self.left.node_type == "sut":
+ return self.left.port_name
+ if self.right.node_type == "sut":
+ return self.right.port_name
+
+ raise InternalError("Unreachable state reached.")
+
+ @cached_property
+ def tg_port(self) -> str:
+ """Port name of the TG node.
+
+ Raises:
+ InternalError: If a misconfiguration has been allowed to happen.
+ """
+ if self.left.node_type == "tg":
+ return self.left.port_name
+ if self.right.node_type == "tg":
+ return self.right.port_name
+
+ raise InternalError("Unreachable state reached.")
+
+ @model_validator(mode="before")
+ @classmethod
+ def convert_from_string(cls, data: Any) -> Any:
+ """Convert the string representation of the model into a valid mapping."""
+ if isinstance(data, str):
+ m = re.match(REGEX_FOR_PORT_LINK, data, re.I)
+ assert m is not None, (
+ "The provided link is malformed. Please use the following "
+ "notation: sut.{port name} <-> tg.{port name}"
+ )
+
+ left = (m.group(1) or "sut").lower(), m.group(2)
+ right = (m.group(3) or "tg").lower(), m.group(4)
+
+ return {"left": left, "right": right}
+ return data
+
+ @model_validator(mode="after")
+ def verify_distinct_nodes(self) -> Self:
+ """Verify that each side of the link has distinct nodes."""
+ assert (
+ self.left.node_type != self.right.node_type
+ ), "Linking ports of the same node is unsupported."
+ return self
+
+
class TestRunConfiguration(FrozenModel):
"""The configuration of a test run.
@@ -296,6 +375,8 @@ class TestRunConfiguration(FrozenModel):
vdevs: list[str] = Field(default_factory=list)
#: The seed to use for pseudo-random generation.
random_seed: int | None = None
+ #: The port links between the specified nodes to use.
+ port_topology: list[PortLinkConfig] = Field(max_length=2)
fields_from_settings = model_validator(mode="before")(
load_fields_from_settings("test_suites", "random_seed")
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 9f9789cf49..60a885d8e6 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -54,7 +54,7 @@
TestSuiteWithCases,
)
from .test_suite import TestCase, TestSuite
-from .testbed_model.topology import Topology
+from .testbed_model.topology import PortLink, Topology
class DTSRunner:
@@ -331,7 +331,13 @@ def _run_test_run(
test_run_result.update_setup(Result.FAIL, e)
else:
- self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
+ topology = Topology(
+ PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
+ for link in test_run_config.port_topology
+ )
+ self._run_test_suites(
+ sut_node, tg_node, topology, test_run_result, test_suites_with_cases
+ )
finally:
try:
@@ -361,6 +367,7 @@ def _run_test_suites(
self,
sut_node: SutNode,
tg_node: TGNode,
+ topology: Topology,
test_run_result: TestRunResult,
test_suites_with_cases: Iterable[TestSuiteWithCases],
) -> None:
@@ -380,11 +387,11 @@ def _run_test_suites(
Args:
sut_node: The test run's SUT node.
tg_node: The test run's TG node.
+ topology: The test run's port topology.
test_run_result: The test run's result.
test_suites_with_cases: The test suites with test cases to run.
"""
end_dpdk_build = False
- topology = Topology(sut_node.ports, tg_node.ports)
supported_capabilities = self._get_supported_capabilities(
sut_node, topology, test_suites_with_cases
)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index bffbc52505..1acb526b64 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -28,8 +28,9 @@
from dataclasses import asdict, dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Any, Callable, TypedDict
+from typing import Any, Callable, TypedDict, cast
+from framework.config.node import PortConfig
from framework.testbed_model.capability import Capability
from .config.test_run import TestRunConfiguration, TestSuiteConfig
@@ -601,10 +602,14 @@ def to_dict(self) -> TestRunResultDict:
compiler_version = self.dpdk_build_info.compiler_version
dpdk_version = self.dpdk_build_info.dpdk_version
+ ports = [asdict(port) for port in self.ports]
+ for port in ports:
+ port["config"] = cast(PortConfig, port["config"]).model_dump()
+
return {
"compiler_version": compiler_version,
"dpdk_version": dpdk_version,
- "ports": [asdict(port) for port in self.ports],
+ "ports": ports,
"test_suites": [child.to_dict() for child in self.child_results],
"summary": results | self.generate_pass_rate_dict(results),
}
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index ddfa3853df..e1215f9703 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -281,8 +281,8 @@ class TopologyCapability(Capability):
"""A wrapper around :class:`~.topology.TopologyType`.
Each test case must be assigned a topology. It could be done explicitly;
- the implicit default is :attr:`~.topology.TopologyType.default`, which this class defines
- as equal to :attr:`~.topology.TopologyType.two_links`.
+ the implicit default is given by :meth:`~.topology.TopologyType.default`, which returns
+ :attr:`~.topology.TopologyType.two_links`.
Test case topology may be set by setting the topology for the whole suite.
The priority in which topology is set is as follows:
@@ -358,10 +358,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
the test suite's.
"""
if inspect.isclass(test_case_or_suite):
- if self.topology_type is not TopologyType.default:
+ if self.topology_type is not TopologyType.default():
self.add_to_required(test_case_or_suite)
for test_case in test_case_or_suite.get_test_cases():
- if test_case.topology_type.topology_type is TopologyType.default:
+ if test_case.topology_type.topology_type is TopologyType.default():
# test case topology has not been set, use the one set by the test suite
self.add_to_required(test_case)
elif test_case.topology_type > test_case_or_suite.topology_type:
@@ -424,14 +424,8 @@ def __hash__(self):
return self.topology_type.__hash__()
def __str__(self):
- """Easy to read string of class and name of :attr:`topology_type`.
-
- Converts :attr:`TopologyType.default` to the actual value.
- """
- name = self.topology_type.name
- if self.topology_type is TopologyType.default:
- name = TopologyType.get_from_value(self.topology_type.value).name
- return f"{type(self.topology_type).__name__}.{name}"
+ """Easy to read string of class and name of :attr:`topology_type`."""
+ return f"{type(self.topology_type).__name__}.{self.topology_type.name}"
def __repr__(self):
"""Easy to read string of class and name of :attr:`topology_type`."""
@@ -446,7 +440,7 @@ class TestProtocol(Protocol):
#: The reason for skipping the test case or suite.
skip_reason: ClassVar[str] = ""
#: The topology type of the test case or suite.
- topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
+ topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default())
#: The capabilities the test case or suite requires in order to be executed.
required_capabilities: ClassVar[set[Capability]] = set()
@@ -462,7 +456,7 @@ def get_test_cases(cls) -> list[type["TestCase"]]:
def requires(
*nic_capabilities: NicCapability,
- topology_type: TopologyType = TopologyType.default,
+ topology_type: TopologyType = TopologyType.default(),
) -> Callable[[type[TestProtocol]], type[TestProtocol]]:
"""A decorator that adds the required capabilities to a test case or test suite.
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e53a321499..0acd746073 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -14,6 +14,7 @@
"""
from abc import ABC
+from functools import cached_property
from framework.config.node import (
OS,
@@ -86,6 +87,11 @@ def _init_ports(self) -> None:
self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
self.main_session.update_ports(self.ports)
+ @cached_property
+ def ports_by_name(self) -> dict[str, Port]:
+ """Ports mapped by the name assigned at configuration."""
+ return {port.name: port for port in self.ports}
+
def set_up_test_run(
self,
test_run_config: TestRunConfiguration,
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 7177da3371..8014d4a100 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2022 University of New Hampshire
# Copyright(c) 2023 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""NIC port model.
@@ -13,19 +14,6 @@
from framework.config.node import PortConfig
-@dataclass(slots=True, frozen=True)
-class PortIdentifier:
- """The port identifier.
-
- Attributes:
- node: The node where the port resides.
- pci: The PCI address of the port on `node`.
- """
-
- node: str
- pci: str
-
-
@dataclass(slots=True)
class Port:
"""Physical port on a node.
@@ -36,20 +24,13 @@ class Port:
and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
Attributes:
- identifier: The PCI address of the port on a node.
- os_driver: The operating system driver name when the operating system controls the port,
- e.g.: ``i40e``.
- os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
- peer: The identifier of a port this port is connected with.
- The `peer` is on a different node.
+ config: The port's configuration.
mac_address: The MAC address of the port.
logical_name: The logical name of the port. Must be discovered.
"""
- identifier: PortIdentifier
- os_driver: str
- os_driver_for_dpdk: str
- peer: PortIdentifier
+ _node: str
+ config: PortConfig
mac_address: str = ""
logical_name: str = ""
@@ -60,33 +41,20 @@ def __init__(self, node_name: str, config: PortConfig):
node_name: The name of the port's node.
config: The test run configuration of the port.
"""
- self.identifier = PortIdentifier(
- node=node_name,
- pci=config.pci,
- )
- self.os_driver = config.os_driver
- self.os_driver_for_dpdk = config.os_driver_for_dpdk
- self.peer = PortIdentifier(node=config.peer_node, pci=config.peer_pci)
+ self._node = node_name
+ self.config = config
@property
def node(self) -> str:
"""The node where the port resides."""
- return self.identifier.node
+ return self._node
+
+ @property
+ def name(self) -> str:
+ """The name of the port."""
+ return self.config.name
@property
def pci(self) -> str:
"""The PCI address of the port."""
- return self.identifier.pci
-
-
-@dataclass(slots=True, frozen=True)
-class PortLink:
- """The physical, cabled connection between the ports.
-
- Attributes:
- sut_port: The port on the SUT node connected to `tg_port`.
- tg_port: The port on the TG node connected to `sut_port`.
- """
-
- sut_port: Port
- tg_port: Port
+ return self.config.pci
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 483733cede..440b5a059b 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -515,7 +515,7 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
return
for port in self.ports:
- driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
+ driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
self.main_session.send_command(
f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
privileged=True,
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index caee9b22ea..814c3f3fe4 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2024 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""Testbed topology representation.
@@ -7,14 +8,9 @@
The link information then implies what type of topology is available.
"""
-from dataclasses import dataclass
-from os import environ
-from typing import TYPE_CHECKING, Iterable
-
-if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
- from enum import Enum as NoAliasEnum
-else:
- from aenum import NoAliasEnum
+from collections.abc import Iterator
+from enum import Enum
+from typing import NamedTuple
from framework.config.node import PortConfig
from framework.exception import ConfigurationError
@@ -22,7 +18,7 @@
from .port import Port
-class TopologyType(int, NoAliasEnum):
+class TopologyType(int, Enum):
"""Supported topology types."""
#: A topology with no Traffic Generator.
@@ -31,34 +27,20 @@ class TopologyType(int, NoAliasEnum):
one_link = 1
#: A topology with two physical links between the Sut node and the TG node.
two_links = 2
- #: The default topology required by test cases if not specified otherwise.
- default = 2
@classmethod
- def get_from_value(cls, value: int) -> "TopologyType":
- r"""Get the corresponding instance from value.
+ def default(cls) -> "TopologyType":
+ """The default topology required by test cases if not specified otherwise."""
+ return cls.two_links
- :class:`~enum.Enum`\s that don't allow aliases don't know which instance should be returned
- as there could be multiple valid instances. Except for the :attr:`default` value,
- :class:`TopologyType` is a regular :class:`~enum.Enum`.
- When getting an instance from value, we're not interested in the default,
- since we already know the value, allowing us to remove the ambiguity.
- Args:
- value: The value of the requested enum.
+class PortLink(NamedTuple):
+ """The physical, cabled connection between the ports."""
- Raises:
- ConfigurationError: If an unsupported link topology is supplied.
- """
- match value:
- case 0:
- return TopologyType.no_link
- case 1:
- return TopologyType.one_link
- case 2:
- return TopologyType.two_links
- case _:
- raise ConfigurationError("More than two links in a topology are not supported.")
+ #: The port on the SUT node connected to `tg_port`.
+ sut_port: Port
+ #: The port on the TG node connected to `sut_port`.
+ tg_port: Port
class Topology:
@@ -89,55 +71,43 @@ class Topology:
sut_port_egress: Port
tg_port_ingress: Port
- def __init__(self, sut_ports: Iterable[Port], tg_ports: Iterable[Port]):
- """Create the topology from `sut_ports` and `tg_ports`.
+ def __init__(self, port_links: Iterator[PortLink]):
+ """Create the topology from `port_links`.
Args:
- sut_ports: The SUT node's ports.
- tg_ports: The TG node's ports.
+ port_links: The test run's required port links.
+
+ Raises:
+ ConfigurationError: If an unsupported link topology is supplied.
"""
- port_links = []
- for sut_port in sut_ports:
- for tg_port in tg_ports:
- if (sut_port.identifier, sut_port.peer) == (
- tg_port.peer,
- tg_port.identifier,
- ):
- port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
-
- self.type = TopologyType.get_from_value(len(port_links))
dummy_port = Port(
"",
PortConfig(
+ name="dummy",
pci="0000:00:00.0",
os_driver_for_dpdk="",
os_driver="",
- peer_node="",
- peer_pci="0000:00:00.0",
),
)
+
+ self.type = TopologyType.no_link
self.tg_port_egress = dummy_port
self.sut_port_ingress = dummy_port
self.sut_port_egress = dummy_port
self.tg_port_ingress = dummy_port
- if self.type > TopologyType.no_link:
- self.tg_port_egress = port_links[0].tg_port
- self.sut_port_ingress = port_links[0].sut_port
+
+ if port_link := next(port_links, None):
+ self.type = TopologyType.one_link
+ self.tg_port_egress = port_link.tg_port
+ self.sut_port_ingress = port_link.sut_port
self.sut_port_egress = self.sut_port_ingress
self.tg_port_ingress = self.tg_port_egress
- if self.type > TopologyType.one_link:
- self.sut_port_egress = port_links[1].sut_port
- self.tg_port_ingress = port_links[1].tg_port
+ if port_link := next(port_links, None):
+ self.type = TopologyType.two_links
+ self.sut_port_egress = port_link.sut_port
+ self.tg_port_ingress = port_link.tg_port
-@dataclass(slots=True, frozen=True)
-class PortLink:
- """The physical, cabled connection between the ports.
-
- Attributes:
- sut_port: The port on the SUT node connected to `tg_port`.
- tg_port: The port on the TG node connected to `sut_port`.
- """
-
- sut_port: Port
- tg_port: Port
+ if next(port_links, None) is not None:
+ msg = "More than two links in a topology are not supported."
+ raise ConfigurationError(msg)
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 66f37a8813..d6f4c11d58 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -32,7 +32,13 @@
_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
_REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
-REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
+REGEX_FOR_BASE64_ENCODING: str = r"[-a-zA-Z0-9+\/]*={0,3}"
+REGEX_FOR_IDENTIFIER: str = r"\w+(?:[\w -]*\w+)?"
+REGEX_FOR_PORT_LINK: str = (
+ rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # left side
+ r"\s+<->\s+"
+ rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})" # right side
+)
def expand_range(range_str: str) -> list[int]:
diff --git a/dts/nodes.example.yaml b/dts/nodes.example.yaml
index 454d97ab5d..891a5c6d92 100644
--- a/dts/nodes.example.yaml
+++ b/dts/nodes.example.yaml
@@ -9,18 +9,14 @@
user: dtsuser
os: linux
ports:
- # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
- - pci: "0000:00:08.0"
+ - name: port-0
+ pci: "0000:00:08.0"
os_driver_for_dpdk: vfio-pci # OS driver that DPDK will use
os_driver: i40e # OS driver to bind when the tests are not running
- peer_node: "TG 1"
- peer_pci: "0000:00:08.0"
- # sets up the physical link between "SUT 1"@0000:00:08.1 and "TG 1"@0000:00:08.1
- - pci: "0000:00:08.1"
+ - name: port-1
+ pci: "0000:00:08.1"
os_driver_for_dpdk: vfio-pci
os_driver: i40e
- peer_node: "TG 1"
- peer_pci: "0000:00:08.1"
hugepages_2mb: # optional; if removed, will use system hugepage configuration
number_of: 256
force_first_numa: false
@@ -34,18 +30,14 @@
user: dtsuser
os: linux
ports:
- # sets up the physical link between "TG 1"@0000:00:08.0 and "SUT 1"@0000:00:08.0
- - pci: "0000:00:08.0"
+ - name: port-0
+ pci: "0000:00:08.0"
os_driver_for_dpdk: rdma
os_driver: rdma
- peer_node: "SUT 1"
- peer_pci: "0000:00:08.0"
- # sets up the physical link between "SUT 1"@0000:00:08.0 and "TG 1"@0000:00:08.0
- - pci: "0000:00:08.1"
+ - name: port-1
+ pci: "0000:00:08.1"
os_driver_for_dpdk: rdma
os_driver: rdma
- peer_node: "SUT 1"
- peer_pci: "0000:00:08.1"
hugepages_2mb: # optional; if removed, will use system hugepage configuration
number_of: 256
force_first_numa: false
diff --git a/dts/test_runs.example.yaml b/dts/test_runs.example.yaml
index 5cc167ebe1..5c78cf6c56 100644
--- a/dts/test_runs.example.yaml
+++ b/dts/test_runs.example.yaml
@@ -32,3 +32,7 @@
system_under_test_node: "SUT 1"
# Traffic generator node to use for this execution environment
traffic_generator_node: "TG 1"
+ port_topology:
+ - sut.port-0 <-> tg.port-0 # explicit link. `sut` and `tg` are special identifiers that refer
+ # to the respective test run's configured nodes.
+ - port-1 <-> port-1 # implicit link, left side is always SUT, right side is always TG.
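For illustration, the new `port_topology` entries can be parsed with the `REGEX_FOR_PORT_LINK` pattern added in `dts/framework/utils.py`. The following standalone sketch copies that pattern verbatim; the `parse_port_link` helper and its node-defaulting behaviour are illustrative only, not the framework's actual parsing code:

```python
import re

# Copied from the REGEX_FOR_PORT_LINK pattern introduced in dts/framework/utils.py.
REGEX_FOR_IDENTIFIER = r"\w+(?:[\w -]*\w+)?"
REGEX_FOR_PORT_LINK = (
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # left side
    r"\s+<->\s+"
    rf"(?:(sut|tg)\.)?({REGEX_FOR_IDENTIFIER})"  # right side
)


def parse_port_link(link: str) -> tuple[str, str, str, str]:
    """Parse one port_topology entry into (left_node, left_port, right_node, right_port).

    Per the example config comments, an implicit left side refers to the SUT
    node and an implicit right side to the TG node.
    """
    match = re.fullmatch(REGEX_FOR_PORT_LINK, link)
    if match is None:
        raise ValueError(f"Invalid port link: {link!r}")
    left_node, left_port, right_node, right_port = match.groups()
    return (left_node or "sut", left_port, right_node or "tg", right_port)


# Both forms used in test_runs.example.yaml parse cleanly.
assert parse_port_link("sut.port-0 <-> tg.port-0") == ("sut", "port-0", "tg", "port-0")
assert parse_port_link("port-1 <-> port-1") == ("sut", "port-1", "tg", "port-1")
```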
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 2/7] dts: isolate test specification to config
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 1/7] dts: add port topology configuration Luca Vizzarro
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 3/7] dts: revamp Topology model Luca Vizzarro
` (5 subsequent siblings)
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
In an effort to improve separation of concerns, make the TestRunConfig
class responsible for processing the configured test suites. Moreover,
give TestSuiteConfig a facility to yield references to the selected test
cases.
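As a rough illustration of the selection flow described above (the class names below are simplified stand-ins, not the framework's actual models), an empty selection means "run every case in the suite", and an invalid name trips an assertion at validation time:

```python
from dataclasses import dataclass, field


@dataclass
class FakeTestSuite:
    """Stand-in for a test suite class that exposes its test case names."""

    name: str
    test_cases: list[str]


@dataclass
class FakeTestSuiteConfig:
    """Stand-in for TestSuiteConfig: validates and yields the selected cases."""

    suite: FakeTestSuite
    test_cases_names: list[str] = field(default_factory=list)

    @property
    def selected_test_cases(self) -> list[str]:
        available = set(self.suite.test_cases)
        for requested in self.test_cases_names:
            assert requested in available, (
                f"{requested} is not a valid test case of test suite {self.suite.name}."
            )
        # An empty selection falls back to all available cases.
        return self.test_cases_names or list(self.suite.test_cases)


suite = FakeTestSuite("hello_world", ["test_single_core", "test_all_cores"])
assert FakeTestSuiteConfig(suite).selected_test_cases == ["test_single_core", "test_all_cores"]
assert FakeTestSuiteConfig(suite, ["test_all_cores"]).selected_test_cases == ["test_all_cores"]
```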
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
Reviewed-by: Nicholas Pratte <npratte@iol.unh.edu>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
---
dts/framework/config/__init__.py | 84 +++++++++++---------------------
dts/framework/config/test_run.py | 59 +++++++++++++++++-----
2 files changed, 76 insertions(+), 67 deletions(-)
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 79f8141ef6..273a5cc3a7 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -27,9 +27,8 @@
and makes it thread safe should we ever want to move in that direction.
"""
-from functools import cached_property
from pathlib import Path
-from typing import Annotated, Any, Literal, NamedTuple, TypeVar, cast
+from typing import Annotated, Any, Literal, TypeVar, cast
import yaml
from pydantic import Field, TypeAdapter, ValidationError, model_validator
@@ -45,18 +44,6 @@
)
from .test_run import TestRunConfiguration
-
-class TestRunWithNodesConfiguration(NamedTuple):
- """Tuple containing the configuration of the test run and its associated nodes."""
-
- #:
- test_run_config: TestRunConfiguration
- #:
- sut_node_config: SutNodeConfiguration
- #:
- tg_node_config: TGNodeConfiguration
-
-
TestRunsConfig = Annotated[list[TestRunConfiguration], Field(min_length=1)]
NodesConfig = Annotated[list[NodeConfigurationTypes], Field(min_length=1)]
@@ -70,40 +57,6 @@ class Configuration(FrozenModel):
#: Node configurations.
nodes: NodesConfig
- @cached_property
- def test_runs_with_nodes(self) -> list[TestRunWithNodesConfiguration]:
- """List of test runs with the associated nodes."""
- test_runs_with_nodes = []
-
- for test_run_no, test_run in enumerate(self.test_runs):
- sut_node_name = test_run.system_under_test_node
- sut_node = next(filter(lambda n: n.name == sut_node_name, self.nodes), None)
-
- assert sut_node is not None, (
- f"test_runs.{test_run_no}.sut_node_config.node_name "
- f"({test_run.system_under_test_node}) is not a valid node name"
- )
- assert isinstance(sut_node, SutNodeConfiguration), (
- f"test_runs.{test_run_no}.sut_node_config.node_name is a valid node name, "
- "but it is not a valid SUT node"
- )
-
- tg_node_name = test_run.traffic_generator_node
- tg_node = next(filter(lambda n: n.name == tg_node_name, self.nodes), None)
-
- assert tg_node is not None, (
- f"test_runs.{test_run_no}.tg_node_name "
- f"({test_run.traffic_generator_node}) is not a valid node name"
- )
- assert isinstance(tg_node, TGNodeConfiguration), (
- f"test_runs.{test_run_no}.tg_node_name is a valid node name, "
- "but it is not a valid TG node"
- )
-
- test_runs_with_nodes.append(TestRunWithNodesConfiguration(test_run, sut_node, tg_node))
-
- return test_runs_with_nodes
-
@model_validator(mode="after")
def validate_node_names(self) -> Self:
"""Validate that the node names are unique."""
@@ -162,14 +115,33 @@ def validate_port_links(self) -> Self:
return self
@model_validator(mode="after")
- def validate_test_runs_with_nodes(self) -> Self:
- """Validate the test runs to nodes associations.
-
- This validator relies on the cached property `test_runs_with_nodes` to run for the first
- time in this call, therefore triggering the assertions if needed.
- """
- if self.test_runs_with_nodes:
- pass
+ def validate_test_runs_against_nodes(self) -> Self:
+ """Validate the test runs to nodes associations."""
+ for test_run_no, test_run in enumerate(self.test_runs):
+ sut_node_name = test_run.system_under_test_node
+ sut_node = next((n for n in self.nodes if n.name == sut_node_name), None)
+
+ assert sut_node is not None, (
+ f"Test run {test_run_no}.system_under_test_node "
+ f"({sut_node_name}) is not a valid node name."
+ )
+ assert isinstance(sut_node, SutNodeConfiguration), (
+ f"Test run {test_run_no}.system_under_test_node is a valid node name, "
+ "but it is not a valid SUT node."
+ )
+
+ tg_node_name = test_run.traffic_generator_node
+ tg_node = next((n for n in self.nodes if n.name == tg_node_name), None)
+
+ assert tg_node is not None, (
+ f"Test run {test_run_no}.traffic_generator_node "
+ f"({tg_node_name}) is not a valid node name."
+ )
+ assert isinstance(tg_node, TGNodeConfiguration), (
+ f"Test run {test_run_no}.traffic_generator_node is a valid node name, "
+ "but it is not a valid TG node."
+ )
+
return self
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index 3d73fb31bb..eef01d0340 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -11,6 +11,8 @@
import re
import tarfile
+from collections import deque
+from collections.abc import Iterable
from enum import auto, unique
from functools import cached_property
from pathlib import Path, PurePath
@@ -25,7 +27,7 @@
from .common import FrozenModel, load_fields_from_settings
if TYPE_CHECKING:
- from framework.test_suite import TestSuiteSpec
+ from framework.test_suite import TestCase, TestSuite, TestSuiteSpec
@unique
@@ -231,6 +233,21 @@ def test_suite_spec(self) -> "TestSuiteSpec":
), f"{self.test_suite_name} is not a valid test suite module name."
return test_suite_spec
+ @cached_property
+ def test_cases(self) -> list[type["TestCase"]]:
+ """The objects of the selected test cases."""
+ available_test_cases = {t.name: t for t in self.test_suite_spec.class_obj.get_test_cases()}
+ selected_test_cases = []
+
+ for requested_test_case in self.test_cases_names:
+ assert requested_test_case in available_test_cases, (
+ f"{requested_test_case} is not a valid test case "
+ f"of test suite {self.test_suite_name}."
+ )
+ selected_test_cases.append(available_test_cases[requested_test_case])
+
+ return selected_test_cases or list(available_test_cases.values())
+
@model_validator(mode="before")
@classmethod
def convert_from_string(cls, data: Any) -> Any:
@@ -244,17 +261,11 @@ def convert_from_string(cls, data: Any) -> Any:
def validate_names(self) -> Self:
"""Validate the supplied test suite and test cases names.
- This validator relies on the cached property `test_suite_spec` to run for the first
- time in this call, therefore triggering the assertions if needed.
+ This validator relies on the cached properties `test_suite_spec` and `test_cases` to run for
+ the first time in this call, therefore triggering the assertions if needed.
"""
- available_test_cases = map(
- lambda t: t.name, self.test_suite_spec.class_obj.get_test_cases()
- )
- for requested_test_case in self.test_cases_names:
- assert requested_test_case in available_test_cases, (
- f"{requested_test_case} is not a valid test case "
- f"of test suite {self.test_suite_name}."
- )
+ if self.test_cases:
+ pass
return self
@@ -381,3 +392,29 @@ class TestRunConfiguration(FrozenModel):
fields_from_settings = model_validator(mode="before")(
load_fields_from_settings("test_suites", "random_seed")
)
+
+ def filter_tests(
+ self,
+ ) -> Iterable[tuple[type["TestSuite"], deque[type["TestCase"]]]]:
+ """Filter test suites and cases selected for execution."""
+ from framework.test_suite import TestCaseType
+
+ test_suites = [TestSuiteConfig(test_suite="smoke_tests")]
+
+ if self.skip_smoke_tests:
+ test_suites = self.test_suites
+ else:
+ test_suites += self.test_suites
+
+ return (
+ (
+ t.test_suite_spec.class_obj,
+ deque(
+ tt
+ for tt in t.test_cases
+ if (tt.test_type is TestCaseType.FUNCTIONAL and self.func)
+ or (tt.test_type is TestCaseType.PERFORMANCE and self.perf)
+ ),
+ )
+ for t in test_suites
+ )
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 3/7] dts: revamp Topology model
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 1/7] dts: add port topology configuration Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 2/7] dts: isolate test specification to config Luca Vizzarro
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 4/7] dts: improve Port model Luca Vizzarro
` (4 subsequent siblings)
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
Change the Topology model to add further flexibility in its usage as a
standalone entry point to test suites.
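For reviewers, here is a condensed, self-contained sketch of the resulting model (the framework's Port objects are replaced with plain strings, and ConfigurationError with ValueError, so it runs on its own):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Iterator, NamedTuple


class TopologyType(int, Enum):
    no_link = 0
    one_link = 1
    two_links = 2


class PortLink(NamedTuple):
    sut_port: str  # stand-in for the framework's Port
    tg_port: str


@dataclass(frozen=True)
class Topology:
    type: TopologyType
    sut_ports: list[str]
    tg_ports: list[str]

    @classmethod
    def from_port_links(cls, port_links: Iterator[PortLink]) -> "Topology":
        topology_type = TopologyType.no_link
        sut_ports: list[str] = []
        tg_ports: list[str] = []
        if link := next(port_links, None):
            topology_type = TopologyType.one_link
            sut_ports.append(link.sut_port)
            tg_ports.append(link.tg_port)
        if link := next(port_links, None):
            topology_type = TopologyType.two_links
            sut_ports.append(link.sut_port)
            tg_ports.append(link.tg_port)
        if next(port_links, None) is not None:
            raise ValueError("More than two links in a topology are not supported.")
        return cls(topology_type, sut_ports, tg_ports)

    @property
    def sut_port_egress(self) -> str:
        # With one link, ingress and egress collapse onto the same port.
        return self.sut_ports[1 if self.type is TopologyType.two_links else 0]


topo = Topology.from_port_links(iter([PortLink("sut-0", "tg-0"), PortLink("sut-1", "tg-1")]))
assert topo.type is TopologyType.two_links and topo.sut_port_egress == "sut-1"
```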
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
Reviewed-by: Nicholas Pratte <npratte@iol.unh.edu>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
---
dts/framework/testbed_model/topology.py | 85 +++++++++++++------------
1 file changed, 45 insertions(+), 40 deletions(-)
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 814c3f3fe4..cf5c2c28ba 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -9,10 +9,12 @@
"""
from collections.abc import Iterator
+from dataclasses import dataclass
from enum import Enum
from typing import NamedTuple
-from framework.config.node import PortConfig
+from typing_extensions import Self
+
from framework.exception import ConfigurationError
from .port import Port
@@ -43,35 +45,32 @@ class PortLink(NamedTuple):
tg_port: Port
+@dataclass(frozen=True)
class Topology:
"""Testbed topology.
The topology contains ports processed into ingress and egress ports.
- If there are no ports on a node, dummy ports (ports with no actual values) are stored.
- If there is only one link available, the ports of this link are stored
+ If there are no ports on a node, accesses to :attr:`~Topology.tg_port_egress` and the like will
+ raise an exception. If there is only one link available, the ports of this link are stored
as both ingress and egress ports.
- The dummy ports shouldn't be used. It's up to :class:`~framework.runner.DTSRunner`
- to ensure no test case or suite requiring actual links is executed
- when the topology prohibits it and up to the developers to make sure that test cases
- not requiring any links don't use any ports. Otherwise, the underlying methods
- using the ports will fail.
+ It's up to :class:`~framework.test_run.TestRun` to ensure no test case or suite requiring actual
+ links is executed when the topology prohibits it and up to the developers to make sure that test
+ cases not requiring any links don't use any ports. Otherwise, the underlying methods using the
+ ports will fail.
Attributes:
type: The type of the topology.
- tg_port_egress: The egress port of the TG node.
- sut_port_ingress: The ingress port of the SUT node.
- sut_port_egress: The egress port of the SUT node.
- tg_port_ingress: The ingress port of the TG node.
+ sut_ports: The SUT ports.
+ tg_ports: The TG ports.
"""
type: TopologyType
- tg_port_egress: Port
- sut_port_ingress: Port
- sut_port_egress: Port
- tg_port_ingress: Port
+ sut_ports: list[Port]
+ tg_ports: list[Port]
- def __init__(self, port_links: Iterator[PortLink]):
+ @classmethod
+ def from_port_links(cls, port_links: Iterator[PortLink]) -> Self:
"""Create the topology from `port_links`.
Args:
@@ -80,34 +79,40 @@ def __init__(self, port_links: Iterator[PortLink]):
Raises:
ConfigurationError: If an unsupported link topology is supplied.
"""
- dummy_port = Port(
- "",
- PortConfig(
- name="dummy",
- pci="0000:00:00.0",
- os_driver_for_dpdk="",
- os_driver="",
- ),
- )
-
- self.type = TopologyType.no_link
- self.tg_port_egress = dummy_port
- self.sut_port_ingress = dummy_port
- self.sut_port_egress = dummy_port
- self.tg_port_ingress = dummy_port
+ type = TopologyType.no_link
+ sut_ports: list[Port] = []
+ tg_ports: list[Port] = []
if port_link := next(port_links, None):
- self.type = TopologyType.one_link
- self.tg_port_egress = port_link.tg_port
- self.sut_port_ingress = port_link.sut_port
- self.sut_port_egress = self.sut_port_ingress
- self.tg_port_ingress = self.tg_port_egress
+ type = TopologyType.one_link
+ sut_ports = [port_link.sut_port]
+ tg_ports = [port_link.tg_port]
if port_link := next(port_links, None):
- self.type = TopologyType.two_links
- self.sut_port_egress = port_link.sut_port
- self.tg_port_ingress = port_link.tg_port
+ type = TopologyType.two_links
+ sut_ports.append(port_link.sut_port)
+ tg_ports.append(port_link.tg_port)
if next(port_links, None) is not None:
msg = "More than two links in a topology are not supported."
raise ConfigurationError(msg)
+
+ return cls(type, sut_ports, tg_ports)
+
+ @property
+ def tg_port_egress(self) -> Port:
+ """The egress port of the TG node."""
+ return self.tg_ports[0]
+
+ @property
+ def sut_port_ingress(self) -> Port:
+ """The ingress port of the SUT node."""
+ return self.sut_ports[0]
+
+ @property
+ def sut_port_egress(self) -> Port:
+ """The egress port of the SUT node."""
+ return self.sut_ports[1 if self.type is TopologyType.two_links else 0]
+
+ @property
+ def tg_port_ingress(self) -> Port:
+ """The ingress port of the TG node."""
+ return self.tg_ports[1 if self.type is TopologyType.two_links else 0]
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 4/7] dts: improve Port model
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
` (2 preceding siblings ...)
2025-02-12 16:45 ` [PATCH v2 3/7] dts: revamp Topology model Luca Vizzarro
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 5/7] dts: add global runtime context Luca Vizzarro
` (3 subsequent siblings)
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
Make Port models standalone, so that they can be directly manipulated by
the test suites as needed. Moreover, let them manage themselves and
retrieve their own logical name and MAC address autonomously.
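One way this self-sufficiency can look in isolation (a sketch with hypothetical stand-in classes; the real Port queries the node's OS session, e.g. lshw on Linux, and the exact wiring in the patch may differ):

```python
from dataclasses import dataclass, field
from functools import cached_property


class FakeSession:
    """Stand-in for the node's OS session (the real one parses lshw output)."""

    def get_port_info(self, pci_address: str) -> tuple[str, str]:
        # Returns (logical_name, mac_address); hard-coded here for illustration.
        return "eth0", "52:54:00:00:00:01"


@dataclass
class FakeNode:
    name: str
    main_session: FakeSession = field(default_factory=FakeSession)


class Port:
    """Sketch of a self-sufficient port: it asks its own node for details."""

    def __init__(self, node: FakeNode, pci: str) -> None:
        self._node = node
        self.pci = pci

    @cached_property
    def _port_info(self) -> tuple[str, str]:
        # Discovered once, on first access, instead of eagerly at node init.
        return self._node.main_session.get_port_info(self.pci)

    @property
    def logical_name(self) -> str:
        return self._port_info[0]

    @property
    def mac_address(self) -> str:
        return self._port_info[1]


port = Port(FakeNode("SUT 1"), "0000:00:08.0")
assert (port.logical_name, port.mac_address) == ("eth0", "52:54:00:00:00:01")
```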
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
---
dts/framework/testbed_model/linux_session.py | 47 +++++++++-------
dts/framework/testbed_model/node.py | 25 ++++-----
dts/framework/testbed_model/os_session.py | 16 +++---
dts/framework/testbed_model/port.py | 57 ++++++++++++--------
dts/framework/testbed_model/sut_node.py | 48 +++++++++--------
dts/framework/testbed_model/tg_node.py | 24 +++++++--
6 files changed, 127 insertions(+), 90 deletions(-)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 99c29b9b1e..7c2b110c99 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -10,6 +10,8 @@
"""
import json
+from collections.abc import Iterable
+from functools import cached_property
from typing import TypedDict
from typing_extensions import NotRequired
@@ -149,31 +151,40 @@ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: boo
self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
- def update_ports(self, ports: list[Port]) -> None:
- """Overrides :meth:`~.os_session.OSSession.update_ports`."""
- self._logger.debug("Gathering port info.")
- for port in ports:
- assert port.node == self.name, "Attempted to gather port info on the wrong node"
+ def get_port_info(self, pci_address: str) -> tuple[str, str]:
+ """Overrides :meth:`~.os_session.OSSession.get_port_info`.
- port_info_list = self._get_lshw_info()
- for port in ports:
- for port_info in port_info_list:
- if f"pci@{port.pci}" == port_info.get("businfo"):
- self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
- self._update_port_attr(port, port_info.get("serial"), "mac_address")
- port_info_list.remove(port_info)
- break
- else:
- self._logger.warning(f"No port at pci address {port.pci} found.")
-
- def bring_up_link(self, ports: list[Port]) -> None:
+ Raises:
+ ConfigurationError: If the port could not be found.
+ """
+ self._logger.debug(f"Gathering info for port {pci_address}.")
+
+ bus_info = f"pci@{pci_address}"
+ port = next((port for port in self._lshw_net_info if port.get("businfo") == bus_info), None)
+ if port is None:
+ raise ConfigurationError(f"Port {pci_address} could not be found on the node.")
+
+ logical_name = port.get("logicalname") or ""
+ if not logical_name:
+ self._logger.warning(f"Port {pci_address} does not have a valid logical name.")
+ # raise ConfigurationError(f"Port {pci_address} does not have a valid logical name.")
+
+ mac_address = port.get("serial") or ""
+ if not mac_address:
+ self._logger.warning(f"Port {pci_address} does not have a valid mac address.")
+ # raise ConfigurationError(f"Port {pci_address} does not have a valid mac address.")
+
+ return logical_name, mac_address
+
+ def bring_up_link(self, ports: Iterable[Port]) -> None:
"""Overrides :meth:`~.os_session.OSSession.bring_up_link`."""
for port in ports:
self.send_command(
f"ip link set dev {port.logical_name} up", privileged=True, verify=True
)
- def _get_lshw_info(self) -> list[LshwOutput]:
+ @cached_property
+ def _lshw_net_info(self) -> list[LshwOutput]:
output = self.send_command("lshw -quiet -json -C network", verify=True)
return json.loads(output.stdout)
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 0acd746073..1a4c825ed2 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -14,16 +14,14 @@
"""
from abc import ABC
+from collections.abc import Iterable
from functools import cached_property
from framework.config.node import (
OS,
NodeConfiguration,
)
-from framework.config.test_run import (
- DPDKBuildConfiguration,
- TestRunConfiguration,
-)
+from framework.config.test_run import TestRunConfiguration
from framework.exception import ConfigurationError
from framework.logger import DTSLogger, get_dts_logger
@@ -81,22 +79,14 @@ def __init__(self, node_config: NodeConfiguration):
self._logger.info(f"Connected to node: {self.name}")
self._get_remote_cpus()
self._other_sessions = []
- self._init_ports()
-
- def _init_ports(self) -> None:
- self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
- self.main_session.update_ports(self.ports)
+ self.ports = [Port(self, port_config) for port_config in self.config.ports]
@cached_property
def ports_by_name(self) -> dict[str, Port]:
"""Ports mapped by the name assigned at configuration."""
return {port.name: port for port in self.ports}
- def set_up_test_run(
- self,
- test_run_config: TestRunConfiguration,
- dpdk_build_config: DPDKBuildConfiguration,
- ) -> None:
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
"""Test run setup steps.
Configure hugepages on all DTS node types. Additional steps can be added by
@@ -105,15 +95,18 @@ def set_up_test_run(
Args:
test_run_config: A test run configuration according to which
the setup steps will be taken.
- dpdk_build_config: The build configuration of DPDK.
+ ports: The ports to set up for the test run.
"""
self._setup_hugepages()
- def tear_down_test_run(self) -> None:
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
"""Test run teardown steps.
There are currently no common execution teardown steps common to all DTS node types.
Additional steps can be added by extending the method in subclasses with the use of super().
+
+ Args:
+ ports: The ports to tear down for the test run.
"""
def create_session(self, name: str) -> OSSession:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index f3789fcf75..3c7b2a4f47 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -516,20 +516,18 @@ def get_arch_info(self) -> str:
"""
@abstractmethod
- def update_ports(self, ports: list[Port]) -> None:
- """Get additional information about ports from the operating system and update them.
+ def get_port_info(self, pci_address: str) -> tuple[str, str]:
+ """Get port information.
- The additional information is:
-
- * Logical name (e.g. ``enp7s0``) if applicable,
- * Mac address.
+ Returns:
+ A tuple containing the logical name and MAC address respectively.
- Args:
- ports: The ports to update.
+ Raises:
+ ConfigurationError: If the port could not be found.
"""
@abstractmethod
- def bring_up_link(self, ports: list[Port]) -> None:
+ def bring_up_link(self, ports: Iterable[Port]) -> None:
"""Send operating system specific command for bringing up link on node interfaces.
Args:
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 8014d4a100..f638120eeb 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -9,45 +9,42 @@
drivers and address.
"""
-from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Final
from framework.config.node import PortConfig
+if TYPE_CHECKING:
+ from .node import Node
+
-@dataclass(slots=True)
class Port:
"""Physical port on a node.
- The ports are identified by the node they're on and their PCI addresses. The port on the other
- side of the connection is also captured here.
- Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
- and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
-
Attributes:
+ node: The port's node.
config: The port's configuration.
mac_address: The MAC address of the port.
- logical_name: The logical name of the port. Must be discovered.
+ logical_name: The logical name of the port.
+ bound_for_dpdk: :data:`True` if the port is bound to the driver for DPDK.
"""
- _node: str
- config: PortConfig
- mac_address: str = ""
- logical_name: str = ""
+ node: Final["Node"]
+ config: Final[PortConfig]
+ mac_address: Final[str]
+ logical_name: Final[str]
+ bound_for_dpdk: bool
- def __init__(self, node_name: str, config: PortConfig):
- """Initialize the port from `node_name` and `config`.
+ def __init__(self, node: "Node", config: PortConfig):
+ """Initialize the port from `node` and `config`.
Args:
- node_name: The name of the port's node.
+ node: The port's node.
config: The test run configuration of the port.
"""
- self._node = node_name
+ self.node = node
self.config = config
-
- @property
- def node(self) -> str:
- """The node where the port resides."""
- return self._node
+ self.logical_name, self.mac_address = node.main_session.get_port_info(config.pci)
+ self.bound_for_dpdk = False
@property
def name(self) -> str:
@@ -58,3 +55,21 @@ def name(self) -> str:
def pci(self) -> str:
"""The PCI address of the port."""
return self.config.pci
+
+ def configure_mtu(self, mtu: int):
+ """Configure the port's MTU value.
+
+ Args:
+ mtu: Desired MTU value.
+ """
+ return self.node.main_session.configure_port_mtu(mtu, self)
+
+ def to_dict(self) -> dict[str, Any]:
+ """Convert to a dictionary."""
+ return {
+ "node_name": self.node.name,
+ "name": self.name,
+ "pci": self.pci,
+ "mac_address": self.mac_address,
+ "logical_name": self.logical_name,
+ }
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 440b5a059b..9007d89b1c 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -13,6 +13,7 @@
import os
import time
+from collections.abc import Iterable
from dataclasses import dataclass
from pathlib import Path, PurePath
@@ -33,6 +34,7 @@
from framework.exception import ConfigurationError, RemoteFileNotFoundError
from framework.params.eal import EalParams
from framework.remote_session.remote_session import CommandResult
+from framework.testbed_model.port import Port
from framework.utils import MesonArgs, TarCompressionFormat
from .cpu import LogicalCore, LogicalCoreList
@@ -86,7 +88,6 @@ class SutNode(Node):
_node_info: OSSessionInfo | None
_compiler_version: str | None
_path_to_devbind_script: PurePath | None
- _ports_bound_to_dpdk: bool
def __init__(self, node_config: SutNodeConfiguration):
"""Extend the constructor with SUT node specifics.
@@ -196,11 +197,7 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
"""
return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
- def set_up_test_run(
- self,
- test_run_config: TestRunConfiguration,
- dpdk_build_config: DPDKBuildConfiguration,
- ) -> None:
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
"""Extend the test run setup with vdev config and DPDK build set up.
This method extends the setup process by configuring virtual devices and preparing the DPDK
@@ -209,22 +206,25 @@ def set_up_test_run(
Args:
test_run_config: A test run configuration according to which
the setup steps will be taken.
- dpdk_build_config: The build configuration of DPDK.
+ ports: The ports to set up for the test run.
"""
- super().set_up_test_run(test_run_config, dpdk_build_config)
+ super().set_up_test_run(test_run_config, ports)
for vdev in test_run_config.vdevs:
self.virtual_devices.append(VirtualDevice(vdev))
- self._set_up_dpdk(dpdk_build_config)
+ self._set_up_dpdk(test_run_config.dpdk_config, ports)
+
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
+ """Extend the test run teardown with virtual device teardown and DPDK teardown.
- def tear_down_test_run(self) -> None:
- """Extend the test run teardown with virtual device teardown and DPDK teardown."""
- super().tear_down_test_run()
+ Args:
+ ports: The ports to tear down for the test run.
+ """
+ super().tear_down_test_run(ports)
self.virtual_devices = []
- self._tear_down_dpdk()
+ self._tear_down_dpdk(ports)
def _set_up_dpdk(
- self,
- dpdk_build_config: DPDKBuildConfiguration,
+ self, dpdk_build_config: DPDKBuildConfiguration, ports: Iterable[Port]
) -> None:
"""Set up DPDK the SUT node and bind ports.
@@ -234,6 +234,7 @@ def _set_up_dpdk(
Args:
dpdk_build_config: A DPDK build configuration to test.
+ ports: The ports to use for DPDK.
"""
match dpdk_build_config.dpdk_location:
case RemoteDPDKTreeLocation(dpdk_tree=dpdk_tree):
@@ -254,16 +255,16 @@ def _set_up_dpdk(
self._configure_dpdk_build(build_options)
self._build_dpdk()
- self.bind_ports_to_driver()
+ self.bind_ports_to_driver(ports)
- def _tear_down_dpdk(self) -> None:
+ def _tear_down_dpdk(self, ports: Iterable[Port]) -> None:
"""Reset DPDK variables and bind port driver to the OS driver."""
self._env_vars = {}
self.__remote_dpdk_tree_path = None
self._remote_dpdk_build_dir = None
self._dpdk_version = None
self.compiler_version = None
- self.bind_ports_to_driver(for_dpdk=False)
+ self.bind_ports_to_driver(ports, for_dpdk=False)
def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
"""Set the path to the remote DPDK source tree based on the provided DPDK location.
@@ -504,21 +505,22 @@ def run_dpdk_app(
f"{app_path} {eal_params}", timeout, privileged=True, verify=True
)
- def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
+ def bind_ports_to_driver(self, ports: Iterable[Port], for_dpdk: bool = True) -> None:
"""Bind all ports on the SUT to a driver.
Args:
+ ports: The ports to act on.
for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
If :data:`False`, binds to os_driver.
"""
- if self._ports_bound_to_dpdk == for_dpdk:
- return
+ for port in ports:
+ if port.bound_for_dpdk == for_dpdk:
+ continue
- for port in self.ports:
driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
self.main_session.send_command(
f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
privileged=True,
verify=True,
)
- self._ports_bound_to_dpdk = for_dpdk
+ port.bound_for_dpdk = for_dpdk
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 8ab9ccb438..595836a664 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -9,9 +9,12 @@
A TG node is where the TG runs.
"""
+from collections.abc import Iterable
+
from scapy.packet import Packet
from framework.config.node import TGNodeConfiguration
+from framework.config.test_run import TestRunConfiguration
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
PacketFilteringConfig,
)
@@ -51,9 +54,24 @@ def __init__(self, node_config: TGNodeConfiguration):
self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
self._logger.info(f"Created node: {self.name}")
- def _init_ports(self) -> None:
- super()._init_ports()
- self.main_session.bring_up_link(self.ports)
+ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
+ """Extend the test run setup with the setup of the traffic generator.
+
+ Args:
+ test_run_config: A test run configuration according to which
+ the setup steps will be taken.
+ ports: The ports to set up for the test run.
+ """
+ super().set_up_test_run(test_run_config, ports)
+ self.main_session.bring_up_link(ports)
+
+ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
+ """Extend the test run teardown with the teardown of the traffic generator.
+
+ Args:
+ ports: The ports to tear down for the test run.
+ """
+ super().tear_down_test_run(ports)
def send_packets_and_capture(
self,
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 5/7] dts: add global runtime context
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
` (3 preceding siblings ...)
2025-02-12 16:45 ` [PATCH v2 4/7] dts: improve Port model Luca Vizzarro
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 19:45 ` Nicholas Pratte
2025-02-12 16:45 ` [PATCH v2 6/7] dts: revamp runtime internals Luca Vizzarro
` (2 subsequent siblings)
7 siblings, 1 reply; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
Add a new context module which holds the runtime context. The new
context describes the current scenario and aids the underlying classes
used by the test suites to automatically infer their parameters. This
further simplifies the test writing process, as the test writer no
longer needs to be concerned with the nodes and can directly use the
context-aware tools, e.g. TestPmdShell, as needed.
Bugzilla ID: 1461
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
---
doc/api/dts/framework.context.rst | 8 ++
doc/api/dts/index.rst | 1 +
dts/framework/context.py | 117 ++++++++++++++++++
dts/framework/remote_session/dpdk_shell.py | 53 +++-----
.../single_active_interactive_shell.py | 14 +--
dts/framework/remote_session/testpmd_shell.py | 27 ++--
dts/framework/test_suite.py | 73 +++++------
dts/tests/TestSuite_blocklist.py | 6 +-
dts/tests/TestSuite_checksum_offload.py | 14 +--
dts/tests/TestSuite_dual_vlan.py | 6 +-
dts/tests/TestSuite_dynamic_config.py | 8 +-
dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
dts/tests/TestSuite_hello_world.py | 2 +-
dts/tests/TestSuite_l2fwd.py | 9 +-
dts/tests/TestSuite_mac_filter.py | 10 +-
dts/tests/TestSuite_mtu.py | 17 +--
dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
...stSuite_port_restart_config_persistency.py | 8 +-
dts/tests/TestSuite_promisc_support.py | 8 +-
dts/tests/TestSuite_smoke_tests.py | 3 +-
dts/tests/TestSuite_softnic.py | 4 +-
dts/tests/TestSuite_uni_pkt.py | 14 +--
dts/tests/TestSuite_vlan.py | 8 +-
23 files changed, 247 insertions(+), 173 deletions(-)
create mode 100644 doc/api/dts/framework.context.rst
create mode 100644 dts/framework/context.py
diff --git a/doc/api/dts/framework.context.rst b/doc/api/dts/framework.context.rst
new file mode 100644
index 0000000000..a8c8b5022e
--- /dev/null
+++ b/doc/api/dts/framework.context.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+context - DTS execution context
+===============================
+
+.. automodule:: framework.context
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index 534512dc17..90092014d2 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -29,6 +29,7 @@ Modules
framework.test_suite
framework.test_result
framework.settings
+ framework.context
framework.logger
framework.parser
framework.utils
diff --git a/dts/framework/context.py b/dts/framework/context.py
new file mode 100644
index 0000000000..03eaf63b88
--- /dev/null
+++ b/dts/framework/context.py
@@ -0,0 +1,117 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+"""Runtime contexts."""
+
+import functools
+from dataclasses import MISSING, dataclass, field, fields
+from typing import TYPE_CHECKING, ParamSpec
+
+from framework.exception import InternalError
+from framework.settings import SETTINGS
+from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.topology import Topology
+
+if TYPE_CHECKING:
+ from framework.testbed_model.sut_node import SutNode
+ from framework.testbed_model.tg_node import TGNode
+
+P = ParamSpec("P")
+
+
+@dataclass
+class LocalContext:
+ """Updatable context local to test suites and cases.
+
+ Attributes:
+ lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
+ use. The default will select one lcore for each of two cores on one socket, in ascending
+ order of core ids.
+ ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+ sort in descending order.
+ append_prefix_timestamp: If :data:`True`, a timestamp will be appended to the DPDK file prefix.
+ timeout: The timeout used for the SSH channel that is dedicated to this interactive
+ shell. This timeout is for collecting output, so if reading from the buffer
+ and no output is gathered within the timeout, an exception is thrown.
+ """
+
+ lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = field(
+ default_factory=LogicalCoreCount
+ )
+ ascending_cores: bool = True
+ append_prefix_timestamp: bool = True
+ timeout: float = SETTINGS.timeout
+
+ def reset(self) -> None:
+ """Reset the local context to the default values."""
+ for _field in fields(LocalContext):
+ default = (
+ _field.default_factory()
+ if _field.default_factory is not MISSING
+ else _field.default
+ )
+
+ assert (
+ default is not MISSING
+ ), "{LocalContext.__name__} must have defaults on all fields!"
+
+ setattr(self, _field.name, default)
+
+
+@dataclass(frozen=True)
+class Context:
+ """Runtime context."""
+
+ sut_node: "SutNode"
+ tg_node: "TGNode"
+ topology: Topology
+ local: LocalContext = field(default_factory=LocalContext)
+
+
+__current_ctx: Context | None = None
+
+
+def get_ctx() -> Context:
+ """Retrieve the current runtime context.
+
+ Raises:
+ InternalError: If there is no context.
+ """
+ if __current_ctx:
+ return __current_ctx
+
+ raise InternalError("Attempted to retrieve context that has not been initialized yet.")
+
+
+def init_ctx(ctx: Context) -> None:
+ """Initialize context."""
+ global __current_ctx
+ __current_ctx = ctx
+
+
+def filter_cores(
+ specifier: LogicalCoreCount | LogicalCoreList, ascending_cores: bool | None = None
+):
+ """Decorates functions that require a temporary update to the lcore specifier."""
+
+ def decorator(func):
+ @functools.wraps(func)
+ def wrapper(*args: P.args, **kwargs: P.kwargs):
+ local_ctx = get_ctx().local
+
+ old_specifier = local_ctx.lcore_filter_specifier
+ local_ctx.lcore_filter_specifier = specifier
+ if ascending_cores is not None:
+ old_ascending_cores = local_ctx.ascending_cores
+ local_ctx.ascending_cores = ascending_cores
+
+ try:
+ return func(*args, **kwargs)
+ finally:
+ local_ctx.lcore_filter_specifier = old_specifier
+ if ascending_cores is not None:
+ local_ctx.ascending_cores = old_ascending_cores
+
+ return wrapper
+
+ return decorator
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index c11d9ab81c..b55deb7fa0 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -9,54 +9,45 @@
from abc import ABC
from pathlib import PurePath
+from framework.context import get_ctx
from framework.params.eal import EalParams
from framework.remote_session.single_active_interactive_shell import (
SingleActiveInteractiveShell,
)
-from framework.settings import SETTINGS
-from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.cpu import LogicalCoreList
from framework.testbed_model.sut_node import SutNode
def compute_eal_params(
- sut_node: SutNode,
params: EalParams | None = None,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
) -> EalParams:
"""Compute EAL parameters based on the node's specifications.
Args:
- sut_node: The SUT node to compute the values for.
params: If :data:`None`, a new object is created and returned. Otherwise `params.lcore_list`
is modified according to `lcore_filter_specifier`. A DPDK file prefix is also added. If
`params.ports` is :data:`None`, then `sut_node.ports` is assigned to it.
- lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
- use. The default will select one lcore for each of two cores on one socket, in ascending
- order of core ids.
- ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
- sort in descending order.
- append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
"""
+ ctx = get_ctx()
+
if params is None:
params = EalParams()
if params.lcore_list is None:
params.lcore_list = LogicalCoreList(
- sut_node.filter_lcores(lcore_filter_specifier, ascending_cores)
+ ctx.sut_node.filter_lcores(ctx.local.lcore_filter_specifier, ctx.local.ascending_cores)
)
prefix = params.prefix
- if append_prefix_timestamp:
- prefix = f"{prefix}_{sut_node.dpdk_timestamp}"
- prefix = sut_node.main_session.get_dpdk_file_prefix(prefix)
+ if ctx.local.append_prefix_timestamp:
+ prefix = f"{prefix}_{ctx.sut_node.dpdk_timestamp}"
+ prefix = ctx.sut_node.main_session.get_dpdk_file_prefix(prefix)
if prefix:
- sut_node.dpdk_prefix_list.append(prefix)
+ ctx.sut_node.dpdk_prefix_list.append(prefix)
params.prefix = prefix
if params.allowed_ports is None:
- params.allowed_ports = sut_node.ports
+ params.allowed_ports = ctx.topology.sut_ports
return params
@@ -74,29 +65,15 @@ class DPDKShell(SingleActiveInteractiveShell, ABC):
def __init__(
self,
- node: SutNode,
+ name: str | None = None,
privileged: bool = True,
- timeout: float = SETTINGS.timeout,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
app_params: EalParams = EalParams(),
- name: str | None = None,
) -> None:
- """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`.
-
- Adds the `lcore_filter_specifier`, `ascending_cores` and `append_prefix_timestamp` arguments
- which are then used to compute the EAL parameters based on the node's configuration.
- """
- app_params = compute_eal_params(
- node,
- app_params,
- lcore_filter_specifier,
- ascending_cores,
- append_prefix_timestamp,
- )
+ """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`."""
+ app_params = compute_eal_params(app_params)
+ node = get_ctx().sut_node
- super().__init__(node, privileged, timeout, app_params, name)
+ super().__init__(node, name, privileged, app_params)
def _update_real_path(self, path: PurePath) -> None:
"""Extends :meth:`~.interactive_shell.InteractiveShell._update_real_path`.
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index cfe5baec14..2eec2f698a 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -27,6 +27,7 @@
from paramiko import Channel, channel
from typing_extensions import Self
+from framework.context import get_ctx
from framework.exception import (
InteractiveCommandExecutionError,
InteractiveSSHSessionDeadError,
@@ -34,7 +35,6 @@
)
from framework.logger import DTSLogger, get_dts_logger
from framework.params import Params
-from framework.settings import SETTINGS
from framework.testbed_model.node import Node
from framework.utils import MultiInheritanceBaseClass
@@ -90,10 +90,9 @@ class SingleActiveInteractiveShell(MultiInheritanceBaseClass, ABC):
def __init__(
self,
node: Node,
+ name: str | None = None,
privileged: bool = False,
- timeout: float = SETTINGS.timeout,
app_params: Params = Params(),
- name: str | None = None,
**kwargs,
) -> None:
"""Create an SSH channel during initialization.
@@ -103,13 +102,10 @@ def __init__(
Args:
node: The node on which to run start the interactive shell.
- privileged: Enables the shell to run as superuser.
- timeout: The timeout used for the SSH channel that is dedicated to this interactive
- shell. This timeout is for collecting output, so if reading from the buffer
- and no output is gathered within the timeout, an exception is thrown.
- app_params: The command line parameters to be passed to the application on startup.
name: Name for the interactive shell to use for logging. This name will be appended to
the name of the underlying node which it is running on.
+ privileged: Enables the shell to run as superuser.
+ app_params: The command line parameters to be passed to the application on startup.
**kwargs: Any additional arguments if any.
"""
self._node = node
@@ -118,7 +114,7 @@ def __init__(
self._logger = get_dts_logger(f"{node.name}.{name}")
self._app_params = app_params
self._privileged = privileged
- self._timeout = timeout
+ self._timeout = get_ctx().local.timeout
# Ensure path is properly formatted for the host
self._update_real_path(self.path)
super().__init__(**kwargs)
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 287886ec44..672ecd970f 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -32,6 +32,9 @@
TypeAlias,
)
+from framework.context import get_ctx
+from framework.testbed_model.topology import TopologyType
+
if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
from enum import Enum as NoAliasEnum
else:
@@ -40,13 +43,11 @@
from typing_extensions import Self, Unpack
from framework.exception import InteractiveCommandExecutionError, InternalError
-from framework.params.testpmd import SimpleForwardingModes, TestPmdParams
+from framework.params.testpmd import PortTopology, SimpleForwardingModes, TestPmdParams
from framework.params.types import TestPmdParamsDict
from framework.parser import ParserFn, TextParser
from framework.remote_session.dpdk_shell import DPDKShell
from framework.settings import SETTINGS
-from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
-from framework.testbed_model.sut_node import SutNode
from framework.utils import REGEX_FOR_MAC_ADDRESS, StrEnum
P = ParamSpec("P")
@@ -1532,26 +1533,14 @@ class TestPmdShell(DPDKShell):
def __init__(
self,
- node: SutNode,
- privileged: bool = True,
- timeout: float = SETTINGS.timeout,
- lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
- ascending_cores: bool = True,
- append_prefix_timestamp: bool = True,
name: str | None = None,
+ privileged: bool = True,
**app_params: Unpack[TestPmdParamsDict],
) -> None:
"""Overrides :meth:`~.dpdk_shell.DPDKShell.__init__`. Changes app_params to kwargs."""
- super().__init__(
- node,
- privileged,
- timeout,
- lcore_filter_specifier,
- ascending_cores,
- append_prefix_timestamp,
- TestPmdParams(**app_params),
- name,
- )
+ if "port_topology" not in app_params and get_ctx().topology.type is TopologyType.one_link:
+ app_params["port_topology"] = PortTopology.loop
+ super().__init__(name, privileged, TestPmdParams(**app_params))
self.ports_started = not self._app_params.disable_device_start
self._ports = None
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3d168d522b..b9b527e40d 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -24,7 +24,7 @@
from ipaddress import IPv4Interface, IPv6Interface, ip_interface
from pkgutil import iter_modules
from types import ModuleType
-from typing import ClassVar, Protocol, TypeVar, Union, cast
+from typing import TYPE_CHECKING, ClassVar, Protocol, TypeVar, Union, cast
from scapy.layers.inet import IP
from scapy.layers.l2 import Ether
@@ -32,9 +32,6 @@
from typing_extensions import Self
from framework.testbed_model.capability import TestProtocol
-from framework.testbed_model.port import Port
-from framework.testbed_model.sut_node import SutNode
-from framework.testbed_model.tg_node import TGNode
from framework.testbed_model.topology import Topology
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
PacketFilteringConfig,
@@ -44,6 +41,9 @@
from .logger import DTSLogger, get_dts_logger
from .utils import get_packet_summaries, to_pascal_case
+if TYPE_CHECKING:
+ from framework.context import Context
+
class TestSuite(TestProtocol):
"""The base class with building blocks needed by most test cases.
@@ -69,33 +69,19 @@ class TestSuite(TestProtocol):
The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
properly choose the IP addresses and other configuration that must be tailored to the testbed.
-
- Attributes:
- sut_node: The SUT node where the test suite is running.
- tg_node: The TG node where the test suite is running.
"""
- sut_node: SutNode
- tg_node: TGNode
#: Whether the test suite is blocking. A failure of a blocking test suite
#: will block the execution of all subsequent test suites in the current test run.
is_blocking: ClassVar[bool] = False
+ _ctx: "Context"
_logger: DTSLogger
- _sut_port_ingress: Port
- _sut_port_egress: Port
_sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
_sut_ip_address_egress: Union[IPv4Interface, IPv6Interface]
- _tg_port_ingress: Port
- _tg_port_egress: Port
_tg_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
_tg_ip_address_egress: Union[IPv4Interface, IPv6Interface]
- def __init__(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- ):
+ def __init__(self):
"""Initialize the test suite testbed information and basic configuration.
Find links between ports and set up default IP addresses to be used when
@@ -106,18 +92,25 @@ def __init__(
tg_node: The TG node where the test suite will run.
topology: The topology where the test suite will run.
"""
- self.sut_node = sut_node
- self.tg_node = tg_node
+ from framework.context import get_ctx
+
+ self._ctx = get_ctx()
self._logger = get_dts_logger(self.__class__.__name__)
- self._tg_port_egress = topology.tg_port_egress
- self._sut_port_ingress = topology.sut_port_ingress
- self._sut_port_egress = topology.sut_port_egress
- self._tg_port_ingress = topology.tg_port_ingress
self._sut_ip_address_ingress = ip_interface("192.168.100.2/24")
self._sut_ip_address_egress = ip_interface("192.168.101.2/24")
self._tg_ip_address_egress = ip_interface("192.168.100.3/24")
self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
+ @property
+ def name(self) -> str:
+ """The name of the test suite class."""
+ return type(self).__name__
+
+ @property
+ def topology(self) -> Topology:
+ """The current topology in use."""
+ return self._ctx.topology
+
@classmethod
def get_test_cases(cls) -> list[type["TestCase"]]:
"""A list of all the available test cases."""
@@ -254,10 +247,10 @@ def send_packets_and_capture(
A list of received packets.
"""
packets = self._adjust_addresses(packets)
- return self.tg_node.send_packets_and_capture(
+ return self._ctx.tg_node.send_packets_and_capture(
packets,
- self._tg_port_egress,
- self._tg_port_ingress,
+ self._ctx.topology.tg_port_egress,
+ self._ctx.topology.tg_port_ingress,
filter_config,
duration,
)
@@ -272,7 +265,7 @@ def send_packets(
packets: Packets to send.
"""
packets = self._adjust_addresses(packets)
- self.tg_node.send_packets(packets, self._tg_port_egress)
+ self._ctx.tg_node.send_packets(packets, self._ctx.topology.tg_port_egress)
def get_expected_packets(
self,
@@ -352,15 +345,15 @@ def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> li
# only be the Ether src/dst.
if "src" not in packet.fields:
packet.src = (
- self._sut_port_egress.mac_address
+ self.topology.sut_port_egress.mac_address
if expected
- else self._tg_port_egress.mac_address
+ else self.topology.tg_port_egress.mac_address
)
if "dst" not in packet.fields:
packet.dst = (
- self._tg_port_ingress.mac_address
+ self.topology.tg_port_ingress.mac_address
if expected
- else self._sut_port_ingress.mac_address
+ else self.topology.sut_port_ingress.mac_address
)
# update l3 addresses
@@ -400,10 +393,10 @@ def verify(self, condition: bool, failure_description: str) -> None:
def _fail_test_case_verify(self, failure_description: str) -> None:
self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
- for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ for command_res in self._ctx.sut_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
- for command_res in self.tg_node.main_session.remote_session.history[-10:]:
+ for command_res in self._ctx.tg_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
raise TestCaseVerifyError(failure_description)
@@ -517,14 +510,14 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
self._logger.debug("Looking at the Ether layer.")
self._logger.debug(
f"Comparing received dst mac '{received_packet.dst}' "
- f"with expected '{self._tg_port_ingress.mac_address}'."
+ f"with expected '{self.topology.tg_port_ingress.mac_address}'."
)
- if received_packet.dst != self._tg_port_ingress.mac_address:
+ if received_packet.dst != self.topology.tg_port_ingress.mac_address:
return False
- expected_src_mac = self._tg_port_egress.mac_address
+ expected_src_mac = self.topology.tg_port_egress.mac_address
if l3:
- expected_src_mac = self._sut_port_egress.mac_address
+ expected_src_mac = self.topology.sut_port_egress.mac_address
self._logger.debug(
f"Comparing received src mac '{received_packet.src}' "
f"with expected '{expected_src_mac}'."
diff --git a/dts/tests/TestSuite_blocklist.py b/dts/tests/TestSuite_blocklist.py
index b9e9cd1d1a..ce7da1cc8f 100644
--- a/dts/tests/TestSuite_blocklist.py
+++ b/dts/tests/TestSuite_blocklist.py
@@ -18,7 +18,7 @@ class TestBlocklist(TestSuite):
def verify_blocklisted_ports(self, ports_to_block: list[Port]):
"""Runs testpmd with the given ports blocklisted and verifies the ports."""
- with TestPmdShell(self.sut_node, allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
+ with TestPmdShell(allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
allowlisted_ports = {port.device_name for port in testpmd.show_port_info_all()}
blocklisted_ports = {port.pci for port in ports_to_block}
@@ -49,7 +49,7 @@ def one_port_blocklisted(self):
Verify:
That the port was successfully blocklisted.
"""
- self.verify_blocklisted_ports(self.sut_node.ports[:1])
+ self.verify_blocklisted_ports(self.topology.sut_ports[:1])
@func_test
def all_but_one_port_blocklisted(self):
@@ -60,4 +60,4 @@ def all_but_one_port_blocklisted(self):
Verify:
That all specified ports were successfully blocklisted.
"""
- self.verify_blocklisted_ports(self.sut_node.ports[:-1])
+ self.verify_blocklisted_ports(self.topology.sut_ports[:-1])
diff --git a/dts/tests/TestSuite_checksum_offload.py b/dts/tests/TestSuite_checksum_offload.py
index a8bb6a71f7..b38d73421b 100644
--- a/dts/tests/TestSuite_checksum_offload.py
+++ b/dts/tests/TestSuite_checksum_offload.py
@@ -128,7 +128,7 @@ def test_insert_checksums(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -160,7 +160,7 @@ def test_no_insert_checksums(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
testpmd.start()
@@ -190,7 +190,7 @@ def test_l4_rx_checksum(self) -> None:
Ether(dst=mac_id) / IP() / UDP(chksum=0xF),
Ether(dst=mac_id) / IP() / TCP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -223,7 +223,7 @@ def test_l3_rx_checksum(self) -> None:
Ether(dst=mac_id) / IP(chksum=0xF) / UDP(),
Ether(dst=mac_id) / IP(chksum=0xF) / TCP(),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -260,7 +260,7 @@ def test_validate_rx_checksum(self) -> None:
Ether(dst=mac_id) / IPv6(src="::1") / UDP(chksum=0xF),
Ether(dst=mac_id) / IPv6(src="::1") / TCP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -299,7 +299,7 @@ def test_vlan_checksum(self) -> None:
Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / UDP(chksum=0xF) / Raw(payload),
Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / TCP(chksum=0xF) / Raw(payload),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
self.setup_hw_offload(testpmd=testpmd)
@@ -333,7 +333,7 @@ def test_validate_sctp_checksum(self) -> None:
Ether(dst=mac_id) / IP() / SCTP(),
Ether(dst=mac_id) / IP() / SCTP(chksum=0xF),
]
- with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
+ with TestPmdShell(enable_rx_cksum=True) as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.csum)
testpmd.set_verbose(level=1)
testpmd.csum_set_hw(layers=ChecksumOffloadOptions.sctp)
diff --git a/dts/tests/TestSuite_dual_vlan.py b/dts/tests/TestSuite_dual_vlan.py
index bdbee7e8d1..6af503528d 100644
--- a/dts/tests/TestSuite_dual_vlan.py
+++ b/dts/tests/TestSuite_dual_vlan.py
@@ -193,7 +193,7 @@ def insert_second_vlan(self) -> None:
Packets are received.
Packet contains two VLAN tags.
"""
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.tx_vlan_set(port=self.tx_port, enable=True, vlan=self.vlan_insert_tag)
testpmd.start()
recv = self.send_packet_and_capture(
@@ -229,7 +229,7 @@ def all_vlan_functions(self) -> None:
/ Dot1Q(vlan=self.inner_vlan_tag)
/ Raw(b"X" * 20)
)
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.start()
recv = self.send_packet_and_capture(send_pkt)
self.verify(len(recv) > 0, "Unmodified packet was not received.")
@@ -269,7 +269,7 @@ def maintains_priority(self) -> None:
/ Dot1Q(vlan=self.inner_vlan_tag, prio=2)
/ Raw(b"X" * 20)
)
- with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
+ with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
testpmd.start()
recv = self.send_packet_and_capture(pkt)
self.verify(len(recv) > 0, "Did not receive any packets when testing VLAN priority.")
diff --git a/dts/tests/TestSuite_dynamic_config.py b/dts/tests/TestSuite_dynamic_config.py
index 5a33f6f3c2..a4bee2e90b 100644
--- a/dts/tests/TestSuite_dynamic_config.py
+++ b/dts/tests/TestSuite_dynamic_config.py
@@ -88,7 +88,7 @@ def test_default_mode(self) -> None:
and sends two packets; one matching source MAC address and one unknown.
Verifies that both are received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
is_promisc = testpmd.show_port_info(0).is_promiscuous_mode_enabled
self.verify(is_promisc, "Promiscuous mode was not enabled by default.")
testpmd.start()
@@ -106,7 +106,7 @@ def test_disable_promisc(self) -> None:
and sends two packets; one matching source MAC address and one unknown.
Verifies that only the matching address packet is received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
mac = testpmd.show_port_info(0).mac_address
self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
@@ -120,7 +120,7 @@ def test_disable_promisc_broadcast(self) -> None:
and sends two packets; one matching source MAC address and one broadcast.
Verifies that both packets are received.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
mac = testpmd.show_port_info(0).mac_address
self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
@@ -134,7 +134,7 @@ def test_disable_promisc_multicast(self) -> None:
and sends two packets; one matching source MAC address and one multicast.
Verifies that the multicast packet is only received once allmulticast mode is enabled.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
testpmd.set_multicast_all(on=False)
# 01:00:5E:00:00:01 is the first of the multicast MAC range of addresses
diff --git a/dts/tests/TestSuite_dynamic_queue_conf.py b/dts/tests/TestSuite_dynamic_queue_conf.py
index e55716f545..344dd540eb 100644
--- a/dts/tests/TestSuite_dynamic_queue_conf.py
+++ b/dts/tests/TestSuite_dynamic_queue_conf.py
@@ -84,7 +84,6 @@ def wrap(self: "TestDynamicQueueConf", is_rx_testing: bool) -> None:
queues_to_config.add(random.randint(1, self.number_of_queues - 1))
unchanged_queues = set(range(self.number_of_queues)) - queues_to_config
with TestPmdShell(
- self.sut_node,
port_topology=PortTopology.chained,
rx_queues=self.number_of_queues,
tx_queues=self.number_of_queues,
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 031b94de4d..141f2bc4c9 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -23,6 +23,6 @@ def test_hello_world(self) -> None:
Verify:
The testpmd session throws no errors.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
self.log("Hello World!")
diff --git a/dts/tests/TestSuite_l2fwd.py b/dts/tests/TestSuite_l2fwd.py
index 0f6ff18907..0555d75ed8 100644
--- a/dts/tests/TestSuite_l2fwd.py
+++ b/dts/tests/TestSuite_l2fwd.py
@@ -7,6 +7,7 @@
The forwarding test is performed with several packets being sent at once.
"""
+from framework.context import filter_cores
from framework.params.testpmd import EthPeer, SimpleForwardingModes
from framework.remote_session.testpmd_shell import TestPmdShell
from framework.test_suite import TestSuite, func_test
@@ -33,6 +34,7 @@ def set_up_suite(self) -> None:
"""
self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
+ @filter_cores(LogicalCoreCount(cores_per_socket=4))
@func_test
def l2fwd_integrity(self) -> None:
"""Test the L2 forwarding integrity.
@@ -44,11 +46,12 @@ def l2fwd_integrity(self) -> None:
"""
queues = [1, 2, 4, 8]
+ self.topology.sut_ports[0]
+ self.topology.tg_ports[0]
+
with TestPmdShell(
- self.sut_node,
- lcore_filter_specifier=LogicalCoreCount(cores_per_socket=4),
forward_mode=SimpleForwardingModes.mac,
- eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
+ eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
disable_device_start=True,
) as shell:
for queues_num in queues:
diff --git a/dts/tests/TestSuite_mac_filter.py b/dts/tests/TestSuite_mac_filter.py
index 11e4b595c7..e6c55d3ec6 100644
--- a/dts/tests/TestSuite_mac_filter.py
+++ b/dts/tests/TestSuite_mac_filter.py
@@ -101,10 +101,10 @@ def test_add_remove_mac_addresses(self) -> None:
Remove the fake mac address from the PMD's address pool.
Send a packet with the fake mac address to the PMD. (Should not receive)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.set_promisc(0, enable=False)
testpmd.start()
- mac_address = self._sut_port_ingress.mac_address
+ mac_address = self.topology.sut_port_ingress.mac_address
# Send a packet with NIC default mac address
self.send_packet_and_verify(mac_address=mac_address, should_receive=True)
@@ -137,9 +137,9 @@ def test_invalid_address(self) -> None:
Determine the device's mac address pool size, and fill the pool with fake addresses.
Attempt to add another fake mac address, overloading the address pool. (Should fail)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
- mac_address = self._sut_port_ingress.mac_address
+ mac_address = self.topology.sut_port_ingress.mac_address
try:
testpmd.set_mac_addr(0, "00:00:00:00:00:00", add=True)
self.verify(False, "Invalid mac address added.")
@@ -191,7 +191,7 @@ def test_multicast_filter(self) -> None:
Remove the fake multicast address from the PMDs multicast address filter.
Send a packet with the fake multicast address to the PMD. (Should not receive)
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.start()
testpmd.set_promisc(0, enable=False)
multicast_address = "01:00:5E:00:00:00"
diff --git a/dts/tests/TestSuite_mtu.py b/dts/tests/TestSuite_mtu.py
index 4b59515bae..63e570ba03 100644
--- a/dts/tests/TestSuite_mtu.py
+++ b/dts/tests/TestSuite_mtu.py
@@ -51,8 +51,8 @@ def set_up_suite(self) -> None:
Set traffic generator MTU lengths to a size greater than scope of all
test cases.
"""
- self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(JUMBO_MTU + 200)
+ self.topology.tg_port_ingress.configure_mtu(JUMBO_MTU + 200)
def send_packet_and_verify(self, pkt_size: int, should_receive: bool) -> None:
"""Generate, send a packet, and assess its behavior based on a given packet size.
@@ -156,11 +156,7 @@ def test_runtime_mtu_updating_and_forwarding(self) -> None:
Verify that standard MTU packets forward, in addition to packets within the limits of
an MTU size set during runtime.
"""
- with TestPmdShell(
- self.sut_node,
- tx_offloads=0x8000,
- mbuf_size=[JUMBO_MTU + 200],
- ) as testpmd:
+ with TestPmdShell(tx_offloads=0x8000, mbuf_size=[JUMBO_MTU + 200]) as testpmd:
testpmd.set_port_mtu_all(1500, verify=True)
testpmd.start()
self.assess_mtu_boundary(testpmd, 1500)
@@ -201,7 +197,6 @@ def test_cli_mtu_forwarding_for_std_packets(self) -> None:
MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -230,7 +225,6 @@ def test_cli_jumbo_forwarding_for_jumbo_mtu(self) -> None:
Verify that all packets are forwarded after pre-runtime MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -259,7 +253,6 @@ def test_cli_mtu_std_packets_for_jumbo_mtu(self) -> None:
MTU modification.
"""
with TestPmdShell(
- self.sut_node,
tx_offloads=0x8000,
mbuf_size=[JUMBO_MTU + 200],
mbcache=200,
@@ -277,5 +270,5 @@ def tear_down_suite(self) -> None:
Teardown:
Set the MTU size of the traffic generator back to the standard 1518 byte size.
"""
- self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(STANDARD_MTU)
+ self.topology.tg_port_ingress.configure_mtu(STANDARD_MTU)
diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py
index a8c111eea7..5e23f28bc6 100644
--- a/dts/tests/TestSuite_pmd_buffer_scatter.py
+++ b/dts/tests/TestSuite_pmd_buffer_scatter.py
@@ -58,8 +58,8 @@ def set_up_suite(self) -> None:
Increase the MTU of both ports on the traffic generator to 9000
to support larger packet sizes.
"""
- self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(9000)
+ self.topology.tg_port_ingress.configure_mtu(9000)
def scatter_pktgen_send_packet(self, pkt_size: int) -> list[Packet]:
"""Generate and send a packet to the SUT then capture what is forwarded back.
@@ -110,7 +110,6 @@ def pmd_scatter(self, mb_size: int, enable_offload: bool = False) -> None:
Start testpmd and run functional test with preset `mb_size`.
"""
with TestPmdShell(
- self.sut_node,
forward_mode=SimpleForwardingModes.mac,
mbcache=200,
mbuf_size=[mb_size],
@@ -147,5 +146,5 @@ def tear_down_suite(self) -> None:
Teardown:
Set the MTU of the tg_node back to a more standard size of 1500.
"""
- self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_egress)
- self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_ingress)
+ self.topology.tg_port_egress.configure_mtu(1500)
+ self.topology.tg_port_ingress.configure_mtu(1500)
diff --git a/dts/tests/TestSuite_port_restart_config_persistency.py b/dts/tests/TestSuite_port_restart_config_persistency.py
index ad42c6c2e6..42ea221586 100644
--- a/dts/tests/TestSuite_port_restart_config_persistency.py
+++ b/dts/tests/TestSuite_port_restart_config_persistency.py
@@ -61,8 +61,8 @@ def port_configuration_persistence(self) -> None:
Verify:
The configuration persists after the port is restarted.
"""
- with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell(disable_device_start=True) as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_port_mtu(port_id=port_id, mtu=STANDARD_MTU, verify=True)
self.restart_port_and_verify(port_id, testpmd, "MTU")
@@ -90,8 +90,8 @@ def flow_ctrl_port_configuration_persistence(self) -> None:
Verify:
The configuration persists after the port is restarted.
"""
- with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell(disable_device_start=True) as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
flow_ctrl = TestPmdPortFlowCtrl(rx=True)
testpmd.set_flow_control(port=port_id, flow_ctrl=flow_ctrl)
self.restart_port_and_verify(port_id, testpmd, "flow_ctrl")
diff --git a/dts/tests/TestSuite_promisc_support.py b/dts/tests/TestSuite_promisc_support.py
index a3ea2461f0..445f6e1d69 100644
--- a/dts/tests/TestSuite_promisc_support.py
+++ b/dts/tests/TestSuite_promisc_support.py
@@ -38,10 +38,8 @@ def test_promisc_packets(self) -> None:
"""
packet = [Ether(dst=self.ALTERNATIVE_MAC_ADDRESS) / IP() / Raw(load=b"\x00" * 64)]
- with TestPmdShell(
- self.sut_node,
- ) as testpmd:
- for port_id in range(len(self.sut_node.ports)):
+ with TestPmdShell() as testpmd:
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_promisc(port=port_id, enable=True, verify=True)
testpmd.start()
@@ -51,7 +49,7 @@ def test_promisc_packets(self) -> None:
testpmd.stop()
- for port_id in range(len(self.sut_node.ports)):
+ for port_id, _ in enumerate(self.topology.sut_ports):
testpmd.set_promisc(port=port_id, enable=False, verify=True)
testpmd.start()
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 7ed266dac0..8a5799c684 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -46,6 +46,7 @@ def set_up_suite(self) -> None:
Setup:
Set the build directory path and a list of NICs in the SUT node.
"""
+ self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
self.nics_in_node = self.sut_node.config.ports
@@ -104,7 +105,7 @@ def test_devices_listed_in_testpmd(self) -> None:
Test:
List all devices found in testpmd and verify the configured devices are among them.
"""
- with TestPmdShell(self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
dev_list = [str(x) for x in testpmd.get_devices()]
for nic in self.nics_in_node:
self.verify(
diff --git a/dts/tests/TestSuite_softnic.py b/dts/tests/TestSuite_softnic.py
index 07480db392..370fd6b419 100644
--- a/dts/tests/TestSuite_softnic.py
+++ b/dts/tests/TestSuite_softnic.py
@@ -32,6 +32,7 @@ def set_up_suite(self) -> None:
Setup:
Generate the random packets that will be sent and create the softnic config files.
"""
+ self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
self.cli_file = self.prepare_softnic_files()
@@ -105,9 +106,8 @@ def softnic(self) -> None:
"""
with TestPmdShell(
- self.sut_node,
vdevs=[VirtualDevice(f"net_softnic0,firmware={self.cli_file},cpu_id=1,conn_port=8086")],
- eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
+ eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
port_topology=None,
) as shell:
shell.start()
diff --git a/dts/tests/TestSuite_uni_pkt.py b/dts/tests/TestSuite_uni_pkt.py
index 0898187675..656a69b0f1 100644
--- a/dts/tests/TestSuite_uni_pkt.py
+++ b/dts/tests/TestSuite_uni_pkt.py
@@ -85,7 +85,7 @@ def test_l2_packet_detect(self) -> None:
mac_id = "00:00:00:00:00:01"
packet_list = [Ether(dst=mac_id, type=0x88F7) / Raw(), Ether(dst=mac_id) / ARP() / Raw()]
flag_list = [RtePTypes.L2_ETHER_TIMESYNC, RtePTypes.L2_ETHER_ARP]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -118,7 +118,7 @@ def test_l3_l4_packet_detect(self) -> None:
RtePTypes.L4_ICMP,
RtePTypes.L4_FRAG | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L2_ETHER,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -147,7 +147,7 @@ def test_ipv6_l4_packet_detect(self) -> None:
RtePTypes.L4_TCP,
RtePTypes.L3_IPV6_EXT_UNKNOWN,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -182,7 +182,7 @@ def test_l3_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_IP | RtePTypes.INNER_L4_ICMP,
RtePTypes.TUNNEL_IP | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -215,7 +215,7 @@ def test_gre_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_SCTP,
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -250,7 +250,7 @@ def test_nsh_packet_detect(self) -> None:
RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L4_SCTP,
RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV6_EXT_UNKNOWN | RtePTypes.L4_NONFRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
@func_test
@@ -295,6 +295,6 @@ def test_vxlan_tunnel_packet_detect(self) -> None:
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
]
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.rx_vxlan(4789, 0, True)
self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index c67520baef..d2a9e614d4 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -124,7 +124,7 @@ def test_vlan_receipt_no_stripping(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
self.send_vlan_packet_and_verify(True, strip=False, vlan_id=1)
@@ -137,7 +137,7 @@ def test_vlan_receipt_stripping(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.set_vlan_strip(port=0, enable=True)
testpmd.start()
@@ -150,7 +150,7 @@ def test_vlan_no_receipt(self) -> None:
Test:
Create an interactive testpmd shell and verify a VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
@@ -162,7 +162,7 @@ def test_vlan_header_insertion(self) -> None:
Test:
Create an interactive testpmd shell and verify a non-VLAN packet.
"""
- with TestPmdShell(node=self.sut_node) as testpmd:
+ with TestPmdShell() as testpmd:
testpmd.set_forward_mode(SimpleForwardingModes.mac)
testpmd.set_promisc(port=0, enable=False)
testpmd.stop_all_ports()
--
2.43.0
* [PATCH v2 6/7] dts: revamp runtime internals
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
` (4 preceding siblings ...)
2025-02-12 16:45 ` [PATCH v2 5/7] dts: add global runtime context Luca Vizzarro
@ 2025-02-12 16:45 ` Luca Vizzarro
2025-02-12 16:46 ` [PATCH v2 7/7] dts: remove node distinction Luca Vizzarro
2025-02-12 16:47 ` [PATCH v2 0/7] dts: revamp framework Luca Vizzarro
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:45 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
Enforce separation of concerns by isolating test runs in a new
TestRun class and respective module. This also means that any actions
taken on the nodes must be handled exclusively by the test run, one
example being the creation and destruction of the traffic generator.
TestSuiteWithCases is now redundant, as the configuration is able to
provide all the details about the test run's own test suites. Any
other runtime state which concerns the test runs now belongs to their
class.
Finally, as the test run execution is isolated, all the runtime
internals are held in the new class. These internals have been
completely reworked into a finite state machine (FSM) to simplify the
use and understanding of the different execution states, while making
the process of handling errors less repetitive and easier.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
doc/api/dts/framework.test_run.rst | 8 +
doc/api/dts/index.rst | 1 +
doc/guides/conf.py | 3 +-
dts/framework/exception.py | 33 +-
dts/framework/logger.py | 26 +-
dts/framework/runner.py | 556 +------------------
dts/framework/test_result.py | 143 +----
dts/framework/test_run.py | 640 ++++++++++++++++++++++
dts/framework/test_suite.py | 2 +-
dts/framework/testbed_model/capability.py | 24 +-
dts/framework/testbed_model/tg_node.py | 6 +-
11 files changed, 729 insertions(+), 713 deletions(-)
create mode 100644 doc/api/dts/framework.test_run.rst
create mode 100644 dts/framework/test_run.py
diff --git a/doc/api/dts/framework.test_run.rst b/doc/api/dts/framework.test_run.rst
new file mode 100644
index 0000000000..8147320ed9
--- /dev/null
+++ b/doc/api/dts/framework.test_run.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+test_run - Test Run Execution
+===========================================================
+
+.. automodule:: framework.test_run
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
index 90092014d2..33b05953d2 100644
--- a/doc/api/dts/index.rst
+++ b/doc/api/dts/index.rst
@@ -26,6 +26,7 @@ Modules
:maxdepth: 1
framework.runner
+ framework.test_run
framework.test_suite
framework.test_result
framework.settings
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 50cbc7f612..b71f0f47e2 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -59,7 +59,8 @@
# DTS API docs additional configuration
if environ.get('DTS_DOC_BUILD'):
- extensions = ['sphinx.ext.napoleon', 'sphinx.ext.autodoc']
+ extensions = ['sphinx.ext.napoleon', 'sphinx.ext.autodoc', 'sphinx.ext.graphviz']
+ graphviz_output_format = "svg"
# Pydantic models require autodoc_pydantic for the right formatting. Add if installed.
try:
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index d967ede09b..47e3fac05c 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -205,28 +205,27 @@ class TestCaseVerifyError(DTSError):
severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
-class BlockingTestSuiteError(DTSError):
- """A failure in a blocking test suite."""
+class InternalError(DTSError):
+ """An internal error or bug has occurred in DTS."""
#:
- severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
- _suite_name: str
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.INTERNAL_ERR
- def __init__(self, suite_name: str) -> None:
- """Define the meaning of the first argument.
- Args:
- suite_name: The blocking test suite.
- """
- self._suite_name = suite_name
+class SkippedTestException(DTSError):
+ """An exception raised when a test suite or case has been skipped."""
- def __str__(self) -> str:
- """Add some context to the string representation."""
- return f"Blocking suite {self._suite_name} failed."
+ #:
+ severity: ClassVar[ErrorSeverity] = ErrorSeverity.NO_ERR
+ def __init__(self, reason: str) -> None:
+ """Constructor.
-class InternalError(DTSError):
- """An internal error or bug has occurred in DTS."""
+ Args:
+ reason: The reason for the test being skipped.
+ """
+ self._reason = reason
- #:
- severity: ClassVar[ErrorSeverity] = ErrorSeverity.INTERNAL_ERR
+ def __str__(self) -> str:
+ """Stringify the exception."""
+ return self._reason
diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index d2b8e37da4..f43b442bc9 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -13,37 +13,15 @@
"""
import logging
-from enum import auto
from logging import FileHandler, StreamHandler
from pathlib import Path
from typing import ClassVar
-from .utils import StrEnum
-
date_fmt = "%Y/%m/%d %H:%M:%S"
stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s"
dts_root_logger_name = "dts"
-class DtsStage(StrEnum):
- """The DTS execution stage."""
-
- #:
- pre_run = auto()
- #:
- test_run_setup = auto()
- #:
- test_suite_setup = auto()
- #:
- test_suite = auto()
- #:
- test_suite_teardown = auto()
- #:
- test_run_teardown = auto()
- #:
- post_run = auto()
-
-
class DTSLogger(logging.Logger):
"""The DTS logger class.
@@ -55,7 +33,7 @@ class DTSLogger(logging.Logger):
a new stage switch occurs. This is useful mainly for logging per test suite.
"""
- _stage: ClassVar[DtsStage] = DtsStage.pre_run
+ _stage: ClassVar[str] = "pre_run"
_extra_file_handlers: list[FileHandler] = []
def __init__(self, *args, **kwargs):
@@ -110,7 +88,7 @@ def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
self._add_file_handlers(Path(output_dir, self.name))
- def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
+ def set_stage(self, stage: str, log_file_path: Path | None = None) -> None:
"""Set the DTS execution stage and optionally log to files.
Set the DTS execution stage of the DTSLog class and optionally add
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 60a885d8e6..90aeb63cfb 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -6,27 +6,15 @@
"""Test suite runner module.
-The module is responsible for running DTS in a series of stages:
-
- #. Test run stage,
- #. DPDK build stage,
- #. Test suite stage,
- #. Test case stage.
-
-The test run stage sets up the environment before running test suites.
-The test suite stage sets up steps common to all test cases
-and the test case stage runs test cases individually.
+The module is responsible for preparing DTS and running the test runs.
"""
import os
-import random
import sys
-from pathlib import Path
-from types import MethodType
-from typing import Iterable
from framework.config.common import ValidationContext
-from framework.testbed_model.capability import Capability, get_supported_capabilities
+from framework.test_run import TestRun
+from framework.testbed_model.node import Node
from framework.testbed_model.sut_node import SutNode
from framework.testbed_model.tg_node import TGNode
@@ -38,51 +26,20 @@
SutNodeConfiguration,
TGNodeConfiguration,
)
-from .config.test_run import (
- TestRunConfiguration,
- TestSuiteConfig,
-)
-from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError
-from .logger import DTSLogger, DtsStage, get_dts_logger
+from .logger import DTSLogger, get_dts_logger
from .settings import SETTINGS
from .test_result import (
DTSResult,
Result,
- TestCaseResult,
- TestRunResult,
- TestSuiteResult,
- TestSuiteWithCases,
)
-from .test_suite import TestCase, TestSuite
-from .testbed_model.topology import PortLink, Topology
class DTSRunner:
- r"""Test suite runner class.
-
- The class is responsible for running tests on testbeds defined in the test run configuration.
- Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult`
- or one of its subclasses. The test case results are also recorded.
-
- If an error occurs, the current stage is aborted, the error is recorded, everything in
- the inner stages is marked as blocked and the run continues in the next iteration
- of the same stage. The return code is the highest `severity` of all
- :class:`~.framework.exception.DTSError`\s.
-
- Example:
- An error occurs in a test suite setup. The current test suite is aborted,
- all its test cases are marked as blocked and the run continues
- with the next test suite. If the errored test suite was the last one in the
- given test run, the next test run begins.
- """
+ """Test suite runner class."""
_configuration: Configuration
_logger: DTSLogger
_result: DTSResult
- _test_suite_class_prefix: str
- _test_suite_module_prefix: str
- _func_test_case_regex: str
- _perf_test_case_regex: str
def __init__(self):
"""Initialize the instance with configuration, logger, result and string constants."""
@@ -92,94 +49,45 @@ def __init__(self):
os.makedirs(SETTINGS.output_dir)
self._logger.add_dts_root_logger_handlers(SETTINGS.verbose, SETTINGS.output_dir)
self._result = DTSResult(SETTINGS.output_dir, self._logger)
- self._test_suite_class_prefix = "Test"
- self._test_suite_module_prefix = "tests.TestSuite_"
- self._func_test_case_regex = r"test_(?!perf_)"
- self._perf_test_case_regex = r"test_perf_"
def run(self) -> None:
- """Run all test runs from the test run configuration.
-
- Before running test suites, test runs are first set up.
- The test runs defined in the test run configuration are iterated over.
- The test runs define which tests to run and where to run them.
-
- The test suites are set up for each test run and each discovered
- test case within the test suite is set up, executed and torn down. After all test cases
- have been executed, the test suite is torn down and the next test suite will be run. Once
- all test suites have been run, the next test run will be tested.
-
- In order to properly mark test suites and test cases as blocked in case of a failure,
- we need to have discovered which test suites and test cases to run before any failures
- happen. The discovery happens at the earliest point at the start of each test run.
-
- All the nested steps look like this:
-
- #. Test run setup
-
- #. Test suite setup
-
- #. Test case setup
- #. Test case logic
- #. Test case teardown
+ """Run DTS.
- #. Test suite teardown
-
- #. Test run teardown
-
- The test cases are filtered according to the specification in the test run configuration and
- the :option:`--test-suite` command line argument or
- the :envvar:`DTS_TESTCASES` environment variable.
+ Prepare all the nodes ahead of the test run execution,
+ then execute the test runs as configured.
"""
- sut_nodes: dict[str, SutNode] = {}
- tg_nodes: dict[str, TGNode] = {}
+ nodes: list[Node] = []
try:
# check the python version of the server that runs dts
self._check_dts_python_version()
self._result.update_setup(Result.PASS)
+ for node_config in self._configuration.nodes:
+ node: Node
+
+ match node_config:
+ case SutNodeConfiguration():
+ node = SutNode(node_config)
+ case TGNodeConfiguration():
+ node = TGNode(node_config)
+
+ nodes.append(node)
+
# for all test run sections
- for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
- test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
- self._logger.set_stage(DtsStage.test_run_setup)
- self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
- self._init_random_seed(test_run_config)
+ for test_run_config in self._configuration.test_runs:
test_run_result = self._result.add_test_run(test_run_config)
- # we don't want to modify the original config, so create a copy
- test_run_test_suites = test_run_config.test_suites
- if not test_run_config.skip_smoke_tests:
- test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
- try:
- test_suites_with_cases = self._get_test_suites_with_cases(
- test_run_test_suites, test_run_config.func, test_run_config.perf
- )
- test_run_result.test_suites_with_cases = test_suites_with_cases
- except Exception as e:
- self._logger.exception(
- f"Invalid test suite configuration found: " f"{test_run_test_suites}."
- )
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- self._connect_nodes_and_run_test_run(
- sut_nodes,
- tg_nodes,
- sut_node_config,
- tg_node_config,
- test_run_config,
- test_run_result,
- test_suites_with_cases,
- )
+ test_run = TestRun(test_run_config, nodes, test_run_result)
+ test_run.spin()
except Exception as e:
self._logger.exception("An unexpected error has occurred.")
self._result.add_error(e)
- raise
+ # raise
finally:
try:
- self._logger.set_stage(DtsStage.post_run)
- for node in (sut_nodes | tg_nodes).values():
+ self._logger.set_stage("post_run")
+ for node in nodes:
node.close()
self._result.update_teardown(Result.PASS)
except Exception as e:
@@ -205,412 +113,6 @@ def _check_dts_python_version(self) -> None:
)
self._logger.warning("Please use Python >= 3.10 instead.")
- def _get_test_suites_with_cases(
- self,
- test_suite_configs: list[TestSuiteConfig],
- func: bool,
- perf: bool,
- ) -> list[TestSuiteWithCases]:
- """Get test suites with selected cases.
-
- The test suites with test cases defined in the user configuration are selected
- and the corresponding functions and classes are gathered.
-
- Args:
- test_suite_configs: Test suite configurations.
- func: Whether to include functional test cases in the final list.
- perf: Whether to include performance test cases in the final list.
-
- Returns:
- The test suites, each with test cases.
- """
- test_suites_with_cases = []
-
- for test_suite_config in test_suite_configs:
- test_suite_class = test_suite_config.test_suite_spec.class_obj
- test_cases: list[type[TestCase]] = []
- func_test_cases, perf_test_cases = test_suite_class.filter_test_cases(
- test_suite_config.test_cases_names
- )
- if func:
- test_cases.extend(func_test_cases)
- if perf:
- test_cases.extend(perf_test_cases)
-
- test_suites_with_cases.append(
- TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
- )
- return test_suites_with_cases
-
- def _connect_nodes_and_run_test_run(
- self,
- sut_nodes: dict[str, SutNode],
- tg_nodes: dict[str, TGNode],
- sut_node_config: SutNodeConfiguration,
- tg_node_config: TGNodeConfiguration,
- test_run_config: TestRunConfiguration,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Connect nodes, then continue to run the given test run.
-
- Connect the :class:`SutNode` and the :class:`TGNode` of this `test_run_config`.
- If either has already been connected, it's going to be in either `sut_nodes` or `tg_nodes`,
- respectively.
- If not, connect and add the node to the respective `sut_nodes` or `tg_nodes` :class:`dict`.
-
- Args:
- sut_nodes: A dictionary storing connected/to be connected SUT nodes.
- tg_nodes: A dictionary storing connected/to be connected TG nodes.
- sut_node_config: The test run's SUT node configuration.
- tg_node_config: The test run's TG node configuration.
- test_run_config: A test run configuration.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
- """
- sut_node = sut_nodes.get(sut_node_config.name)
- tg_node = tg_nodes.get(tg_node_config.name)
-
- try:
- if not sut_node:
- sut_node = SutNode(sut_node_config)
- sut_nodes[sut_node.name] = sut_node
- if not tg_node:
- tg_node = TGNode(tg_node_config)
- tg_nodes[tg_node.name] = tg_node
- except Exception as e:
- failed_node = test_run_config.system_under_test_node
- if sut_node:
- failed_node = test_run_config.traffic_generator_node
- self._logger.exception(f"The Creation of node {failed_node} failed.")
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- self._run_test_run(
- sut_node,
- tg_node,
- test_run_config,
- test_run_result,
- test_suites_with_cases,
- )
-
- def _run_test_run(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- test_run_config: TestRunConfiguration,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Run the given test run.
-
- This involves running the test run setup as well as running all test suites
- in the given test run. After that, the test run teardown is run.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- test_run_config: A test run configuration.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
-
- Raises:
- ConfigurationError: If the DPDK sources or build is not set up from config or settings.
- """
- self._logger.info(f"Running test run with SUT '{test_run_config.system_under_test_node}'.")
- test_run_result.ports = sut_node.ports
- test_run_result.sut_info = sut_node.node_info
- try:
- dpdk_build_config = test_run_config.dpdk_config
- sut_node.set_up_test_run(test_run_config, dpdk_build_config)
- test_run_result.dpdk_build_info = sut_node.get_dpdk_build_info()
- tg_node.set_up_test_run(test_run_config, dpdk_build_config)
- test_run_result.update_setup(Result.PASS)
- except Exception as e:
- self._logger.exception("Test run setup failed.")
- test_run_result.update_setup(Result.FAIL, e)
-
- else:
- topology = Topology(
- PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
- for link in test_run_config.port_topology
- )
- self._run_test_suites(
- sut_node, tg_node, topology, test_run_result, test_suites_with_cases
- )
-
- finally:
- try:
- self._logger.set_stage(DtsStage.test_run_teardown)
- sut_node.tear_down_test_run()
- tg_node.tear_down_test_run()
- test_run_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception("Test run teardown failed.")
- test_run_result.update_teardown(Result.FAIL, e)
-
- def _get_supported_capabilities(
- self,
- sut_node: SutNode,
- topology_config: Topology,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> set[Capability]:
- capabilities_to_check = set()
- for test_suite_with_cases in test_suites_with_cases:
- capabilities_to_check.update(test_suite_with_cases.required_capabilities)
-
- self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
-
- return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
-
- def _run_test_suites(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- test_run_result: TestRunResult,
- test_suites_with_cases: Iterable[TestSuiteWithCases],
- ) -> None:
- """Run `test_suites_with_cases` with the current test run.
-
- The method assumes the DPDK we're testing has already been built on the SUT node.
-
- Before running any suites, the method determines whether they should be skipped
- by inspecting any required capabilities the test suite needs and comparing those
- to capabilities supported by the tested environment. If all capabilities are supported,
- the suite is run. If all test cases in a test suite would be skipped, the whole test suite
- is skipped (the setup and teardown is not run).
-
- If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites
- in the current test run won't be executed.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- topology: The test run's port topology.
- test_run_result: The test run's result.
- test_suites_with_cases: The test suites with test cases to run.
- """
- end_dpdk_build = False
- supported_capabilities = self._get_supported_capabilities(
- sut_node, topology, test_suites_with_cases
- )
- for test_suite_with_cases in test_suites_with_cases:
- test_suite_with_cases.mark_skip_unsupported(supported_capabilities)
- test_suite_result = test_run_result.add_test_suite(test_suite_with_cases)
- try:
- if not test_suite_with_cases.skip:
- self._run_test_suite(
- sut_node,
- tg_node,
- topology,
- test_suite_result,
- test_suite_with_cases,
- )
- else:
- self._logger.info(
- f"Test suite execution SKIPPED: "
- f"'{test_suite_with_cases.test_suite_class.__name__}'. Reason: "
- f"{test_suite_with_cases.test_suite_class.skip_reason}"
- )
- test_suite_result.update_setup(Result.SKIP)
- except BlockingTestSuiteError as e:
- self._logger.exception(
- f"An error occurred within {test_suite_with_cases.test_suite_class.__name__}. "
- "Skipping the rest of the test suites in this test run."
- )
- self._result.add_error(e)
- end_dpdk_build = True
- # if a blocking test failed and we need to bail out of suite executions
- if end_dpdk_build:
- break
-
- def _run_test_suite(
- self,
- sut_node: SutNode,
- tg_node: TGNode,
- topology: Topology,
- test_suite_result: TestSuiteResult,
- test_suite_with_cases: TestSuiteWithCases,
- ) -> None:
- """Set up, execute and tear down `test_suite_with_cases`.
-
- The method assumes the DPDK we're testing has already been built on the SUT node.
-
- Test suite execution consists of running the discovered test cases.
- A test case run consists of setup, execution and teardown of said test case.
-
- Record the setup and the teardown and handle failures.
-
- Args:
- sut_node: The test run's SUT node.
- tg_node: The test run's TG node.
- topology: The port topology of the nodes.
- test_suite_result: The test suite level result object associated
- with the current test suite.
- test_suite_with_cases: The test suite with test cases to run.
-
- Raises:
- BlockingTestSuiteError: If a blocking test suite fails.
- """
- test_suite_name = test_suite_with_cases.test_suite_class.__name__
- self._logger.set_stage(
- DtsStage.test_suite_setup, Path(SETTINGS.output_dir, test_suite_name)
- )
- test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node, topology)
- try:
- self._logger.info(f"Starting test suite setup: {test_suite_name}")
- test_suite.set_up_suite()
- test_suite_result.update_setup(Result.PASS)
- self._logger.info(f"Test suite setup successful: {test_suite_name}")
- except Exception as e:
- self._logger.exception(f"Test suite setup ERROR: {test_suite_name}")
- test_suite_result.update_setup(Result.ERROR, e)
-
- else:
- self._execute_test_suite(
- test_suite,
- test_suite_with_cases.test_cases,
- test_suite_result,
- )
- finally:
- try:
- self._logger.set_stage(DtsStage.test_suite_teardown)
- test_suite.tear_down_suite()
- sut_node.kill_cleanup_dpdk_apps()
- test_suite_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}")
- self._logger.warning(
- f"Test suite '{test_suite_name}' teardown failed, "
- "the next test suite may be affected."
- )
- test_suite_result.update_setup(Result.ERROR, e)
- if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking:
- raise BlockingTestSuiteError(test_suite_name)
-
- def _execute_test_suite(
- self,
- test_suite: TestSuite,
- test_cases: Iterable[type[TestCase]],
- test_suite_result: TestSuiteResult,
- ) -> None:
- """Execute all `test_cases` in `test_suite`.
-
- If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment
- variable is set, in case of a test case failure, the test case will be executed again
- until it passes or it fails that many times in addition of the first failure.
-
- Args:
- test_suite: The test suite object.
- test_cases: The list of test case functions.
- test_suite_result: The test suite level result object associated
- with the current test suite.
- """
- self._logger.set_stage(DtsStage.test_suite)
- for test_case in test_cases:
- test_case_name = test_case.__name__
- test_case_result = test_suite_result.add_test_case(test_case_name)
- all_attempts = SETTINGS.re_run + 1
- attempt_nr = 1
- if not test_case.skip:
- self._run_test_case(test_suite, test_case, test_case_result)
- while not test_case_result and attempt_nr < all_attempts:
- attempt_nr += 1
- self._logger.info(
- f"Re-running FAILED test case '{test_case_name}'. "
- f"Attempt number {attempt_nr} out of {all_attempts}."
- )
- self._run_test_case(test_suite, test_case, test_case_result)
- else:
- self._logger.info(
- f"Test case execution SKIPPED: {test_case_name}. Reason: "
- f"{test_case.skip_reason}"
- )
- test_case_result.update_setup(Result.SKIP)
-
- def _run_test_case(
- self,
- test_suite: TestSuite,
- test_case: type[TestCase],
- test_case_result: TestCaseResult,
- ) -> None:
- """Setup, execute and teardown `test_case_method` from `test_suite`.
-
- Record the result of the setup and the teardown and handle failures.
-
- Args:
- test_suite: The test suite object.
- test_case: The test case function.
- test_case_result: The test case level result object associated
- with the current test case.
- """
- test_case_name = test_case.__name__
-
- try:
- # run set_up function for each case
- test_suite.set_up_test_case()
- test_case_result.update_setup(Result.PASS)
- except SSHTimeoutError as e:
- self._logger.exception(f"Test case setup FAILED: {test_case_name}")
- test_case_result.update_setup(Result.FAIL, e)
- except Exception as e:
- self._logger.exception(f"Test case setup ERROR: {test_case_name}")
- test_case_result.update_setup(Result.ERROR, e)
-
- else:
- # run test case if setup was successful
- self._execute_test_case(test_suite, test_case, test_case_result)
-
- finally:
- try:
- test_suite.tear_down_test_case()
- test_case_result.update_teardown(Result.PASS)
- except Exception as e:
- self._logger.exception(f"Test case teardown ERROR: {test_case_name}")
- self._logger.warning(
- f"Test case '{test_case_name}' teardown failed, "
- f"the next test case may be affected."
- )
- test_case_result.update_teardown(Result.ERROR, e)
- test_case_result.update(Result.ERROR)
-
- def _execute_test_case(
- self,
- test_suite: TestSuite,
- test_case: type[TestCase],
- test_case_result: TestCaseResult,
- ) -> None:
- """Execute `test_case_method` from `test_suite`, record the result and handle failures.
-
- Args:
- test_suite: The test suite object.
- test_case: The test case function.
- test_case_result: The test case level result object associated
- with the current test case.
-
- Raises:
- KeyboardInterrupt: If DTS has been interrupted by the user.
- """
- test_case_name = test_case.__name__
- try:
- self._logger.info(f"Starting test case execution: {test_case_name}")
- # Explicit method binding is required, otherwise mypy complains
- MethodType(test_case, test_suite)()
- test_case_result.update(Result.PASS)
- self._logger.info(f"Test case execution PASSED: {test_case_name}")
-
- except TestCaseVerifyError as e:
- self._logger.exception(f"Test case execution FAILED: {test_case_name}")
- test_case_result.update(Result.FAIL, e)
- except Exception as e:
- self._logger.exception(f"Test case execution ERROR: {test_case_name}")
- test_case_result.update(Result.ERROR, e)
- except KeyboardInterrupt:
- self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
- test_case_result.update(Result.SKIP)
- raise KeyboardInterrupt("Stop DTS")
-
def _exit_dts(self) -> None:
"""Process all errors and exit with the proper exit code."""
self._result.process()
@@ -619,9 +121,3 @@ def _exit_dts(self) -> None:
self._logger.info("DTS execution has ended.")
sys.exit(self._result.get_return_code())
-
- def _init_random_seed(self, conf: TestRunConfiguration) -> None:
- """Initialize the random seed to use for the test run."""
- seed = conf.random_seed or random.randrange(0xFFFF_FFFF)
- self._logger.info(f"Initializing test run with random seed {seed}.")
- random.seed(seed)
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 1acb526b64..a59bac71bb 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -25,98 +25,18 @@
import json
from collections.abc import MutableSequence
-from dataclasses import asdict, dataclass, field
from enum import Enum, auto
from pathlib import Path
-from typing import Any, Callable, TypedDict, cast
+from typing import Any, Callable, TypedDict
-from framework.config.node import PortConfig
-from framework.testbed_model.capability import Capability
-
-from .config.test_run import TestRunConfiguration, TestSuiteConfig
+from .config.test_run import TestRunConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLogger
-from .test_suite import TestCase, TestSuite
from .testbed_model.os_session import OSSessionInfo
from .testbed_model.port import Port
from .testbed_model.sut_node import DPDKBuildInfo
-@dataclass(slots=True, frozen=True)
-class TestSuiteWithCases:
- """A test suite class with test case methods.
-
- An auxiliary class holding a test case class with test case methods. The intended use of this
- class is to hold a subset of test cases (which could be all test cases) because we don't have
- all the data to instantiate the class at the point of inspection. The knowledge of this subset
- is needed in case an error occurs before the class is instantiated and we need to record
- which test cases were blocked by the error.
-
- Attributes:
- test_suite_class: The test suite class.
- test_cases: The test case methods.
- required_capabilities: The combined required capabilities of both the test suite
- and the subset of test cases.
- """
-
- test_suite_class: type[TestSuite]
- test_cases: list[type[TestCase]]
- required_capabilities: set[Capability] = field(default_factory=set, init=False)
-
- def __post_init__(self):
- """Gather the required capabilities of the test suite and all test cases."""
- for test_object in [self.test_suite_class] + self.test_cases:
- self.required_capabilities.update(test_object.required_capabilities)
-
- def create_config(self) -> TestSuiteConfig:
- """Generate a :class:`TestSuiteConfig` from the stored test suite with test cases.
-
- Returns:
- The :class:`TestSuiteConfig` representation.
- """
- return TestSuiteConfig(
- test_suite=self.test_suite_class.__name__,
- test_cases=[test_case.__name__ for test_case in self.test_cases],
- )
-
- def mark_skip_unsupported(self, supported_capabilities: set[Capability]) -> None:
- """Mark the test suite and test cases to be skipped.
-
- The mark is applied if object to be skipped requires any capabilities and at least one of
- them is not among `supported_capabilities`.
-
- Args:
- supported_capabilities: The supported capabilities.
- """
- for test_object in [self.test_suite_class, *self.test_cases]:
- capabilities_not_supported = test_object.required_capabilities - supported_capabilities
- if capabilities_not_supported:
- test_object.skip = True
- capability_str = (
- "capability" if len(capabilities_not_supported) == 1 else "capabilities"
- )
- test_object.skip_reason = (
- f"Required {capability_str} '{capabilities_not_supported}' not found."
- )
- if not self.test_suite_class.skip:
- if all(test_case.skip for test_case in self.test_cases):
- self.test_suite_class.skip = True
-
- self.test_suite_class.skip_reason = (
- "All test cases are marked to be skipped with reasons: "
- f"{' '.join(test_case.skip_reason for test_case in self.test_cases)}"
- )
-
- @property
- def skip(self) -> bool:
- """Skip the test suite if all test cases or the suite itself are to be skipped.
-
- Returns:
- :data:`True` if the test suite should be skipped, :data:`False` otherwise.
- """
- return all(test_case.skip for test_case in self.test_cases) or self.test_suite_class.skip
-
-
class Result(Enum):
"""The possible states that a setup, a teardown or a test case may end up in."""
@@ -463,7 +383,6 @@ class TestRunResult(BaseResult):
"""
_config: TestRunConfiguration
- _test_suites_with_cases: list[TestSuiteWithCases]
_ports: list[Port]
_sut_info: OSSessionInfo | None
_dpdk_build_info: DPDKBuildInfo | None
@@ -476,49 +395,23 @@ def __init__(self, test_run_config: TestRunConfiguration):
"""
super().__init__()
self._config = test_run_config
- self._test_suites_with_cases = []
self._ports = []
self._sut_info = None
self._dpdk_build_info = None
- def add_test_suite(
- self,
- test_suite_with_cases: TestSuiteWithCases,
- ) -> "TestSuiteResult":
+ def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult":
"""Add and return the child result (test suite).
Args:
- test_suite_with_cases: The test suite with test cases.
+ test_suite_name: The test suite name.
Returns:
The test suite's result.
"""
- result = TestSuiteResult(test_suite_with_cases)
+ result = TestSuiteResult(test_suite_name)
self.child_results.append(result)
return result
- @property
- def test_suites_with_cases(self) -> list[TestSuiteWithCases]:
- """The test suites with test cases to be executed in this test run.
-
- The test suites can only be assigned once.
-
- Returns:
- The list of test suites with test cases. If an error occurs between
- the initialization of :class:`TestRunResult` and assigning test cases to the instance,
- return an empty list, representing that we don't know what to execute.
- """
- return self._test_suites_with_cases
-
- @test_suites_with_cases.setter
- def test_suites_with_cases(self, test_suites_with_cases: list[TestSuiteWithCases]) -> None:
- if self._test_suites_with_cases:
- raise ValueError(
- "Attempted to assign test suites to a test run result "
- "which already has test suites."
- )
- self._test_suites_with_cases = test_suites_with_cases
-
@property
def ports(self) -> list[Port]:
"""Get the list of ports associated with this test run."""
@@ -602,24 +495,14 @@ def to_dict(self) -> TestRunResultDict:
compiler_version = self.dpdk_build_info.compiler_version
dpdk_version = self.dpdk_build_info.dpdk_version
- ports = [asdict(port) for port in self.ports]
- for port in ports:
- port["config"] = cast(PortConfig, port["config"]).model_dump()
-
return {
"compiler_version": compiler_version,
"dpdk_version": dpdk_version,
- "ports": ports,
+ "ports": [port.to_dict() for port in self.ports],
"test_suites": [child.to_dict() for child in self.child_results],
"summary": results | self.generate_pass_rate_dict(results),
}
- def _mark_results(self, result) -> None:
- """Mark the test suite results as `result`."""
- for test_suite_with_cases in self._test_suites_with_cases:
- child_result = self.add_test_suite(test_suite_with_cases)
- child_result.update_setup(result)
-
class TestSuiteResult(BaseResult):
"""The test suite specific result.
@@ -631,18 +514,16 @@ class TestSuiteResult(BaseResult):
"""
test_suite_name: str
- _test_suite_with_cases: TestSuiteWithCases
_child_configs: list[str]
- def __init__(self, test_suite_with_cases: TestSuiteWithCases):
+ def __init__(self, test_suite_name: str):
"""Extend the constructor with test suite's config.
Args:
- test_suite_with_cases: The test suite with test cases.
+ test_suite_name: The test suite name.
"""
super().__init__()
- self.test_suite_name = test_suite_with_cases.test_suite_class.__name__
- self._test_suite_with_cases = test_suite_with_cases
+ self.test_suite_name = test_suite_name
def add_test_case(self, test_case_name: str) -> "TestCaseResult":
"""Add and return the child result (test case).
@@ -667,12 +548,6 @@ def to_dict(self) -> TestSuiteResultDict:
"test_cases": [child.to_dict() for child in self.child_results],
}
- def _mark_results(self, result) -> None:
- """Mark the test case results as `result`."""
- for test_case_method in self._test_suite_with_cases.test_cases:
- child_result = self.add_test_case(test_case_method.__name__)
- child_result.update_setup(result)
-
class TestCaseResult(BaseResult, FixtureResult):
r"""The test case specific result.
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
new file mode 100644
index 0000000000..811798f57f
--- /dev/null
+++ b/dts/framework/test_run.py
@@ -0,0 +1,640 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+r"""Test run module.
+
+The test run is implemented as a finite state machine which maintains a globally accessible
+:class:`~.context.Context` and each state implements :class:`State`.
+
+To spin up the test run state machine, call :meth:`~TestRun.spin`.
+
+The following graph represents all the states and steps of the state machine. Each node represents a
+state labelled with its initials, e.g. ``TRS`` represents :class:`TestRunSetup`. States
+represented by a double green circle are looping states. These states are only exited through:
+
+ * **next** which progresses to the next test suite/case.
+ * **end** which indicates that no more test suites/cases are available and
+ the loop is terminated.
+
+Red dashed links represent the path taken when an exception is
+raised in the origin state. If a state has no such link, execution progresses as usual.
+When :class:`~.exception.InternalError` is raised in any state, the state machine execution is
+immediately terminated.
+Orange dashed links represent exceptional conditions. Test suites and cases can be ``blocked`` or
+``skipped`` under the following conditions:
+
+ * If a *blocking* test suite fails, the ``blocked`` flag is raised.
+ * If the user sends a ``SIGINT`` signal, the ``blocked`` flag is raised.
+ * If a test suite and/or test case requires a capability unsupported by the test run, then it
+ is ``skipped`` and the state restarts from the beginning.
+
+Finally, test cases **retry** when they fail and DTS is configured to re-run.
+
+.. digraph:: test_run_fsm
+
+ bgcolor=transparent
+ nodesep=0.5
+ ranksep=0.3
+
+ node [fontname="sans-serif" fixedsize="true" width="0.7"]
+ edge [fontname="monospace" color="gray30" fontsize=12]
+ node [shape="circle"] "TRS" "TRT" "TSS" "TST" "TCS" "TCT"
+
+ node [shape="doublecircle" style="bold" color="darkgreen"] "TRE" "TSE" "TCE"
+
+ node [style="solid" shape="plaintext" fontname="monospace" fontsize=12 fixedsize="false"] "exit"
+
+ "TRS" -> "TRE"
+ "TRE":e -> "TRT":w [taillabel="end" labeldistance=1.5 labelangle=45]
+ "TRT" -> "exit" [style="solid" color="gray30"]
+
+ "TRE" -> "TSS" [headlabel="next" labeldistance=3 labelangle=320]
+ "TSS" -> "TSE"
+ "TSE" -> "TST" [label="end"]
+ "TST" -> "TRE"
+
+ "TSE" -> "TCS" [headlabel="next" labeldistance=3 labelangle=320]
+ "TCS" -> "TCE" -> "TCT" -> "TSE":se
+
+
+ edge [fontcolor="orange", color="orange" style="dashed"]
+ "TRE":sw -> "TSE":nw [taillabel="next\n(blocked)" labeldistance=13]
+ "TSE":ne -> "TRE" [taillabel="end\n(blocked)" labeldistance=7.5 labelangle=345]
+ "TRE":w -> "TRE":nw [headlabel="next\n(skipped)" labeldistance=4]
+ "TSE":e -> "TSE":e [taillabel="next\n(blocked)\n(skipped)" labelangle=325 labeldistance=7.5]
+ "TCE":e -> "TCE":e [taillabel="retry" labelangle=5 labeldistance=2.5]
+
+ edge [fontcolor="crimson" color="crimson"]
+ "TRS" -> "TRT"
+ "TSS":w -> "TST":n
+ "TCS" -> "TCT"
+
+ node [fontcolor="crimson" color="crimson"]
+ "InternalError" -> "exit":ew
+"""
+
+import random
+from collections import deque
+from collections.abc import Iterable
+from dataclasses import dataclass
+from functools import cached_property
+from pathlib import Path
+from types import MethodType
+from typing import ClassVar, Protocol, Union, cast
+
+from framework.config.test_run import TestRunConfiguration
+from framework.context import Context, init_ctx
+from framework.exception import (
+ InternalError,
+ SkippedTestException,
+ TestCaseVerifyError,
+)
+from framework.logger import DTSLogger, get_dts_logger
+from framework.settings import SETTINGS
+from framework.test_result import BaseResult, Result, TestCaseResult, TestRunResult, TestSuiteResult
+from framework.test_suite import TestCase, TestSuite
+from framework.testbed_model.capability import (
+ Capability,
+ get_supported_capabilities,
+ test_if_supported,
+)
+from framework.testbed_model.node import Node
+from framework.testbed_model.sut_node import SutNode
+from framework.testbed_model.tg_node import TGNode
+from framework.testbed_model.topology import PortLink, Topology
+
+TestScenario = tuple[type[TestSuite], deque[type[TestCase]]]
+
+
+class TestRun:
+ r"""A class representing a test run.
+
+ The class is responsible for running tests on testbeds defined in the test run configuration.
+ Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult`
+ or one of its subclasses. The test case results are also recorded.
+
+ If an error occurs, the current stage is aborted, the error is recorded, everything in
+ the inner stages is marked as blocked and the run continues in the next iteration
+ of the same stage. The return code is the highest `severity` of all
+ :class:`~.framework.exception.DTSError`\s.
+
+ Example:
+ An error occurs in a test suite setup. The current test suite is aborted,
+ all its test cases are marked as blocked and the run continues
+ with the next test suite. If the errored test suite was the last one in the
+ given test run, the next test run begins.
+
+ Attributes:
+ config: The test run configuration.
+ logger: A reference to the current logger.
+ state: The current state of the state machine.
+ ctx: The test run's runtime context.
+ result: The test run's execution result.
+ selected_tests: The test suites and cases selected in this test run.
+ blocked: :data:`True` if the test run execution has been blocked.
+ remaining_tests: The remaining tests in the execution of the test run.
+ remaining_test_cases: The remaining test cases in the execution of a test suite within the
+ test run's state machine.
+ supported_capabilities: All the capabilities supported by this test run.
+ """
+
+ config: TestRunConfiguration
+ logger: DTSLogger
+
+ state: "State"
+ ctx: Context
+ result: TestRunResult
+ selected_tests: list[TestScenario]
+
+ blocked: bool
+ remaining_tests: deque[TestScenario]
+ remaining_test_cases: deque[type[TestCase]]
+ supported_capabilities: set[Capability]
+
+ def __init__(self, config: TestRunConfiguration, nodes: Iterable[Node], result: TestRunResult):
+ """Test run constructor.
+
+ Args:
+ config: The test run's own configuration.
+ nodes: A reference to all the available nodes.
+ result: A reference to the test run result object.
+ """
+ self.config = config
+ self.logger = get_dts_logger()
+
+ sut_node = next(n for n in nodes if n.name == config.system_under_test_node)
+ sut_node = cast(SutNode, sut_node) # Config validation must render this valid.
+
+ tg_node = next(n for n in nodes if n.name == config.traffic_generator_node)
+ tg_node = cast(TGNode, tg_node) # Config validation must render this valid.
+
+ topology = Topology.from_port_links(
+ PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
+ for link in self.config.port_topology
+ )
+
+ self.ctx = Context(sut_node, tg_node, topology)
+ self.result = result
+ self.selected_tests = list(self.config.filter_tests())
+ self.blocked = False
+ self.remaining_tests = deque()
+ self.remaining_test_cases = deque()
+ self.supported_capabilities = set()
+
+ self.state = TestRunSetup(self, self.result)
+
+ @cached_property
+ def required_capabilities(self) -> set[Capability]:
+ """The capabilities required to run this test run in its totality."""
+ caps = set()
+
+ for test_suite, test_cases in self.selected_tests:
+ caps.update(test_suite.required_capabilities)
+ for test_case in test_cases:
+ caps.update(test_case.required_capabilities)
+
+ return caps
+
+ def spin(self):
+ """Spin the internal state machine that executes the test run."""
+ self.logger.info(f"Running test run with SUT '{self.ctx.sut_node.name}'.")
+
+ while self.state is not None:
+ try:
+ self.state.before()
+ next_state = self.state.next()
+ except (KeyboardInterrupt, Exception) as e:
+ next_state = self.state.handle_exception(e)
+ finally:
+ self.state.after()
+ if next_state is not None:
+ self.logger.debug(
+ f"FSM - moving from '{self.state.logger_name}' to '{next_state.logger_name}'"
+ )
+ self.state = next_state
+
+ def init_random_seed(self) -> None:
+ """Initialize the random seed to use for the test run."""
+ seed = self.config.random_seed or random.randrange(0xFFFF_FFFF)
+ self.logger.info(f"Initializing with random seed {seed}.")
+ random.seed(seed)
+
+
+class State(Protocol):
+ """Protocol indicating the state of the test run."""
+
+ logger_name: ClassVar[str]
+ test_run: TestRun
+ result: BaseResult
+
+ def before(self):
+ """Hook before the state is processed."""
+ self.logger.set_stage(self.logger_name, self.log_file_path)
+
+ def after(self):
+ """Hook after the state is processed."""
+ return
+
+ @property
+ def description(self) -> str:
+ """State description."""
+
+ @cached_property
+ def logger(self) -> DTSLogger:
+ """A reference to the root logger."""
+ return get_dts_logger()
+
+ def get_log_file_name(self) -> str | None:
+ """Name of the log file for this state."""
+ return None
+
+ @property
+ def log_file_path(self) -> Path | None:
+ """Path to the log file for this state."""
+ if file_name := self.get_log_file_name():
+ return Path(SETTINGS.output_dir, file_name)
+ return None
+
+ def next(self) -> Union["State", None]:
+ """Next state."""
+
+ def on_error(self, ex: Exception) -> Union["State", None]:
+ """Next state on error."""
+
+ def handle_exception(self, ex: Exception) -> Union["State", None]:
+ """Handles an exception raised by `next`."""
+ next_state = self.on_error(ex)
+
+ match ex:
+ case InternalError():
+ self.logger.error(
+ f"A CRITICAL ERROR has occurred during {self.description}. "
+ "Unrecoverable state reached, shutting down."
+ )
+ raise
+ case KeyboardInterrupt():
+ self.logger.info(
+ f"{self.description.capitalize()} INTERRUPTED by user! "
+ "Shutting down gracefully."
+ )
+ self.test_run.blocked = True
+ case _:
+ self.logger.error(f"An unexpected ERROR has occurred during {self.description}.")
+ self.logger.exception(ex)
+
+ return next_state
+
+
+@dataclass
+class TestRunSetup(State):
+ """Test run setup."""
+
+ logger_name: ClassVar[str] = "test_run_setup"
+ test_run: TestRun
+ result: TestRunResult
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return "test run setup"
+
+ def next(self) -> State | None:
+ """Process state and return the next one."""
+ test_run = self.test_run
+ init_ctx(test_run.ctx)
+
+ self.logger.info(f"Running on SUT node '{test_run.ctx.sut_node.name}'.")
+ test_run.init_random_seed()
+ test_run.remaining_tests = deque(test_run.selected_tests)
+
+ test_run.ctx.sut_node.set_up_test_run(test_run.config, test_run.ctx.topology.sut_ports)
+
+ self.result.ports = test_run.ctx.topology.sut_ports + test_run.ctx.topology.tg_ports
+ self.result.sut_info = test_run.ctx.sut_node.node_info
+ self.result.dpdk_build_info = test_run.ctx.sut_node.get_dpdk_build_info()
+
+ self.logger.debug(f"Found capabilities to check: {test_run.required_capabilities}")
+ test_run.supported_capabilities = get_supported_capabilities(
+ test_run.ctx.sut_node, test_run.ctx.topology, test_run.required_capabilities
+ )
+
+ self.result.update_setup(Result.PASS)
+ return TestRunExecution(test_run, self.result)
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update_setup(Result.ERROR, ex)
+ return TestRunTeardown(self.test_run, self.result)
+
+
+@dataclass
+class TestRunExecution(State):
+ """Test run execution."""
+
+ logger_name: ClassVar[str] = "test_run"
+ test_run: TestRun
+ result: TestRunResult
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return "test run execution"
+
+ def next(self) -> State | None:
+ """Next state."""
+ test_run = self.test_run
+ try:
+ test_suite_class, test_run.remaining_test_cases = test_run.remaining_tests.popleft()
+ test_suite = test_suite_class()
+ test_suite_result = test_run.result.add_test_suite(test_suite.name)
+
+ if test_run.blocked:
+ test_suite_result.update_setup(Result.BLOCK)
+ self.logger.warning(f"Test suite '{test_suite.name}' was BLOCKED.")
+ # Continue to allow the remaining suites to be marked as blocked; no setup needed.
+ return TestSuiteExecution(test_run, test_suite, test_suite_result)
+
+ try:
+ test_if_supported(test_suite_class, test_run.supported_capabilities)
+ except SkippedTestException as e:
+ self.logger.info(
+ f"Test suite '{test_suite.name}' execution SKIPPED with reason: {e}"
+ )
+ test_suite_result.update_setup(Result.SKIP)
+ return self
+
+ test_run.ctx.local.reset()
+ return TestSuiteSetup(test_run, test_suite, test_suite_result)
+ except IndexError:
+ # No more test suites. We are done here.
+ return TestRunTeardown(test_run, self.result)
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update_setup(Result.ERROR, ex)
+ return TestRunTeardown(self.test_run, self.result)
+
+
+@dataclass
+class TestRunTeardown(State):
+ """Test run teardown."""
+
+ logger_name: ClassVar[str] = "test_run_teardown"
+ test_run: TestRun
+ result: TestRunResult
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return "test run teardown"
+
+ def next(self) -> State | None:
+ """Next state."""
+ self.test_run.ctx.sut_node.tear_down_test_run(self.test_run.ctx.topology.sut_ports)
+ self.result.update_teardown(Result.PASS)
+ return None
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update_teardown(Result.ERROR, ex)
+ self.logger.warning(
+ "The environment may have not been cleaned up correctly. "
+ "The subsequent tests could be affected!"
+ )
+ return None
+
+
+@dataclass
+class TestSuiteState(State):
+ """A test suite state template."""
+
+ test_run: TestRun
+ test_suite: TestSuite
+ result: TestSuiteResult
+
+ def get_log_file_name(self) -> str | None:
+ """Get the log file name."""
+ return self.test_suite.name
+
+
+@dataclass
+class TestSuiteSetup(TestSuiteState):
+ """Test suite setup."""
+
+ logger_name: ClassVar[str] = "test_suite_setup"
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return f"test suite '{self.test_suite.name}' setup"
+
+ def next(self) -> State | None:
+ """Next state."""
+ self.test_suite.set_up_suite()
+ self.result.update_setup(Result.PASS)
+ return TestSuiteExecution(self.test_run, self.test_suite, self.result)
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update_setup(Result.ERROR, ex)
+ return TestSuiteTeardown(self.test_run, self.test_suite, self.result)
+
+
+@dataclass
+class TestSuiteExecution(TestSuiteState):
+ """Test suite execution."""
+
+ logger_name: ClassVar[str] = "test_suite"
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return f"test suite '{self.test_suite.name}' execution"
+
+ def next(self) -> State | None:
+ """Next state."""
+ try:
+ test_case = self.test_run.remaining_test_cases.popleft()
+ test_case_result = self.result.add_test_case(test_case.name)
+
+ if self.test_run.blocked:
+ test_case_result.update_setup(Result.BLOCK)
+ self.logger.warning(f"Test case '{test_case.name}' execution was BLOCKED.")
+ return TestSuiteExecution(self.test_run, self.test_suite, self.result)
+
+ try:
+ test_if_supported(test_case, self.test_run.supported_capabilities)
+ except SkippedTestException as e:
+ self.logger.info(f"Test case '{test_case.name}' execution SKIPPED with reason: {e}")
+ test_case_result.update_setup(Result.SKIP)
+ return self
+
+ return TestCaseSetup(
+ self.test_run, self.test_suite, self.result, test_case, test_case_result
+ )
+ except IndexError:
+ if self.test_run.blocked and self.result.setup_result.result is Result.BLOCK:
+ # Skip teardown if the test case AND suite were blocked.
+ return TestRunExecution(self.test_run, self.test_run.result)
+ else:
+ # No more test cases. We are done here.
+ return TestSuiteTeardown(self.test_run, self.test_suite, self.result)
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update_setup(Result.ERROR, ex)
+ return TestSuiteTeardown(self.test_run, self.test_suite, self.result)
+
+
+@dataclass
+class TestSuiteTeardown(TestSuiteState):
+ """Test suite teardown."""
+
+ logger_name: ClassVar[str] = "test_suite_teardown"
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return f"test suite '{self.test_suite.name}' teardown"
+
+ def next(self) -> State | None:
+ """Next state."""
+ self.test_suite.tear_down_suite()
+ self.test_run.ctx.sut_node.kill_cleanup_dpdk_apps()
+ self.result.update_teardown(Result.PASS)
+ return TestRunExecution(self.test_run, self.test_run.result)
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.logger.warning(
+ "The environment may have not been cleaned up correctly. "
+ "The subsequent tests could be affected!"
+ )
+ self.result.update_teardown(Result.ERROR, ex)
+ return TestRunExecution(self.test_run, self.test_run.result)
+
+ def after(self):
+ """Hook after state is processed."""
+ if self.result.get_errors() and self.test_suite.is_blocking:
+ self.logger.warning(
+ f"An error occurred within blocking {self.test_suite.name}. "
+ "The remaining test suites will be skipped."
+ )
+ self.test_run.blocked = True
+
+
+@dataclass
+class TestCaseState(State):
+ """A test case state template."""
+
+ test_run: TestRun
+ test_suite: TestSuite
+ test_suite_result: TestSuiteResult
+ test_case: type[TestCase]
+ result: TestCaseResult
+
+ def get_log_file_name(self) -> str | None:
+ """Get the log file name."""
+ return self.test_suite.name
+
+
+@dataclass
+class TestCaseSetup(TestCaseState):
+ """Test case setup."""
+
+ logger_name: ClassVar[str] = "test_case_setup"
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return f"test case '{self.test_case.name}' setup"
+
+ def next(self) -> State | None:
+ """Next state."""
+ self.test_suite.set_up_test_case()
+ self.result.update_setup(Result.PASS)
+ return TestCaseExecution(
+ self.test_run,
+ self.test_suite,
+ self.test_suite_result,
+ self.test_case,
+ self.result,
+ SETTINGS.re_run,
+ )
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update_setup(Result.ERROR, ex)
+ return TestCaseTeardown(
+ self.test_run, self.test_suite, self.test_suite_result, self.test_case, self.result
+ )
+
+
+@dataclass
+class TestCaseExecution(TestCaseState):
+ """Test case execution."""
+
+ logger_name: ClassVar[str] = "test_case"
+ reattempts_left: int
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return f"test case '{self.test_case.name}' execution"
+
+ def next(self) -> State | None:
+ """Next state."""
+ self.logger.info(f"Running test case '{self.test_case.name}'.")
+ run_test_case = MethodType(self.test_case, self.test_suite)
+ try:
+ run_test_case()
+ except TestCaseVerifyError as e:
+ self.logger.error(f"{self.description.capitalize()} FAILED: {e}")
+
+ self.reattempts_left -= 1
+ if self.reattempts_left > 0:
+ self.logger.info(f"Re-attempting. {self.reattempts_left} attempts left.")
+ return self
+
+ self.result.update(Result.FAIL, e)
+ else:
+ self.result.update(Result.PASS)
+ self.logger.info(f"{self.description.capitalize()} PASSED.")
+
+ return TestCaseTeardown(
+ self.test_run, self.test_suite, self.test_suite_result, self.test_case, self.result
+ )
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.result.update(Result.ERROR, ex)
+ return TestCaseTeardown(
+ self.test_run, self.test_suite, self.test_suite_result, self.test_case, self.result
+ )
+
+
+@dataclass
+class TestCaseTeardown(TestCaseState):
+ """Test case teardown."""
+
+ logger_name: ClassVar[str] = "test_case_teardown"
+
+ @property
+ def description(self) -> str:
+ """State description."""
+ return f"test case '{self.test_case.name}' teardown"
+
+ def next(self) -> State | None:
+ """Next state."""
+ self.test_suite.tear_down_test_case()
+ self.result.update_teardown(Result.PASS)
+ return TestSuiteExecution(self.test_run, self.test_suite, self.test_suite_result)
+
+ def on_error(self, ex: Exception) -> State | None:
+ """Next state on error."""
+ self.logger.warning(
+ "The environment may have not been cleaned up correctly. "
+ "The subsequent tests could be affected!"
+ )
+ self.result.update_teardown(Result.ERROR, ex)
+ return TestSuiteExecution(self.test_run, self.test_suite, self.test_suite_result)
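
The driver pattern used by ``TestRun.spin`` above can be sketched in isolation. The following is a minimal, self-contained illustration (the class names here are simplified stand-ins, not the framework's actual classes): each state's ``next()`` returns the following state, and the driver loops until it receives ``None``:

```python
from typing import Optional, Protocol


class State(Protocol):
    """Minimal analogue of the test run's State protocol."""

    def next(self) -> Optional["State"]: ...


class Setup:
    def __init__(self, trace: list[str]) -> None:
        self.trace = trace

    def next(self) -> Optional[State]:
        self.trace.append("setup")
        return Execution(self.trace)


class Execution:
    def __init__(self, trace: list[str]) -> None:
        self.trace = trace

    def next(self) -> Optional[State]:
        self.trace.append("execution")
        return Teardown(self.trace)


class Teardown:
    def __init__(self, trace: list[str]) -> None:
        self.trace = trace

    def next(self) -> Optional[State]:
        self.trace.append("teardown")
        return None  # No next state: the machine terminates.


def spin(state: Optional[State]) -> None:
    """Drive the machine until a state returns None, like TestRun.spin."""
    while state is not None:
        state = state.next()


trace: list[str] = []
spin(Setup(trace))
print(trace)  # ['setup', 'execution', 'teardown']
```

The real ``spin`` additionally wraps each transition in exception handling, routing failures through ``on_error`` to the matching teardown state.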
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index b9b527e40d..ae90997061 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -60,7 +60,7 @@ class TestSuite(TestProtocol):
By default, all test cases will be executed. A list of testcase names may be specified
in the YAML test run configuration file and in the :option:`--test-suite` command line argument
- or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
+ or in the :envvar:`DTS_TEST_SUITES` environment variable to filter which test cases to run.
The union of both lists will be used. Any unknown test cases from the latter lists
will be silently ignored.
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index e1215f9703..a1d6d9dd32 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2024 PANTHEON.tech s.r.o.
+# Copyright(c) 2025 Arm Limited
"""Testbed capabilities.
@@ -53,7 +54,7 @@ def test_scatter_mbuf_2048(self):
from typing_extensions import Self
-from framework.exception import ConfigurationError
+from framework.exception import ConfigurationError, SkippedTestException
from framework.logger import get_dts_logger
from framework.remote_session.testpmd_shell import (
NicCapability,
@@ -217,9 +218,7 @@ def get_supported_capabilities(
)
if cls.capabilities_to_check:
capabilities_to_check_map = cls._get_decorated_capabilities_map()
- with TestPmdShell(
- sut_node, privileged=True, disable_device_start=True
- ) as testpmd_shell:
+ with TestPmdShell() as testpmd_shell:
for (
conditional_capability_fn,
capabilities,
@@ -506,3 +505,20 @@ def get_supported_capabilities(
supported_capabilities.update(callback(sut_node, topology_config))
return supported_capabilities
+
+
+def test_if_supported(test: type[TestProtocol], supported_caps: set[Capability]) -> None:
+ """Test if the given test suite or test case is supported.
+
+ Args:
+ test: The test suite or case.
+ supported_caps: The capabilities that need to be checked against the test.
+
+ Raises:
+ SkippedTestException: If the test hasn't met the requirements.
+ """
+ unsupported_caps = test.required_capabilities - supported_caps
+ if unsupported_caps:
+ capability_str = "capabilities" if len(unsupported_caps) > 1 else "capability"
+ msg = f"Required {capability_str} '{unsupported_caps}' not found."
+ raise SkippedTestException(msg)
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 595836a664..290a3fbd74 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -37,9 +37,11 @@ class TGNode(Node):
must be a way to send traffic without that.
Attributes:
+ config: The traffic generator node configuration.
traffic_generator: The traffic generator running on the node.
"""
+ config: TGNodeConfiguration
traffic_generator: CapturingTrafficGenerator
def __init__(self, node_config: TGNodeConfiguration):
@@ -51,7 +53,6 @@ def __init__(self, node_config: TGNodeConfiguration):
node_config: The TG node's test run configuration.
"""
super().__init__(node_config)
- self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
self._logger.info(f"Created node: {self.name}")
def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
@@ -64,6 +65,7 @@ def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable
"""
super().set_up_test_run(test_run_config, ports)
self.main_session.bring_up_link(ports)
+ self.traffic_generator = create_traffic_generator(self, self.config.traffic_generator)
def tear_down_test_run(self, ports: Iterable[Port]) -> None:
"""Extend the test run teardown with the teardown of the traffic generator.
@@ -72,6 +74,7 @@ def tear_down_test_run(self, ports: Iterable[Port]) -> None:
ports: The ports to tear down for the test run.
"""
super().tear_down_test_run(ports)
+ self.traffic_generator.close()
def send_packets_and_capture(
self,
@@ -119,5 +122,4 @@ def close(self) -> None:
This extends the superclass method with TG cleanup.
"""
- self.traffic_generator.close()
super().close()
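
The tg_node.py hunk above ties the traffic generator's lifetime to the test run rather than to the node: it is created in ``set_up_test_run`` and closed in ``tear_down_test_run``. A minimal sketch of that lifecycle pattern (hypothetical classes, not the framework API):

```python
class TrafficGenerator:
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


class TGNode:
    """Creates the generator per test run instead of per node."""

    traffic_generator: TrafficGenerator

    def set_up_test_run(self) -> None:
        # Created here, so every test run gets a fresh generator.
        self.traffic_generator = TrafficGenerator()

    def tear_down_test_run(self) -> None:
        # Closed here, instead of in the node-wide close().
        self.traffic_generator.close()


node = TGNode()
node.set_up_test_run()
first = node.traffic_generator
node.tear_down_test_run()
node.set_up_test_run()  # A second run gets a new, open generator.
```

With the previous placement in ``__init__``/``close``, one generator instance spanned every test run on the node; the new placement guarantees a clean generator per run.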
--
2.43.0
* [PATCH v2 7/7] dts: remove node distinction
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
` (5 preceding siblings ...)
2025-02-12 16:45 ` [PATCH v2 6/7] dts: revamp runtime internals Luca Vizzarro
@ 2025-02-12 16:46 ` Luca Vizzarro
2025-02-12 16:47 ` [PATCH v2 0/7] dts: revamp framework Luca Vizzarro
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:46 UTC (permalink / raw)
To: dev
Cc: Nicholas Pratte, Dean Marx, Luca Vizzarro, Paul Szczepanek, Patrick Robb
Remove the distinction between SUT and TG nodes for configuration
purposes. As DPDK and the traffic generator belong to the runtime side
of testing, and don't belong to the testbed model, these are better
suited to be configured under the test runs.
Split the DPDK configuration into DPDKBuildConfiguration and
DPDKRuntimeConfiguration. The former stays the same but gains its own
self-contained implementation class. DPDKRuntimeConfiguration instead
represents the prior dpdk options. Through a new DPDKRuntimeEnvironment
class, all the DPDK-related runtime features are now also
self-contained. This also lays the groundwork for DPDK-based traffic
generators, as it makes it easier to handle their environment using the
pre-existing functionality.
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
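The traffic generator configuration moved by this patch keeps its tagged-union shape: a ``type`` field discriminates which concrete model applies (the patch uses pydantic's ``Field(discriminator="type")`` for this). A stdlib-only sketch of the same dispatch idea, with hypothetical names:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ScapyTrafficGeneratorConfig:
    type: str = "SCAPY"


# Maps the discriminator value to the concrete config class.
CONFIG_TYPES = {"SCAPY": ScapyTrafficGeneratorConfig}


def parse_traffic_generator(raw: dict) -> ScapyTrafficGeneratorConfig:
    """Pick the concrete config class based on the 'type' discriminator."""
    try:
        cls = CONFIG_TYPES[raw["type"]]
    except KeyError as e:
        raise ValueError(f"unknown traffic generator type: {raw.get('type')}") from e
    return cls(**raw)


cfg = parse_traffic_generator({"type": "SCAPY"})
print(cfg.type)  # SCAPY
```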
doc/api/dts/framework.remote_session.dpdk.rst | 8 +
doc/api/dts/framework.remote_session.rst | 1 +
dts/framework/config/__init__.py | 16 +-
dts/framework/config/node.py | 73 +--
dts/framework/config/test_run.py | 72 ++-
dts/framework/context.py | 11 +-
.../sut_node.py => remote_session/dpdk.py} | 444 +++++++++---------
dts/framework/remote_session/dpdk_shell.py | 12 +-
.../single_active_interactive_shell.py | 4 +-
dts/framework/runner.py | 16 +-
dts/framework/test_result.py | 2 +-
dts/framework/test_run.py | 23 +-
dts/framework/test_suite.py | 9 +-
dts/framework/testbed_model/capability.py | 24 +-
dts/framework/testbed_model/node.py | 87 ++--
dts/framework/testbed_model/tg_node.py | 125 -----
.../traffic_generator/__init__.py | 8 +-
.../testbed_model/traffic_generator/scapy.py | 12 +-
.../traffic_generator/traffic_generator.py | 9 +-
dts/tests/TestSuite_smoke_tests.py | 6 +-
dts/tests/TestSuite_softnic.py | 2 +-
21 files changed, 393 insertions(+), 571 deletions(-)
create mode 100644 doc/api/dts/framework.remote_session.dpdk.rst
rename dts/framework/{testbed_model/sut_node.py => remote_session/dpdk.py} (61%)
delete mode 100644 dts/framework/testbed_model/tg_node.py
diff --git a/doc/api/dts/framework.remote_session.dpdk.rst b/doc/api/dts/framework.remote_session.dpdk.rst
new file mode 100644
index 0000000000..830364b984
--- /dev/null
+++ b/doc/api/dts/framework.remote_session.dpdk.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+dpdk - DPDK Environments
+===========================================
+
+.. automodule:: framework.remote_session.dpdk
+ :members:
+ :show-inheritance:
diff --git a/doc/api/dts/framework.remote_session.rst b/doc/api/dts/framework.remote_session.rst
index dd6f8530d7..79d65e3444 100644
--- a/doc/api/dts/framework.remote_session.rst
+++ b/doc/api/dts/framework.remote_session.rst
@@ -15,6 +15,7 @@ remote\_session - Node Connections Package
framework.remote_session.ssh_session
framework.remote_session.interactive_remote_session
framework.remote_session.interactive_shell
+ framework.remote_session.dpdk
framework.remote_session.dpdk_shell
framework.remote_session.testpmd_shell
framework.remote_session.python_shell
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 273a5cc3a7..c42eacb748 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -37,16 +37,12 @@
from framework.exception import ConfigurationError
from .common import FrozenModel, ValidationContext
-from .node import (
- NodeConfigurationTypes,
- SutNodeConfiguration,
- TGNodeConfiguration,
-)
+from .node import NodeConfiguration
from .test_run import TestRunConfiguration
TestRunsConfig = Annotated[list[TestRunConfiguration], Field(min_length=1)]
-NodesConfig = Annotated[list[NodeConfigurationTypes], Field(min_length=1)]
+NodesConfig = Annotated[list[NodeConfiguration], Field(min_length=1)]
class Configuration(FrozenModel):
@@ -125,10 +121,6 @@ def validate_test_runs_against_nodes(self) -> Self:
f"Test run {test_run_no}.system_under_test_node "
f"({sut_node_name}) is not a valid node name."
)
- assert isinstance(sut_node, SutNodeConfiguration), (
- f"Test run {test_run_no}.system_under_test_node is a valid node name, "
- "but it is not a valid SUT node."
- )
tg_node_name = test_run.traffic_generator_node
tg_node = next((n for n in self.nodes if n.name == tg_node_name), None)
@@ -137,10 +129,6 @@ def validate_test_runs_against_nodes(self) -> Self:
f"Test run {test_run_no}.traffic_generator_name "
f"({tg_node_name}) is not a valid node name."
)
- assert isinstance(tg_node, TGNodeConfiguration), (
- f"Test run {test_run_no}.traffic_generator_name is a valid node name, "
- "but it is not a valid TG node."
- )
return self
diff --git a/dts/framework/config/node.py b/dts/framework/config/node.py
index 97e0285912..438a1bdc8f 100644
--- a/dts/framework/config/node.py
+++ b/dts/framework/config/node.py
@@ -9,8 +9,7 @@
The root model of a node configuration is :class:`NodeConfiguration`.
"""
-from enum import Enum, auto, unique
-from typing import Annotated, Literal
+from enum import auto, unique
from pydantic import Field, model_validator
from typing_extensions import Self
@@ -32,14 +31,6 @@ class OS(StrEnum):
windows = auto()
-@unique
-class TrafficGeneratorType(str, Enum):
- """The supported traffic generators."""
-
- #:
- SCAPY = "SCAPY"
-
-
class HugepageConfiguration(FrozenModel):
r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s."""
@@ -62,33 +53,6 @@ class PortConfig(FrozenModel):
os_driver: str = Field(examples=["i40e", "ice", "mlx5_core"])
-class TrafficGeneratorConfig(FrozenModel):
- """A protocol required to define traffic generator types."""
-
- #: The traffic generator type the child class is required to define to be distinguished among
- #: others.
- type: TrafficGeneratorType
-
-
-class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
- """Scapy traffic generator specific configuration."""
-
- type: Literal[TrafficGeneratorType.SCAPY]
-
-
-#: A union type discriminating traffic generators by the `type` field.
-TrafficGeneratorConfigTypes = Annotated[ScapyTrafficGeneratorConfig, Field(discriminator="type")]
-
-#: Comma-separated list of logical cores to use. An empty string or ```any``` means use all lcores.
-LogicalCores = Annotated[
- str,
- Field(
- examples=["1,2,3,4,5,18-22", "10-15", "any"],
- pattern=r"^(([0-9]+|([0-9]+-[0-9]+))(,([0-9]+|([0-9]+-[0-9]+)))*)?$|any",
- ),
-]
-
-
class NodeConfiguration(FrozenModel):
r"""The configuration of :class:`~framework.testbed_model.node.Node`\s."""
@@ -118,38 +82,3 @@ def verify_unique_port_names(self) -> Self:
)
used_port_names[port.name] = idx
return self
-
-
-class DPDKConfiguration(FrozenModel):
- """Configuration of the DPDK EAL parameters."""
-
- #: A comma delimited list of logical cores to use when running DPDK. ```any```, an empty
- #: string or omitting this field means use any core except for the first one. The first core
- #: will only be used if explicitly set.
- lcores: LogicalCores = ""
-
- #: The number of memory channels to use when running DPDK.
- memory_channels: int = 1
-
- @property
- def use_first_core(self) -> bool:
- """Returns :data:`True` if `lcores` explicitly selects the first core."""
- return "0" in self.lcores
-
-
-class SutNodeConfiguration(NodeConfiguration):
- """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration."""
-
- #: The runtime configuration for DPDK.
- dpdk_config: DPDKConfiguration
-
-
-class TGNodeConfiguration(NodeConfiguration):
- """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration."""
-
- #: The configuration of the traffic generator present on the TG node.
- traffic_generator: TrafficGeneratorConfigTypes
-
-
-#: Union type for all the node configuration types.
-NodeConfigurationTypes = TGNodeConfiguration | SutNodeConfiguration
diff --git a/dts/framework/config/test_run.py b/dts/framework/config/test_run.py
index eef01d0340..1b3045730d 100644
--- a/dts/framework/config/test_run.py
+++ b/dts/framework/config/test_run.py
@@ -13,10 +13,10 @@
import tarfile
from collections import deque
from collections.abc import Iterable
-from enum import auto, unique
+from enum import Enum, auto, unique
from functools import cached_property
from pathlib import Path, PurePath
-from typing import Any, Literal, NamedTuple
+from typing import Annotated, Any, Literal, NamedTuple
from pydantic import Field, field_validator, model_validator
from typing_extensions import TYPE_CHECKING, Self
@@ -361,6 +361,68 @@ def verify_distinct_nodes(self) -> Self:
return self
+@unique
+class TrafficGeneratorType(str, Enum):
+ """The supported traffic generators."""
+
+ #:
+ SCAPY = "SCAPY"
+
+
+class TrafficGeneratorConfig(FrozenModel):
+ """A protocol required to define traffic generator types."""
+
+ #: The traffic generator type the child class is required to define to be distinguished among
+ #: others.
+ type: TrafficGeneratorType
+
+
+class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+ """Scapy traffic generator specific configuration."""
+
+ type: Literal[TrafficGeneratorType.SCAPY]
+
+
+#: A union type discriminating traffic generators by the `type` field.
+TrafficGeneratorConfigTypes = Annotated[ScapyTrafficGeneratorConfig, Field(discriminator="type")]
+
+#: Comma-separated list of logical cores to use. An empty string or ``any`` means use all lcores.
+LogicalCores = Annotated[
+ str,
+ Field(
+ examples=["1,2,3,4,5,18-22", "10-15", "any"],
+ pattern=r"^(any|[0-9]+(-[0-9]+)?(,[0-9]+(-[0-9]+)?)*)?$",
+ ),
+]
+
+
+class DPDKRuntimeConfiguration(FrozenModel):
+ """Configuration of the DPDK EAL parameters."""
+
+ #: A comma-delimited list of logical cores to use when running DPDK. ``any``, an empty
+ #: string or omitting this field means use any core except for the first one. The first core
+ #: will only be used if explicitly set.
+ lcores: LogicalCores = ""
+
+ #: The number of memory channels to use when running DPDK.
+ memory_channels: int = 1
+
+ #: The names of virtual devices to test.
+ vdevs: list[str] = Field(default_factory=list)
+
+ @property
+ def use_first_core(self) -> bool:
+ """Returns :data:`True` if `lcores` explicitly selects the first core."""
+ return "0" in self.lcores
+
+
+class DPDKConfiguration(DPDKRuntimeConfiguration):
+ """The DPDK configuration needed to test."""
+
+ #: The DPDK build configuration used to test.
+ build: DPDKBuildConfiguration
+
+
class TestRunConfiguration(FrozenModel):
"""The configuration of a test run.
@@ -369,7 +431,9 @@ class TestRunConfiguration(FrozenModel):
"""
#: The DPDK configuration used to test.
- dpdk_config: DPDKBuildConfiguration = Field(alias="dpdk_build")
+ dpdk: DPDKConfiguration
+ #: The traffic generator configuration used to test.
+ traffic_generator: TrafficGeneratorConfigTypes
#: Whether to run performance tests.
perf: bool
#: Whether to run functional tests.
@@ -382,8 +446,6 @@ class TestRunConfiguration(FrozenModel):
system_under_test_node: str
#: The TG node name to use in this test run.
traffic_generator_node: str
- #: The names of virtual devices to test.
- vdevs: list[str] = Field(default_factory=list)
#: The seed to use for pseudo-random generation.
random_seed: int | None = None
#: The port links between the specified nodes to use.
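As a side note on the `lcores`/`LogicalCores` semantics above (comma-separated core IDs and ranges, with ``any`` or an empty string selecting everything), the intended expansion can be sketched with a small stdlib-only parser; the helper name and signature below are illustrative, not part of DTS:

```python
def parse_lcores(spec: str, available: list[int]) -> list[int]:
    """Expand an lcore spec like "1,2,18-22" into a sorted list of core IDs.

    An empty string or "any" selects every available core, mirroring the
    semantics documented for the `lcores` field. Illustrative sketch only.
    """
    if spec in ("", "any"):
        return list(available)
    cores: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            # A range such as "18-22" is inclusive on both ends.
            start, end = part.split("-")
            cores.update(range(int(start), int(end) + 1))
        else:
            cores.add(int(part))
    # Keep only cores that actually exist on the node.
    return sorted(c for c in cores if c in set(available))
```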
diff --git a/dts/framework/context.py b/dts/framework/context.py
index 03eaf63b88..8adffff57f 100644
--- a/dts/framework/context.py
+++ b/dts/framework/context.py
@@ -10,11 +10,12 @@
from framework.exception import InternalError
from framework.settings import SETTINGS
from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
+from framework.testbed_model.node import Node
from framework.testbed_model.topology import Topology
if TYPE_CHECKING:
- from framework.testbed_model.sut_node import SutNode
- from framework.testbed_model.tg_node import TGNode
+ from framework.remote_session.dpdk import DPDKRuntimeEnvironment
+ from framework.testbed_model.traffic_generator.traffic_generator import TrafficGenerator
P = ParamSpec("P")
@@ -62,9 +63,11 @@ def reset(self) -> None:
class Context:
"""Runtime context."""
- sut_node: "SutNode"
- tg_node: "TGNode"
+ sut_node: Node
+ tg_node: Node
topology: Topology
+ dpdk: "DPDKRuntimeEnvironment"
+ tg: "TrafficGenerator"
local: LocalContext = field(default_factory=LocalContext)
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/remote_session/dpdk.py
similarity index 61%
rename from dts/framework/testbed_model/sut_node.py
rename to dts/framework/remote_session/dpdk.py
index 9007d89b1c..476d6915d3 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/remote_session/dpdk.py
@@ -1,47 +1,40 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2010-2014 Intel Corporation
+# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
-# Copyright(c) 2024 Arm Limited
+# Copyright(c) 2025 Arm Limited
-"""System under test (DPDK + hardware) node.
-
-A system under test (SUT) is the combination of DPDK
-and the hardware we're testing with DPDK (NICs, crypto and other devices).
-An SUT node is where this SUT runs.
-"""
+"""DPDK environment."""
import os
import time
from collections.abc import Iterable
from dataclasses import dataclass
+from functools import cached_property
from pathlib import Path, PurePath
+from typing import Final
-from framework.config.node import (
- SutNodeConfiguration,
-)
from framework.config.test_run import (
DPDKBuildConfiguration,
DPDKBuildOptionsConfiguration,
DPDKPrecompiledBuildConfiguration,
+ DPDKRuntimeConfiguration,
DPDKUncompiledBuildConfiguration,
LocalDPDKTarballLocation,
LocalDPDKTreeLocation,
RemoteDPDKTarballLocation,
RemoteDPDKTreeLocation,
- TestRunConfiguration,
)
from framework.exception import ConfigurationError, RemoteFileNotFoundError
+from framework.logger import DTSLogger, get_dts_logger
from framework.params.eal import EalParams
from framework.remote_session.remote_session import CommandResult
+from framework.testbed_model.cpu import LogicalCore, LogicalCoreCount, LogicalCoreList, lcore_filter
+from framework.testbed_model.node import Node
+from framework.testbed_model.os_session import OSSession
from framework.testbed_model.port import Port
+from framework.testbed_model.virtual_device import VirtualDevice
from framework.utils import MesonArgs, TarCompressionFormat
-from .cpu import LogicalCore, LogicalCoreList
-from .node import Node
-from .os_session import OSSession, OSSessionInfo
-from .virtual_device import VirtualDevice
-
@dataclass(slots=True, frozen=True)
class DPDKBuildInfo:
@@ -56,177 +49,36 @@ class DPDKBuildInfo:
compiler_version: str | None
-class SutNode(Node):
- """The system under test node.
-
- The SUT node extends :class:`Node` with DPDK specific features:
-
- * Managing DPDK source tree on the remote SUT,
- * Building the DPDK from source or using a pre-built version,
- * Gathering of DPDK build info,
- * The running of DPDK apps, interactively or one-time execution,
- * DPDK apps cleanup.
+class DPDKBuildEnvironment:
+ """Class handling a DPDK build environment."""
- Building DPDK from source uses `build` configuration inside `dpdk_build` of configuration.
-
- Attributes:
- config: The SUT node configuration.
- virtual_devices: The virtual devices used on the node.
- """
-
- config: SutNodeConfiguration
- virtual_devices: list[VirtualDevice]
- dpdk_prefix_list: list[str]
- dpdk_timestamp: str
- _env_vars: dict
+ config: DPDKBuildConfiguration
+ _node: Node
+ _session: OSSession
+ _logger: DTSLogger
_remote_tmp_dir: PurePath
- __remote_dpdk_tree_path: str | PurePath | None
+ _remote_dpdk_tree_path: str | PurePath | None
_remote_dpdk_build_dir: PurePath | None
_app_compile_timeout: float
- _dpdk_kill_session: OSSession | None
- _dpdk_version: str | None
- _node_info: OSSessionInfo | None
- _compiler_version: str | None
- _path_to_devbind_script: PurePath | None
- def __init__(self, node_config: SutNodeConfiguration):
- """Extend the constructor with SUT node specifics.
+ compiler_version: str | None
- Args:
- node_config: The SUT node's test run configuration.
- """
- super().__init__(node_config)
- self.lcores = self.filter_lcores(LogicalCoreList(self.config.dpdk_config.lcores))
- if LogicalCore(lcore=0, core=0, socket=0, node=0) in self.lcores:
- self._logger.info(
- """
- WARNING: First core being used;
- using the first core is considered risky and should only
- be done by advanced users.
- """
- )
- else:
- self._logger.info("Not using first core")
- self.virtual_devices = []
- self.dpdk_prefix_list = []
- self._env_vars = {}
- self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
- self.__remote_dpdk_tree_path = None
+ def __init__(self, config: DPDKBuildConfiguration, node: Node):
+ """DPDK build environment class constructor."""
+ self.config = config
+ self._node = node
+ self._logger = get_dts_logger()
+ self._session = node.create_session("dpdk_build")
+
+ self._remote_tmp_dir = node.main_session.get_remote_tmp_dir()
+ self._remote_dpdk_tree_path = None
self._remote_dpdk_build_dir = None
self._app_compile_timeout = 90
- self._dpdk_kill_session = None
- self.dpdk_timestamp = (
- f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
- )
- self._dpdk_version = None
- self._node_info = None
- self._compiler_version = None
- self._path_to_devbind_script = None
- self._ports_bound_to_dpdk = False
- self._logger.info(f"Created node: {self.name}")
-
- @property
- def _remote_dpdk_tree_path(self) -> str | PurePath:
- """The remote DPDK tree path."""
- if self.__remote_dpdk_tree_path:
- return self.__remote_dpdk_tree_path
-
- self._logger.warning(
- "Failed to get remote dpdk tree path because we don't know the "
- "location on the SUT node."
- )
- return ""
-
- @property
- def remote_dpdk_build_dir(self) -> str | PurePath:
- """The remote DPDK build dir path."""
- if self._remote_dpdk_build_dir:
- return self._remote_dpdk_build_dir
-
- self._logger.warning(
- "Failed to get remote dpdk build dir because we don't know the "
- "location on the SUT node."
- )
- return ""
-
- @property
- def dpdk_version(self) -> str | None:
- """Last built DPDK version."""
- if self._dpdk_version is None:
- self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
- return self._dpdk_version
-
- @property
- def node_info(self) -> OSSessionInfo:
- """Additional node information."""
- if self._node_info is None:
- self._node_info = self.main_session.get_node_info()
- return self._node_info
-
- @property
- def compiler_version(self) -> str | None:
- """The node's compiler version."""
- if self._compiler_version is None:
- self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
- return self._compiler_version
-
- @compiler_version.setter
- def compiler_version(self, value: str) -> None:
- """Set the `compiler_version` used on the SUT node.
-
- Args:
- value: The node's compiler version.
- """
- self._compiler_version = value
-
- @property
- def path_to_devbind_script(self) -> PurePath | str:
- """The path to the dpdk-devbind.py script on the node."""
- if self._path_to_devbind_script is None:
- self._path_to_devbind_script = self.main_session.join_remote_path(
- self._remote_dpdk_tree_path, "usertools", "dpdk-devbind.py"
- )
- return self._path_to_devbind_script
-
- def get_dpdk_build_info(self) -> DPDKBuildInfo:
- """Get additional DPDK build information.
-
- Returns:
- The DPDK build information,
- """
- return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
-
- def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
- """Extend the test run setup with vdev config and DPDK build set up.
-
- This method extends the setup process by configuring virtual devices and preparing the DPDK
- environment based on the provided configuration.
-
- Args:
- test_run_config: A test run configuration according to which
- the setup steps will be taken.
- ports: The ports to set up for the test run.
- """
- super().set_up_test_run(test_run_config, ports)
- for vdev in test_run_config.vdevs:
- self.virtual_devices.append(VirtualDevice(vdev))
- self._set_up_dpdk(test_run_config.dpdk_config, ports)
-
- def tear_down_test_run(self, ports: Iterable[Port]) -> None:
- """Extend the test run teardown with virtual device teardown and DPDK teardown.
-
- Args:
- ports: The ports to tear down for the test run.
- """
- super().tear_down_test_run(ports)
- self.virtual_devices = []
- self._tear_down_dpdk(ports)
+ self.compiler_version = None
- def _set_up_dpdk(
- self, dpdk_build_config: DPDKBuildConfiguration, ports: Iterable[Port]
- ) -> None:
- """Set up DPDK the SUT node and bind ports.
+ def setup(self) -> None:
+ """Set up the DPDK build on the target node.
DPDK setup includes setting all internals needed for the build, the copying of DPDK
sources and then building DPDK or using the exist ones from the `dpdk_location`. The drivers
@@ -236,7 +88,7 @@ def _set_up_dpdk(
dpdk_build_config: A DPDK build configuration to test.
ports: The ports to use for DPDK.
"""
- match dpdk_build_config.dpdk_location:
+ match self.config.dpdk_location:
case RemoteDPDKTreeLocation(dpdk_tree=dpdk_tree):
self._set_remote_dpdk_tree_path(dpdk_tree)
case LocalDPDKTreeLocation(dpdk_tree=dpdk_tree):
@@ -248,24 +100,13 @@ def _set_up_dpdk(
remote_tarball = self._copy_dpdk_tarball_to_remote(tarball)
self._prepare_and_extract_dpdk_tarball(remote_tarball)
- match dpdk_build_config:
+ match self.config:
case DPDKPrecompiledBuildConfiguration(precompiled_build_dir=build_dir):
self._set_remote_dpdk_build_dir(build_dir)
case DPDKUncompiledBuildConfiguration(build_options=build_options):
self._configure_dpdk_build(build_options)
self._build_dpdk()
- self.bind_ports_to_driver(ports)
-
- def _tear_down_dpdk(self, ports: Iterable[Port]) -> None:
- """Reset DPDK variables and bind port driver to the OS driver."""
- self._env_vars = {}
- self.__remote_dpdk_tree_path = None
- self._remote_dpdk_build_dir = None
- self._dpdk_version = None
- self.compiler_version = None
- self.bind_ports_to_driver(ports, for_dpdk=False)
-
def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
"""Set the path to the remote DPDK source tree based on the provided DPDK location.
@@ -280,14 +121,14 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
is not found.
ConfigurationError: If the remote DPDK source tree specified is not a valid directory.
"""
- if not self.main_session.remote_path_exists(dpdk_tree):
+ if not self._session.remote_path_exists(dpdk_tree):
raise RemoteFileNotFoundError(
f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
)
- if not self.main_session.is_remote_dir(dpdk_tree):
+ if not self._session.is_remote_dir(dpdk_tree):
raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
- self.__remote_dpdk_tree_path = dpdk_tree
+ self._remote_dpdk_tree_path = dpdk_tree
def _copy_dpdk_tree(self, dpdk_tree_path: Path) -> None:
"""Copy the DPDK source tree to the SUT.
@@ -298,14 +139,14 @@ def _copy_dpdk_tree(self, dpdk_tree_path: Path) -> None:
self._logger.info(
f"Copying DPDK source tree to SUT: '{dpdk_tree_path}' into '{self._remote_tmp_dir}'."
)
- self.main_session.copy_dir_to(
+ self._session.copy_dir_to(
dpdk_tree_path,
self._remote_tmp_dir,
exclude=[".git", "*.o"],
compress_format=TarCompressionFormat.gzip,
)
- self.__remote_dpdk_tree_path = self.main_session.join_remote_path(
+ self._remote_dpdk_tree_path = self._session.join_remote_path(
self._remote_tmp_dir, PurePath(dpdk_tree_path).name
)
@@ -320,9 +161,9 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
not found.
ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
"""
- if not self.main_session.remote_path_exists(dpdk_tarball):
+ if not self._session.remote_path_exists(dpdk_tarball):
raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
- if not self.main_session.is_remote_tarfile(dpdk_tarball):
+ if not self._session.is_remote_tarfile(dpdk_tarball):
raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
@@ -337,8 +178,8 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
self._logger.info(
f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
)
- self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
- return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
+ self._session.copy_to(dpdk_tarball, self._remote_tmp_dir)
+ return self._session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
"""Prepare the remote DPDK tree path and extract the tarball.
@@ -365,19 +206,19 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
return remote_tarball_path.with_suffix("")
- tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
- self.__remote_dpdk_tree_path = self.main_session.join_remote_path(
+ tarball_top_dir = self._session.get_tarball_top_dir(remote_tarball_path)
+ self._remote_dpdk_tree_path = self._session.join_remote_path(
remote_tarball_path.parent,
tarball_top_dir or remove_tarball_suffix(remote_tarball_path),
)
self._logger.info(
"Extracting DPDK tarball on SUT: "
- f"'{remote_tarball_path}' into '{self._remote_dpdk_tree_path}'."
+ f"'{remote_tarball_path}' into '{self.remote_dpdk_tree_path}'."
)
- self.main_session.extract_remote_tarball(
+ self._session.extract_remote_tarball(
remote_tarball_path,
- self._remote_dpdk_tree_path,
+ self.remote_dpdk_tree_path,
)
def _set_remote_dpdk_build_dir(self, build_dir: str):
@@ -395,10 +236,10 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
RemoteFileNotFoundError: If the `build_dir` is expected but does not exist on the SUT
node.
"""
- remote_dpdk_build_dir = self.main_session.join_remote_path(
- self._remote_dpdk_tree_path, build_dir
+ remote_dpdk_build_dir = self._session.join_remote_path(
+ self.remote_dpdk_tree_path, build_dir
)
- if not self.main_session.remote_path_exists(remote_dpdk_build_dir):
+ if not self._session.remote_path_exists(remote_dpdk_build_dir):
raise RemoteFileNotFoundError(
f"Remote DPDK build dir '{remote_dpdk_build_dir}' not found in SUT node."
)
@@ -415,20 +256,18 @@ def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration
dpdk_build_config: A DPDK build configuration to test.
"""
self._env_vars = {}
- self._env_vars.update(self.main_session.get_dpdk_build_env_vars(self.arch))
+ self._env_vars.update(self._session.get_dpdk_build_env_vars(self._node.arch))
if compiler_wrapper := dpdk_build_config.compiler_wrapper:
self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
else:
self._env_vars["CC"] = dpdk_build_config.compiler.name
- self.compiler_version = self.main_session.get_compiler_version(
- dpdk_build_config.compiler.name
- )
+ self.compiler_version = self._session.get_compiler_version(dpdk_build_config.compiler.name)
- build_dir_name = f"{self.arch}-{self.config.os}-{dpdk_build_config.compiler}"
+ build_dir_name = f"{self._node.arch}-{self._node.config.os}-{dpdk_build_config.compiler}"
- self._remote_dpdk_build_dir = self.main_session.join_remote_path(
- self._remote_dpdk_tree_path, build_dir_name
+ self._remote_dpdk_build_dir = self._session.join_remote_path(
+ self.remote_dpdk_tree_path, build_dir_name
)
def _build_dpdk(self) -> None:
@@ -437,10 +276,10 @@ def _build_dpdk(self) -> None:
Uses the already configured DPDK build configuration. Assumes that the
`_remote_dpdk_tree_path` has already been set on the SUT node.
"""
- self.main_session.build_dpdk(
+ self._session.build_dpdk(
self._env_vars,
MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
- self._remote_dpdk_tree_path,
+ self.remote_dpdk_tree_path,
self.remote_dpdk_build_dir,
)
@@ -459,31 +298,120 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
The directory path of the built app. If building all apps, return
the path to the examples directory (where all apps reside).
"""
- self.main_session.build_dpdk(
+ self._session.build_dpdk(
self._env_vars,
MesonArgs(examples=app_name, **meson_dpdk_args), # type: ignore [arg-type]
# ^^ https://github.com/python/mypy/issues/11583
- self._remote_dpdk_tree_path,
+ self.remote_dpdk_tree_path,
self.remote_dpdk_build_dir,
rebuild=True,
timeout=self._app_compile_timeout,
)
if app_name == "all":
- return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
- return self.main_session.join_remote_path(
+ return self._session.join_remote_path(self.remote_dpdk_build_dir, "examples")
+ return self._session.join_remote_path(
self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
)
- def kill_cleanup_dpdk_apps(self) -> None:
- """Kill all dpdk applications on the SUT, then clean up hugepages."""
- if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
- # we can use the session if it exists and responds
- self._dpdk_kill_session.kill_cleanup_dpdk_apps(self.dpdk_prefix_list)
+ @property
+ def remote_dpdk_tree_path(self) -> str | PurePath:
+ """The remote DPDK tree path."""
+ if self._remote_dpdk_tree_path:
+ return self._remote_dpdk_tree_path
+
+ self._logger.warning(
+ "Failed to get remote dpdk tree path because we don't know the "
+ "location on the SUT node."
+ )
+ return ""
+
+ @property
+ def remote_dpdk_build_dir(self) -> str | PurePath:
+ """The remote DPDK build dir path."""
+ if self._remote_dpdk_build_dir:
+ return self._remote_dpdk_build_dir
+
+ self._logger.warning(
+ "Failed to get remote dpdk build dir because we don't know the "
+ "location on the SUT node."
+ )
+ return ""
+
+ @cached_property
+ def dpdk_version(self) -> str | None:
+ """Last built DPDK version."""
+ return self._session.get_dpdk_version(self.remote_dpdk_tree_path)
+
+ def get_dpdk_build_info(self) -> DPDKBuildInfo:
+ """Get additional DPDK build information.
+
+ Returns:
+ The DPDK build information.
+ """
+ return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
+
+
+class DPDKRuntimeEnvironment:
+ """Class handling a DPDK runtime environment."""
+
+ config: Final[DPDKRuntimeConfiguration]
+ build: Final[DPDKBuildEnvironment]
+ _node: Final[Node]
+ _logger: Final[DTSLogger]
+
+ timestamp: Final[str]
+ _virtual_devices: list[VirtualDevice]
+ _lcores: list[LogicalCore]
+
+ _env_vars: dict
+ _kill_session: OSSession | None
+ prefix_list: list[str]
+
+ def __init__(
+ self,
+ config: DPDKRuntimeConfiguration,
+ node: Node,
+ build_env: DPDKBuildEnvironment,
+ ):
+ """DPDK environment constructor.
+
+ Args:
+ config: The DPDK runtime configuration.
+ node: The target node to manage a DPDK environment.
+ build_env: The DPDK build environment.
+ """
+ self.config = config
+ self.build = build_env
+ self._node = node
+ self._logger = get_dts_logger()
+
+ self.timestamp = f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
+ self._virtual_devices = [VirtualDevice(vdev) for vdev in config.vdevs]
+
+ self._lcores = node.lcores
+ self._lcores = self.filter_lcores(LogicalCoreList(self.config.lcores))
+ if LogicalCore(lcore=0, core=0, socket=0, node=0) in self._lcores:
+ self._logger.warning(
+ "First core being used; "
+ "using the first core is considered risky and should only be done by advanced users."
+ )
else:
- # otherwise, we need to (re)create it
- self._dpdk_kill_session = self.create_session("dpdk_kill")
- self.dpdk_prefix_list = []
+ self._logger.info("Not using first core")
+
+ self.prefix_list = []
+ self._env_vars = {}
+ self._ports_bound_to_dpdk = False
+ self._kill_session = None
+
+ def setup(self, ports: Iterable[Port]) -> None:
+ """Set up the DPDK runtime on the target node."""
+ self.build.setup()
+ self.bind_ports_to_driver(ports)
+
+ def teardown(self, ports: Iterable[Port]) -> None:
+ """Reset DPDK variables and bind port driver to the OS driver."""
+ self.bind_ports_to_driver(ports, for_dpdk=False)
def run_dpdk_app(
self, app_path: PurePath, eal_params: EalParams, timeout: float = 30
@@ -501,7 +429,7 @@ def run_dpdk_app(
Returns:
The result of the DPDK app execution.
"""
- return self.main_session.send_command(
+ return self._node.main_session.send_command(
f"{app_path} {eal_params}", timeout, privileged=True, verify=True
)
@@ -518,9 +446,59 @@ def bind_ports_to_driver(self, ports: Iterable[Port], for_dpdk: bool = True) ->
continue
driver = port.config.os_driver_for_dpdk if for_dpdk else port.config.os_driver
- self.main_session.send_command(
- f"{self.path_to_devbind_script} -b {driver} --force {port.pci}",
+ self._node.main_session.send_command(
+ f"{self.devbind_script_path} -b {driver} --force {port.pci}",
privileged=True,
verify=True,
)
port.bound_for_dpdk = for_dpdk
+
+ @cached_property
+ def devbind_script_path(self) -> PurePath:
+ """The path to the dpdk-devbind.py script on the node."""
+ return self._node.main_session.join_remote_path(
+ self.build.remote_dpdk_tree_path, "usertools", "dpdk-devbind.py"
+ )
+
+ def filter_lcores(
+ self,
+ filter_specifier: LogicalCoreCount | LogicalCoreList,
+ ascending: bool = True,
+ ) -> list[LogicalCore]:
+ """Filter the node's logical cores that DTS can use.
+
+ Logical cores that DTS can use are the ones that are present on the node, but filtered
+ according to the test run configuration. The `filter_specifier` will filter cores from
+ those logical cores.
+
+ Args:
+ filter_specifier: Two different filters can be used, one that specifies the number
+ of logical cores per core, cores per socket and the number of sockets,
+ and another one that specifies a logical core list.
+ ascending: If :data:`True`, use cores with the lowest numerical id first and continue
+ in ascending order. If :data:`False`, start with the highest id and continue
+ in descending order. This ordering affects which sockets to consider first as well.
+
+ Returns:
+ The filtered logical cores.
+ """
+ self._logger.debug(f"Filtering {filter_specifier} from {self._lcores}.")
+ return lcore_filter(
+ self._lcores,
+ filter_specifier,
+ ascending,
+ ).filter()
+
+ def kill_cleanup_dpdk_apps(self) -> None:
+ """Kill all dpdk applications on the SUT, then clean up hugepages."""
+ if self._kill_session and self._kill_session.is_alive():
+ # we can use the session if it exists and responds
+ self._kill_session.kill_cleanup_dpdk_apps(self.prefix_list)
+ else:
+ # otherwise, we need to (re)create it
+ self._kill_session = self._node.create_session("dpdk_kill")
+ self.prefix_list = []
+
+ def get_virtual_devices(self) -> Iterable[VirtualDevice]:
+ """The available DPDK virtual devices."""
+ return iter(self._virtual_devices)
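The per-run `timestamp` that `DPDKRuntimeEnvironment` builds above (`"<pid>_<YYYYmmddHHMMSS>"`, later appended to file prefixes to keep them unique across runs) can be reproduced standalone; the function name here is illustrative:

```python
import os
import time


def make_dpdk_timestamp() -> str:
    """Build a "<pid>_<YYYYmmddHHMMSS>" timestamp, as the constructor does.

    The PID plus a second-resolution local time stamp is enough to keep
    DPDK file prefixes from colliding between concurrent or repeated runs.
    """
    return f"{os.getpid()}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
```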
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index b55deb7fa0..fc43448e06 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -15,7 +15,6 @@
SingleActiveInteractiveShell,
)
from framework.testbed_model.cpu import LogicalCoreList
-from framework.testbed_model.sut_node import SutNode
def compute_eal_params(
@@ -35,15 +34,15 @@ def compute_eal_params(
if params.lcore_list is None:
params.lcore_list = LogicalCoreList(
- ctx.sut_node.filter_lcores(ctx.local.lcore_filter_specifier, ctx.local.ascending_cores)
+ ctx.dpdk.filter_lcores(ctx.local.lcore_filter_specifier, ctx.local.ascending_cores)
)
prefix = params.prefix
if ctx.local.append_prefix_timestamp:
- prefix = f"{prefix}_{ctx.sut_node.dpdk_timestamp}"
+ prefix = f"{prefix}_{ctx.dpdk.timestamp}"
prefix = ctx.sut_node.main_session.get_dpdk_file_prefix(prefix)
if prefix:
- ctx.sut_node.dpdk_prefix_list.append(prefix)
+ ctx.dpdk.prefix_list.append(prefix)
params.prefix = prefix
if params.allowed_ports is None:
@@ -60,7 +59,6 @@ class DPDKShell(SingleActiveInteractiveShell, ABC):
supplied app parameters.
"""
- _node: SutNode
_app_params: EalParams
def __init__(
@@ -80,4 +78,6 @@ def _update_real_path(self, path: PurePath) -> None:
Adds the remote DPDK build directory to the path.
"""
- super()._update_real_path(PurePath(self._node.remote_dpdk_build_dir).joinpath(path))
+ super()._update_real_path(
+ PurePath(get_ctx().dpdk.build.remote_dpdk_build_dir).joinpath(path)
+ )
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index 2eec2f698a..c1369ef77e 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -27,7 +27,6 @@
from paramiko import Channel, channel
from typing_extensions import Self
-from framework.context import get_ctx
from framework.exception import (
InteractiveCommandExecutionError,
InteractiveSSHSessionDeadError,
@@ -35,6 +34,7 @@
)
from framework.logger import DTSLogger, get_dts_logger
from framework.params import Params
+from framework.settings import SETTINGS
from framework.testbed_model.node import Node
from framework.utils import MultiInheritanceBaseClass
@@ -114,7 +114,7 @@ def __init__(
self._logger = get_dts_logger(f"{node.name}.{name}")
self._app_params = app_params
self._privileged = privileged
- self._timeout = get_ctx().local.timeout
+ self._timeout = SETTINGS.timeout
# Ensure path is properly formatted for the host
self._update_real_path(self.path)
super().__init__(**kwargs)
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 90aeb63cfb..801709a2aa 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -15,17 +15,11 @@
from framework.config.common import ValidationContext
from framework.test_run import TestRun
from framework.testbed_model.node import Node
-from framework.testbed_model.sut_node import SutNode
-from framework.testbed_model.tg_node import TGNode
from .config import (
Configuration,
load_config,
)
-from .config.node import (
- SutNodeConfiguration,
- TGNodeConfiguration,
-)
from .logger import DTSLogger, get_dts_logger
from .settings import SETTINGS
from .test_result import (
@@ -63,15 +57,7 @@ def run(self) -> None:
self._result.update_setup(Result.PASS)
for node_config in self._configuration.nodes:
- node: Node
-
- match node_config:
- case SutNodeConfiguration():
- node = SutNode(node_config)
- case TGNodeConfiguration():
- node = TGNode(node_config)
-
- nodes.append(node)
+ nodes.append(Node(node_config))
# for all test run sections
for test_run_config in self._configuration.test_runs:
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index a59bac71bb..7f576022c7 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -32,9 +32,9 @@
from .config.test_run import TestRunConfiguration
from .exception import DTSError, ErrorSeverity
from .logger import DTSLogger
+from .remote_session.dpdk import DPDKBuildInfo
from .testbed_model.os_session import OSSessionInfo
from .testbed_model.port import Port
-from .testbed_model.sut_node import DPDKBuildInfo
class Result(Enum):
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
index 811798f57f..6801bf87fd 100644
--- a/dts/framework/test_run.py
+++ b/dts/framework/test_run.py
@@ -80,7 +80,7 @@
from functools import cached_property
from pathlib import Path
from types import MethodType
-from typing import ClassVar, Protocol, Union, cast
+from typing import ClassVar, Protocol, Union
from framework.config.test_run import TestRunConfiguration
from framework.context import Context, init_ctx
@@ -90,6 +90,7 @@
TestCaseVerifyError,
)
from framework.logger import DTSLogger, get_dts_logger
+from framework.remote_session.dpdk import DPDKBuildEnvironment, DPDKRuntimeEnvironment
from framework.settings import SETTINGS
from framework.test_result import BaseResult, Result, TestCaseResult, TestRunResult, TestSuiteResult
from framework.test_suite import TestCase, TestSuite
@@ -99,9 +100,8 @@
test_if_supported,
)
from framework.testbed_model.node import Node
-from framework.testbed_model.sut_node import SutNode
-from framework.testbed_model.tg_node import TGNode
from framework.testbed_model.topology import PortLink, Topology
+from framework.testbed_model.traffic_generator import create_traffic_generator
TestScenario = tuple[type[TestSuite], deque[type[TestCase]]]
@@ -163,17 +163,18 @@ def __init__(self, config: TestRunConfiguration, nodes: Iterable[Node], result:
self.logger = get_dts_logger()
sut_node = next(n for n in nodes if n.name == config.system_under_test_node)
- sut_node = cast(SutNode, sut_node) # Config validation must render this valid.
-
tg_node = next(n for n in nodes if n.name == config.traffic_generator_node)
- tg_node = cast(TGNode, tg_node) # Config validation must render this valid.
topology = Topology.from_port_links(
PortLink(sut_node.ports_by_name[link.sut_port], tg_node.ports_by_name[link.tg_port])
for link in self.config.port_topology
)
- self.ctx = Context(sut_node, tg_node, topology)
+ dpdk_build_env = DPDKBuildEnvironment(config.dpdk.build, sut_node)
+ dpdk_runtime_env = DPDKRuntimeEnvironment(config.dpdk, sut_node, dpdk_build_env)
+ traffic_generator = create_traffic_generator(config.traffic_generator, tg_node)
+
+ self.ctx = Context(sut_node, tg_node, topology, dpdk_runtime_env, traffic_generator)
self.result = result
self.selected_tests = list(self.config.filter_tests())
self.blocked = False
@@ -307,11 +308,11 @@ def next(self) -> State | None:
test_run.init_random_seed()
test_run.remaining_tests = deque(test_run.selected_tests)
- test_run.ctx.sut_node.set_up_test_run(test_run.config, test_run.ctx.topology.sut_ports)
+ test_run.ctx.dpdk.setup(test_run.ctx.topology.sut_ports)
self.result.ports = test_run.ctx.topology.sut_ports + test_run.ctx.topology.tg_ports
self.result.sut_info = test_run.ctx.sut_node.node_info
- self.result.dpdk_build_info = test_run.ctx.sut_node.get_dpdk_build_info()
+ self.result.dpdk_build_info = test_run.ctx.dpdk.build.get_dpdk_build_info()
self.logger.debug(f"Found capabilities to check: {test_run.required_capabilities}")
test_run.supported_capabilities = get_supported_capabilities(
@@ -390,7 +391,7 @@ def description(self) -> str:
def next(self) -> State | None:
"""Next state."""
- self.test_run.ctx.sut_node.tear_down_test_run(self.test_run.ctx.topology.sut_ports)
+ self.test_run.ctx.dpdk.teardown(self.test_run.ctx.topology.sut_ports)
self.result.update_teardown(Result.PASS)
return None
@@ -500,7 +501,7 @@ def description(self) -> str:
def next(self) -> State | None:
"""Next state."""
self.test_suite.tear_down_suite()
- self.test_run.ctx.sut_node.kill_cleanup_dpdk_apps()
+ self.test_run.ctx.dpdk.kill_cleanup_dpdk_apps()
self.result.update_teardown(Result.PASS)
return TestRunExecution(self.test_run, self.test_run.result)
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index ae90997061..58da26adf0 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -34,6 +34,7 @@
from framework.testbed_model.capability import TestProtocol
from framework.testbed_model.topology import Topology
from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
+ CapturingTrafficGenerator,
PacketFilteringConfig,
)
@@ -246,8 +247,12 @@ def send_packets_and_capture(
Returns:
A list of received packets.
"""
+ assert isinstance(
+ self._ctx.tg, CapturingTrafficGenerator
+ ), "Cannot capture with a non-capturing traffic generator"
+ # TODO: implement @requires for types of traffic generator
packets = self._adjust_addresses(packets)
- return self._ctx.tg_node.send_packets_and_capture(
+ return self._ctx.tg.send_packets_and_capture(
packets,
self._ctx.topology.tg_port_egress,
self._ctx.topology.tg_port_ingress,
@@ -265,7 +270,7 @@ def send_packets(
packets: Packets to send.
"""
packets = self._adjust_addresses(packets)
- self._ctx.tg_node.send_packets(packets, self._ctx.topology.tg_port_egress)
+ self._ctx.tg.send_packets(packets, self._ctx.topology.tg_port_egress)
def get_expected_packets(
self,
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index a1d6d9dd32..ea0e647a47 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -63,8 +63,8 @@ def test_scatter_mbuf_2048(self):
TestPmdShellDecorator,
TestPmdShellMethod,
)
+from framework.testbed_model.node import Node
-from .sut_node import SutNode
from .topology import Topology, TopologyType
if TYPE_CHECKING:
@@ -90,7 +90,7 @@ class Capability(ABC):
#: A set storing the capabilities whose support should be checked.
capabilities_to_check: ClassVar[set[Self]] = set()
- def register_to_check(self) -> Callable[[SutNode, "Topology"], set[Self]]:
+ def register_to_check(self) -> Callable[[Node, "Topology"], set[Self]]:
"""Register the capability to be checked for support.
Returns:
@@ -118,27 +118,27 @@ def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None
"""An optional method that modifies the required capabilities."""
@classmethod
- def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
+ def _get_and_reset(cls, node: Node, topology: "Topology") -> set[Self]:
"""The callback method to be called after all capabilities have been registered.
Not only does this method check the support of capabilities,
but it also reset the internal set of registered capabilities
so that the "register, then get support" workflow works in subsequent test runs.
"""
- supported_capabilities = cls.get_supported_capabilities(sut_node, topology)
+ supported_capabilities = cls.get_supported_capabilities(node, topology)
cls.capabilities_to_check = set()
return supported_capabilities
@classmethod
@abstractmethod
- def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
+ def get_supported_capabilities(cls, node: Node, topology: "Topology") -> set[Self]:
"""Get the support status of each registered capability.
Each subclass must implement this method and return the subset of supported capabilities
of :attr:`capabilities_to_check`.
Args:
- sut_node: The SUT node of the current test run.
+ node: The node to check capabilities against.
topology: The topology of the current test run.
Returns:
@@ -197,7 +197,7 @@ def get_unique(cls, nic_capability: NicCapability) -> Self:
@classmethod
def get_supported_capabilities(
- cls, sut_node: SutNode, topology: "Topology"
+ cls, node: Node, topology: "Topology"
) -> set["DecoratedNicCapability"]:
"""Overrides :meth:`~Capability.get_supported_capabilities`.
@@ -207,7 +207,7 @@ def get_supported_capabilities(
before executing its `capability_fn` so that each capability is retrieved only once.
"""
supported_conditional_capabilities: set["DecoratedNicCapability"] = set()
- logger = get_dts_logger(f"{sut_node.name}.{cls.__name__}")
+ logger = get_dts_logger(f"{node.name}.{cls.__name__}")
if topology.type is topology.type.no_link:
logger.debug(
"No links available in the current topology, not getting NIC capabilities."
@@ -332,7 +332,7 @@ def get_unique(cls, topology_type: TopologyType) -> Self:
@classmethod
def get_supported_capabilities(
- cls, sut_node: SutNode, topology: "Topology"
+ cls, node: Node, topology: "Topology"
) -> set["TopologyCapability"]:
"""Overrides :meth:`~Capability.get_supported_capabilities`."""
supported_capabilities = set()
@@ -483,14 +483,14 @@ def add_required_capability(
def get_supported_capabilities(
- sut_node: SutNode,
+ node: Node,
topology_config: Topology,
capabilities_to_check: set[Capability],
) -> set[Capability]:
"""Probe the environment for `capabilities_to_check` and return the supported ones.
Args:
- sut_node: The SUT node to check for capabilities.
+ node: The node to check capabilities against.
topology_config: The topology config to check for capabilities.
capabilities_to_check: The capabilities to check.
@@ -502,7 +502,7 @@ def get_supported_capabilities(
callbacks.add(capability_to_check.register_to_check())
supported_capabilities = set()
for callback in callbacks:
- supported_capabilities.update(callback(sut_node, topology_config))
+ supported_capabilities.update(callback(node, topology_config))
return supported_capabilities
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 1a4c825ed2..be1b4ac2ac 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -13,25 +13,22 @@
The :func:`~Node.skip_setup` decorator can be used without subclassing.
"""
-from abc import ABC
-from collections.abc import Iterable
from functools import cached_property
from framework.config.node import (
OS,
NodeConfiguration,
)
-from framework.config.test_run import TestRunConfiguration
from framework.exception import ConfigurationError
from framework.logger import DTSLogger, get_dts_logger
-from .cpu import Architecture, LogicalCore, LogicalCoreCount, LogicalCoreList, lcore_filter
+from .cpu import Architecture, LogicalCore
from .linux_session import LinuxSession
-from .os_session import OSSession
+from .os_session import OSSession, OSSessionInfo
from .port import Port
-class Node(ABC):
+class Node:
"""The base class for node management.
It shouldn't be instantiated, but rather subclassed.
@@ -57,7 +54,8 @@ class Node(ABC):
ports: list[Port]
_logger: DTSLogger
_other_sessions: list[OSSession]
- _test_run_config: TestRunConfiguration
+ _node_info: OSSessionInfo | None
+ _compiler_version: str | None
def __init__(self, node_config: NodeConfiguration):
"""Connect to the node and gather info during initialization.
@@ -80,35 +78,13 @@ def __init__(self, node_config: NodeConfiguration):
self._get_remote_cpus()
self._other_sessions = []
self.ports = [Port(self, port_config) for port_config in self.config.ports]
+ self._logger.info(f"Created node: {self.name}")
@cached_property
def ports_by_name(self) -> dict[str, Port]:
"""Ports mapped by the name assigned at configuration."""
return {port.name: port for port in self.ports}
- def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
- """Test run setup steps.
-
- Configure hugepages on all DTS node types. Additional steps can be added by
- extending the method in subclasses with the use of super().
-
- Args:
- test_run_config: A test run configuration according to which
- the setup steps will be taken.
- ports: The ports to set up for the test run.
- """
- self._setup_hugepages()
-
- def tear_down_test_run(self, ports: Iterable[Port]) -> None:
- """Test run teardown steps.
-
- There are currently no common execution teardown steps common to all DTS node types.
- Additional steps can be added by extending the method in subclasses with the use of super().
-
- Args:
- ports: The ports to tear down for the test run.
- """
-
def create_session(self, name: str) -> OSSession:
"""Create and return a new OS-aware remote session.
@@ -134,40 +110,33 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
- def filter_lcores(
- self,
- filter_specifier: LogicalCoreCount | LogicalCoreList,
- ascending: bool = True,
- ) -> list[LogicalCore]:
- """Filter the node's logical cores that DTS can use.
-
- Logical cores that DTS can use are the ones that are present on the node, but filtered
- according to the test run configuration. The `filter_specifier` will filter cores from
- those logical cores.
-
- Args:
- filter_specifier: Two different filters can be used, one that specifies the number
- of logical cores per core, cores per socket and the number of sockets,
- and another one that specifies a logical core list.
- ascending: If :data:`True`, use cores with the lowest numerical id first and continue
- in ascending order. If :data:`False`, start with the highest id and continue
- in descending order. This ordering affects which sockets to consider first as well.
-
- Returns:
- The filtered logical cores.
- """
- self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
- return lcore_filter(
- self.lcores,
- filter_specifier,
- ascending,
- ).filter()
-
def _get_remote_cpus(self) -> None:
"""Scan CPUs in the remote OS and store a list of LogicalCores."""
self._logger.info("Getting CPU information.")
self.lcores = self.main_session.get_remote_cpus()
+ @cached_property
+ def node_info(self) -> OSSessionInfo:
+ """Additional node information."""
+ return self.main_session.get_node_info()
+
+ @property
+ def compiler_version(self) -> str | None:
+ """The node's compiler version."""
+ if self._compiler_version is None:
+ self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
+
+ return self._compiler_version
+
+ @compiler_version.setter
+ def compiler_version(self, value: str) -> None:
+ """Set the `compiler_version` used on the SUT node.
+
+ Args:
+ value: The node's compiler version.
+ """
+ self._compiler_version = value
+
def _setup_hugepages(self) -> None:
"""Setup hugepages on the node.
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
deleted file mode 100644
index 290a3fbd74..0000000000
--- a/dts/framework/testbed_model/tg_node.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 University of New Hampshire
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-"""Traffic generator node.
-
-A traffic generator (TG) generates traffic that's sent towards the SUT node.
-A TG node is where the TG runs.
-"""
-
-from collections.abc import Iterable
-
-from scapy.packet import Packet
-
-from framework.config.node import TGNodeConfiguration
-from framework.config.test_run import TestRunConfiguration
-from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
- PacketFilteringConfig,
-)
-
-from .node import Node
-from .port import Port
-from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
-
-
-class TGNode(Node):
- """The traffic generator node.
-
- The TG node extends :class:`Node` with TG specific features:
-
- * Traffic generator initialization,
- * The sending of traffic and receiving packets,
- * The sending of traffic without receiving packets.
-
- Not all traffic generators are capable of capturing traffic, which is why there
- must be a way to send traffic without that.
-
- Attributes:
- config: The traffic generator node configuration.
- traffic_generator: The traffic generator running on the node.
- """
-
- config: TGNodeConfiguration
- traffic_generator: CapturingTrafficGenerator
-
- def __init__(self, node_config: TGNodeConfiguration):
- """Extend the constructor with TG node specifics.
-
- Initialize the traffic generator on the TG node.
-
- Args:
- node_config: The TG node's test run configuration.
- """
- super().__init__(node_config)
- self._logger.info(f"Created node: {self.name}")
-
- def set_up_test_run(self, test_run_config: TestRunConfiguration, ports: Iterable[Port]) -> None:
- """Extend the test run setup with the setup of the traffic generator.
-
- Args:
- test_run_config: A test run configuration according to which
- the setup steps will be taken.
- ports: The ports to set up for the test run.
- """
- super().set_up_test_run(test_run_config, ports)
- self.main_session.bring_up_link(ports)
- self.traffic_generator = create_traffic_generator(self, self.config.traffic_generator)
-
- def tear_down_test_run(self, ports: Iterable[Port]) -> None:
- """Extend the test run teardown with the teardown of the traffic generator.
-
- Args:
- ports: The ports to tear down for the test run.
- """
- super().tear_down_test_run(ports)
- self.traffic_generator.close()
-
- def send_packets_and_capture(
- self,
- packets: list[Packet],
- send_port: Port,
- receive_port: Port,
- filter_config: PacketFilteringConfig = PacketFilteringConfig(),
- duration: float = 1,
- ) -> list[Packet]:
- """Send `packets`, return received traffic.
-
- Send `packets` on `send_port` and then return all traffic captured
- on `receive_port` for the given duration. Also record the captured traffic
- in a pcap file.
-
- Args:
- packets: The packets to send.
- send_port: The egress port on the TG node.
- receive_port: The ingress port in the TG node.
- filter_config: The filter to use when capturing packets.
- duration: Capture traffic for this amount of time after sending `packet`.
-
- Returns:
- A list of received packets. May be empty if no packets are captured.
- """
- return self.traffic_generator.send_packets_and_capture(
- packets,
- send_port,
- receive_port,
- filter_config,
- duration,
- )
-
- def send_packets(self, packets: list[Packet], port: Port):
- """Send packets without capturing resulting received packets.
-
- Args:
- packets: Packets to send.
- port: Port to send the packets on.
- """
- self.traffic_generator.send_packets(packets, port)
-
- def close(self) -> None:
- """Free all resources used by the node.
-
- This extends the superclass method with TG cleanup.
- """
- super().close()
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 922875f401..2a259a6e6c 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -14,7 +14,7 @@
and a capturing traffic generator is required.
"""
-from framework.config.node import ScapyTrafficGeneratorConfig, TrafficGeneratorConfig
+from framework.config.test_run import ScapyTrafficGeneratorConfig, TrafficGeneratorConfig
from framework.exception import ConfigurationError
from framework.testbed_model.node import Node
@@ -23,13 +23,13 @@
def create_traffic_generator(
- tg_node: Node, traffic_generator_config: TrafficGeneratorConfig
+ traffic_generator_config: TrafficGeneratorConfig, node: Node
) -> CapturingTrafficGenerator:
"""The factory function for creating traffic generator objects from the test run configuration.
Args:
- tg_node: The traffic generator node where the created traffic generator will be running.
traffic_generator_config: The traffic generator config.
+ node: The node where the created traffic generator will be running.
Returns:
A traffic generator capable of capturing received packets.
@@ -39,6 +39,6 @@ def create_traffic_generator(
"""
match traffic_generator_config:
case ScapyTrafficGeneratorConfig():
- return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
+ return ScapyTrafficGenerator(node, traffic_generator_config, privileged=True)
case _:
raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index c9c7dac54a..520561b2eb 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -14,13 +14,15 @@
import re
import time
+from collections.abc import Iterable
from typing import ClassVar
from scapy.compat import base64_bytes
from scapy.layers.l2 import Ether
from scapy.packet import Packet
-from framework.config.node import OS, ScapyTrafficGeneratorConfig
+from framework.config.node import OS
+from framework.config.test_run import ScapyTrafficGeneratorConfig
from framework.remote_session.python_shell import PythonShell
from framework.testbed_model.node import Node
from framework.testbed_model.port import Port
@@ -83,6 +85,14 @@ def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig, **kwargs)
super().__init__(node=tg_node, config=config, tg_node=tg_node, **kwargs)
self.start_application()
+ def setup(self, ports: Iterable[Port]):
+ """Extends :meth:`.traffic_generator.TrafficGenerator.setup`.
+
+ Brings up the port links.
+ """
+ super().setup(ports)
+ self._tg_node.main_session.bring_up_link(ports)
+
def start_application(self) -> None:
"""Extends :meth:`framework.remote_session.interactive_shell.start_application`.
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 9b4d5dc80a..4469273e36 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -9,10 +9,11 @@
"""
from abc import ABC, abstractmethod
+from typing import Iterable
from scapy.packet import Packet
-from framework.config.node import TrafficGeneratorConfig
+from framework.config.test_run import TrafficGeneratorConfig
from framework.logger import DTSLogger, get_dts_logger
from framework.testbed_model.node import Node
from framework.testbed_model.port import Port
@@ -49,6 +50,12 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig, **kwargs):
self._logger = get_dts_logger(f"{self._tg_node.name} {self._config.type}")
super().__init__(**kwargs)
+ def setup(self, ports: Iterable[Port]):
+ """Setup the traffic generator."""
+
+ def teardown(self):
+ """Teardown the traffic generator."""
+
def send_packet(self, packet: Packet, port: Port) -> None:
"""Send `packet` and block until it is fully sent.
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 8a5799c684..a8ea07595f 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -47,7 +47,7 @@ def set_up_suite(self) -> None:
Set the build directory path and a list of NICs in the SUT node.
"""
self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
- self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
+ self.dpdk_build_dir_path = self._ctx.dpdk.build.remote_dpdk_build_dir
self.nics_in_node = self.sut_node.config.ports
@func_test
@@ -79,7 +79,7 @@ def test_driver_tests(self) -> None:
Run the ``driver-tests`` unit test suite through meson.
"""
vdev_args = ""
- for dev in self.sut_node.virtual_devices:
+ for dev in self._ctx.dpdk.get_virtual_devices():
vdev_args += f"--vdev {dev} "
vdev_args = vdev_args[:-1]
driver_tests_command = f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests"
@@ -125,7 +125,7 @@ def test_device_bound_to_driver(self) -> None:
List all devices with the ``dpdk-devbind.py`` script and verify that
the configured devices are bound to the proper driver.
"""
- path_to_devbind = self.sut_node.path_to_devbind_script
+ path_to_devbind = self._ctx.dpdk.devbind_script_path
all_nics_in_dpdk_devbind = self.sut_node.main_session.send_command(
f"{path_to_devbind} --status | awk '/{REGEX_FOR_PCI_ADDRESS}/'",
diff --git a/dts/tests/TestSuite_softnic.py b/dts/tests/TestSuite_softnic.py
index 370fd6b419..eefd6d3273 100644
--- a/dts/tests/TestSuite_softnic.py
+++ b/dts/tests/TestSuite_softnic.py
@@ -46,7 +46,7 @@ def prepare_softnic_files(self) -> PurePath:
spec_file = Path("rx_tx.spec")
rx_tx_1_file = Path("rx_tx_1.io")
rx_tx_2_file = Path("rx_tx_2.io")
- path_sut = self.sut_node.remote_dpdk_build_dir
+ path_sut = self._ctx.dpdk.build.remote_dpdk_build_dir
cli_file_sut = self.sut_node.main_session.join_remote_path(path_sut, cli_file)
spec_file_sut = self.sut_node.main_session.join_remote_path(path_sut, spec_file)
rx_tx_1_file_sut = self.sut_node.main_session.join_remote_path(path_sut, rx_tx_1_file)
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v2 0/7] dts: revamp framework
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
` (6 preceding siblings ...)
2025-02-12 16:46 ` [PATCH v2 7/7] dts: remove node distinction Luca Vizzarro
@ 2025-02-12 16:47 ` Luca Vizzarro
7 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:47 UTC (permalink / raw)
To: dev; +Cc: Nicholas Pratte, Dean Marx, Patrick Robb, Paul Szczepanek
Forgot to mention that I've also removed the distinction between SUT and
TG nodes. These are now all the same nodes. Runtime configuration for
DPDK and TG now lies within the test run.
* Re: [RFC PATCH 1/7] dts: add port topology configuration
2025-02-07 18:25 ` Nicholas Pratte
@ 2025-02-12 16:47 ` Luca Vizzarro
0 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:47 UTC (permalink / raw)
To: Nicholas Pratte; +Cc: dev, Patrick Robb, Paul Szczepanek
Great points! Thank you, will apply.
* Re: [RFC PATCH 1/7] dts: add port topology configuration
2025-02-11 18:00 ` Dean Marx
@ 2025-02-12 16:47 ` Luca Vizzarro
0 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:47 UTC (permalink / raw)
To: Dean Marx; +Cc: dev, Patrick Robb, Paul Szczepanek
Thank you Dean! Applied.
* Re: [RFC PATCH 0/7] dts: revamp framework
2025-02-04 21:08 ` [RFC PATCH 0/7] dts: revamp framework Dean Marx
@ 2025-02-12 16:52 ` Luca Vizzarro
0 siblings, 0 replies; 33+ messages in thread
From: Luca Vizzarro @ 2025-02-12 16:52 UTC (permalink / raw)
To: Dean Marx; +Cc: dev, Patrick Robb, Paul Szczepanek
On 04/02/2025 21:08, Dean Marx wrote:
> Hi Luca,
>
> I saw in the meeting minutes that the main purpose of this series is
> to implement the separation of concern principle better in DTS. Just
> wondering, what parts of the current framework did you think needed to
> be separated and why? I'm taking an OOP class this semester and we
> just started talking in depth about separation of concerns, so if you
> wouldn't mind I'd be interested in your thought process. Working on a
> review for the series currently as well, should be done relatively
> soon.
>
> Thanks,
> Dean
Hi Dean,
For starters, all the major logic was held within the `runner.py` file,
which is not ideal. When some logic touches the same object too often,
it's often an indicator that it actually belongs to that class instead.
For example, the test suites and cases are specified within the
configuration. Anything configuration-related should be kept within the
configuration: the runner should assume that the configuration is final
and ready to be fed to the classes. For this reason, I've moved the
test suite and case filtering logic there. Test runs are a whole
concept of their own, with their own logic, so it is better to give
them a dedicated class where they can control themselves, as opposed to
having runner.py deal with every part of the framework in the same
place. If you were to draw a graph of all the classes linked to
runner.py, I am sure it would look like spaghetti. We don't want that.
We want a hierarchical and structured approach. Separation of concerns
aids this by keeping things where they belong, minimising imports. This
will also help with circular import issues.
Hope this provides some insight!
Luca
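To make the idea concrete, here is a minimal sketch of what "filtering lives in the configuration" can look like. The names below are hypothetical illustrations, not the actual DTS classes:

```python
from dataclasses import dataclass, field


# Hypothetical sketch: the filtering logic lives on the configuration
# object itself, so the runner consumes a final list instead of
# re-deriving it from raw settings.
@dataclass
class TestRunConfiguration:
    test_suites: list[str] = field(default_factory=list)
    skipped: set[str] = field(default_factory=set)

    def filter_tests(self) -> list[str]:
        # The configuration decides what runs; callers treat the result as final.
        return [name for name in self.test_suites if name not in self.skipped]


config = TestRunConfiguration(["hello_world", "smoke_tests", "softnic"], {"softnic"})
selected = config.filter_tests()  # the runner only iterates over this
```

The runner then has no filtering branch of its own; it simply walks `selected`.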
* Re: [RFC PATCH 5/7] dts: add runtime status
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
2025-02-11 19:45 ` Dean Marx
@ 2025-02-12 18:50 ` Nicholas Pratte
1 sibling, 0 replies; 33+ messages in thread
From: Nicholas Pratte @ 2025-02-12 18:50 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Patrick Robb, Paul Szczepanek
Reviewed-by: Nicholas Pratte <npratte@iol.unh.edu>
On Mon, Feb 3, 2025 at 10:17 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Add a new module which defines the global runtime status of DTS and the
> distinct execution stages and steps.
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
> ---
> doc/api/dts/framework.status.rst | 8 ++++
> doc/api/dts/index.rst | 1 +
> dts/framework/logger.py | 36 ++++--------------
> dts/framework/status.py | 64 ++++++++++++++++++++++++++++++++
> 4 files changed, 81 insertions(+), 28 deletions(-)
> create mode 100644 doc/api/dts/framework.status.rst
> create mode 100644 dts/framework/status.py
>
> diff --git a/doc/api/dts/framework.status.rst b/doc/api/dts/framework.status.rst
> new file mode 100644
> index 0000000000..07277b5301
> --- /dev/null
> +++ b/doc/api/dts/framework.status.rst
> @@ -0,0 +1,8 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +
> +status - DTS status definitions
> +===========================================================
> +
> +.. automodule:: framework.status
> + :members:
> + :show-inheritance:
> diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
> index 534512dc17..cde603576c 100644
> --- a/doc/api/dts/index.rst
> +++ b/doc/api/dts/index.rst
> @@ -29,6 +29,7 @@ Modules
> framework.test_suite
> framework.test_result
> framework.settings
> + framework.status
> framework.logger
> framework.parser
> framework.utils
> diff --git a/dts/framework/logger.py b/dts/framework/logger.py
> index d2b8e37da4..7b1c8e6637 100644
> --- a/dts/framework/logger.py
> +++ b/dts/framework/logger.py
> @@ -13,37 +13,17 @@
> """
>
> import logging
> -from enum import auto
> from logging import FileHandler, StreamHandler
> from pathlib import Path
> from typing import ClassVar
>
> -from .utils import StrEnum
> +from framework.status import PRE_RUN, State
>
> date_fmt = "%Y/%m/%d %H:%M:%S"
> stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s"
> dts_root_logger_name = "dts"
>
>
> -class DtsStage(StrEnum):
> - """The DTS execution stage."""
> -
> - #:
> - pre_run = auto()
> - #:
> - test_run_setup = auto()
> - #:
> - test_suite_setup = auto()
> - #:
> - test_suite = auto()
> - #:
> - test_suite_teardown = auto()
> - #:
> - test_run_teardown = auto()
> - #:
> - post_run = auto()
> -
> -
> class DTSLogger(logging.Logger):
> """The DTS logger class.
>
> @@ -55,7 +35,7 @@ class DTSLogger(logging.Logger):
> a new stage switch occurs. This is useful mainly for logging per test suite.
> """
>
> - _stage: ClassVar[DtsStage] = DtsStage.pre_run
> + _stage: ClassVar[State] = PRE_RUN
> _extra_file_handlers: list[FileHandler] = []
>
> def __init__(self, *args, **kwargs):
> @@ -75,7 +55,7 @@ def makeRecord(self, *args, **kwargs) -> logging.LogRecord:
> record: The generated record with the stage information.
> """
> record = super().makeRecord(*args, **kwargs)
> - record.stage = DTSLogger._stage
> + record.stage = str(DTSLogger._stage)
> return record
>
> def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
> @@ -110,7 +90,7 @@ def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
>
> self._add_file_handlers(Path(output_dir, self.name))
>
> - def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
> + def set_stage(self, state: State, log_file_path: Path | None = None) -> None:
> """Set the DTS execution stage and optionally log to files.
>
> Set the DTS execution stage of the DTSLog class and optionally add
> @@ -120,15 +100,15 @@ def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None:
> the other one is a machine-readable log file with extra debug information.
>
> Args:
> - stage: The DTS stage to set.
> + state: The DTS execution state to set.
> log_file_path: An optional path of the log file to use. This should be a full path
> (either relative or absolute) without suffix (which will be appended).
> """
> self._remove_extra_file_handlers()
>
> - if DTSLogger._stage != stage:
> - self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{stage}'.")
> - DTSLogger._stage = stage
> + if DTSLogger._stage != state:
> + self.info(f"Moving from stage '{DTSLogger._stage}' " f"to stage '{state}'.")
> + DTSLogger._stage = state
>
> if log_file_path:
> self._extra_file_handlers.extend(self._add_file_handlers(log_file_path))
> diff --git a/dts/framework/status.py b/dts/framework/status.py
> new file mode 100644
> index 0000000000..4a59aa50e6
> --- /dev/null
> +++ b/dts/framework/status.py
> @@ -0,0 +1,64 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2025 Arm Limited
> +
> +"""Running status of DTS.
> +
> +This module contains the definitions that represent the different states of execution within DTS.
> +"""
> +
> +from enum import auto
> +from typing import NamedTuple
> +
> +from .utils import StrEnum
> +
> +
> +class Stage(StrEnum):
> + """Execution stage."""
> +
> + #:
> + PRE_RUN = auto()
> + #:
> + TEST_RUN = auto()
> + #:
> + TEST_SUITE = auto()
> + #:
> + TEST_CASE = auto()
> + #:
> + POST_RUN = auto()
> +
> +
> +class InternalState(StrEnum):
> + """Internal state of the current execution stage."""
> +
> + #:
> + BEGIN = auto()
> + #:
> + SETUP = auto()
> + #:
> + RUN = auto()
> + #:
> + TEARDOWN = auto()
> + #:
> + END = auto()
> +
> +
> +class State(NamedTuple):
> + """Representation of the DTS execution state."""
> +
> + #:
> + stage: Stage
> + #:
> + state: InternalState
> +
> + def __str__(self) -> str:
> + """A formatted name."""
> + name = self.stage.value.lower()
> + if self.state is not InternalState.RUN:
> + return f"{name}_{self.state.value.lower()}"
> + return name
> +
> +
> +#: A ready-made pre-run DTS state.
> +PRE_RUN = State(Stage.PRE_RUN, InternalState.RUN)
> +#: A ready-made post-run DTS state.
> +POST_RUN = State(Stage.POST_RUN, InternalState.RUN)
> --
> 2.43.0
>
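The `State` tuple's naming scheme (stage name alone for `RUN`, suffixed otherwise) can be reproduced standalone. The sketch below is a hypothetical condensed version of the new `dts/framework/status.py`: the real `StrEnum` lives in `framework.utils`, so a minimal stand-in is used here that assumes `auto()` resolves to the member name.

```python
# Condensed sketch of dts/framework/status.py above; StrEnum is a
# stand-in for framework.utils.StrEnum (auto() yields the member name).
from enum import Enum, auto
from typing import NamedTuple


class StrEnum(Enum):
    def _generate_next_value_(name, start, count, last_values):  # noqa: N805
        return name


class Stage(StrEnum):
    PRE_RUN = auto()
    TEST_SUITE = auto()


class InternalState(StrEnum):
    SETUP = auto()
    RUN = auto()


class State(NamedTuple):
    stage: Stage
    state: InternalState

    def __str__(self) -> str:
        # RUN is the implied default state, so it is omitted from the name.
        name = self.stage.value.lower()
        if self.state is not InternalState.RUN:
            return f"{name}_{self.state.value.lower()}"
        return name


assert str(State(Stage.TEST_SUITE, InternalState.SETUP)) == "test_suite_setup"
assert str(State(Stage.PRE_RUN, InternalState.RUN)) == "pre_run"
```

Collapsing `RUN` to the bare stage name is what lets the ready-made `PRE_RUN` and `POST_RUN` constants render simply as "pre_run" and "post_run".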
* Re: [PATCH v2 5/7] dts: add global runtime context
2025-02-12 16:45 ` [PATCH v2 5/7] dts: add global runtime context Luca Vizzarro
@ 2025-02-12 19:45 ` Nicholas Pratte
0 siblings, 0 replies; 33+ messages in thread
From: Nicholas Pratte @ 2025-02-12 19:45 UTC (permalink / raw)
To: Luca Vizzarro; +Cc: dev, Dean Marx, Paul Szczepanek, Patrick Robb
These changes definitely make the code much easier to read overall.
Reviewed-by: Nicholas Pratte <npratte@iol.unh.edu>
On Wed, Feb 12, 2025 at 11:46 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:
>
> Add a new context module which holds the runtime context. The new
> context will describe the current scenario and aid underlying classes
> used by the test suites to automatically infer their parameters. This
> further simplifies the test writing process as the test writer won't need
> to be concerned about the nodes, and can directly use the tools
> implementing context awareness, e.g. TestPmdShell, as needed.
>
> Bugzilla ID: 1461
>
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
> Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
> Reviewed-by: Dean Marx <dmarx@iol.unh.edu>
> ---
> doc/api/dts/framework.context.rst | 8 ++
> doc/api/dts/index.rst | 1 +
> dts/framework/context.py | 117 ++++++++++++++++++
> dts/framework/remote_session/dpdk_shell.py | 53 +++-----
> .../single_active_interactive_shell.py | 14 +--
> dts/framework/remote_session/testpmd_shell.py | 27 ++--
> dts/framework/test_suite.py | 73 +++++------
> dts/tests/TestSuite_blocklist.py | 6 +-
> dts/tests/TestSuite_checksum_offload.py | 14 +--
> dts/tests/TestSuite_dual_vlan.py | 6 +-
> dts/tests/TestSuite_dynamic_config.py | 8 +-
> dts/tests/TestSuite_dynamic_queue_conf.py | 1 -
> dts/tests/TestSuite_hello_world.py | 2 +-
> dts/tests/TestSuite_l2fwd.py | 9 +-
> dts/tests/TestSuite_mac_filter.py | 10 +-
> dts/tests/TestSuite_mtu.py | 17 +--
> dts/tests/TestSuite_pmd_buffer_scatter.py | 9 +-
> ...stSuite_port_restart_config_persistency.py | 8 +-
> dts/tests/TestSuite_promisc_support.py | 8 +-
> dts/tests/TestSuite_smoke_tests.py | 3 +-
> dts/tests/TestSuite_softnic.py | 4 +-
> dts/tests/TestSuite_uni_pkt.py | 14 +--
> dts/tests/TestSuite_vlan.py | 8 +-
> 23 files changed, 247 insertions(+), 173 deletions(-)
> create mode 100644 doc/api/dts/framework.context.rst
> create mode 100644 dts/framework/context.py
>
> diff --git a/doc/api/dts/framework.context.rst b/doc/api/dts/framework.context.rst
> new file mode 100644
> index 0000000000..a8c8b5022e
> --- /dev/null
> +++ b/doc/api/dts/framework.context.rst
> @@ -0,0 +1,8 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +
> +context - DTS execution context
> +===========================================================
> +
> +.. automodule:: framework.context
> + :members:
> + :show-inheritance:
> diff --git a/doc/api/dts/index.rst b/doc/api/dts/index.rst
> index 534512dc17..90092014d2 100644
> --- a/doc/api/dts/index.rst
> +++ b/doc/api/dts/index.rst
> @@ -29,6 +29,7 @@ Modules
> framework.test_suite
> framework.test_result
> framework.settings
> + framework.context
> framework.logger
> framework.parser
> framework.utils
> diff --git a/dts/framework/context.py b/dts/framework/context.py
> new file mode 100644
> index 0000000000..03eaf63b88
> --- /dev/null
> +++ b/dts/framework/context.py
> @@ -0,0 +1,117 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2025 Arm Limited
> +
> +"""Runtime contexts."""
> +
> +import functools
> +from dataclasses import MISSING, dataclass, field, fields
> +from typing import TYPE_CHECKING, ParamSpec
> +
> +from framework.exception import InternalError
> +from framework.settings import SETTINGS
> +from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
> +from framework.testbed_model.topology import Topology
> +
> +if TYPE_CHECKING:
> + from framework.testbed_model.sut_node import SutNode
> + from framework.testbed_model.tg_node import TGNode
> +
> +P = ParamSpec("P")
> +
> +
> +@dataclass
> +class LocalContext:
> + """Updatable context local to test suites and cases.
> +
> + Attributes:
> + lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
> + use. The default will select one lcore for each of two cores on one socket, in ascending
> + order of core ids.
> + ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
> + sort in descending order.
> + append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
> + timeout: The timeout used for the SSH channel that is dedicated to this interactive
> + shell. This timeout is for collecting output, so if reading from the buffer
> + and no output is gathered within the timeout, an exception is thrown.
> + """
> +
> + lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = field(
> + default_factory=LogicalCoreCount
> + )
> + ascending_cores: bool = True
> + append_prefix_timestamp: bool = True
> + timeout: float = SETTINGS.timeout
> +
> + def reset(self) -> None:
> + """Reset the local context to the default values."""
> + for _field in fields(LocalContext):
> + default = (
> + _field.default_factory()
> + if _field.default_factory is not MISSING
> + else _field.default
> + )
> +
> + assert (
> + default is not MISSING
> + ), f"{LocalContext.__name__} must have defaults on all fields!"
> +
> + setattr(self, _field.name, default)
> +
> +
> +@dataclass(frozen=True)
> +class Context:
> + """Runtime context."""
> +
> + sut_node: "SutNode"
> + tg_node: "TGNode"
> + topology: Topology
> + local: LocalContext = field(default_factory=LocalContext)
> +
> +
> +__current_ctx: Context | None = None
> +
> +
> +def get_ctx() -> Context:
> + """Retrieve the current runtime context.
> +
> + Raises:
> + InternalError: If there is no context.
> + """
> + if __current_ctx:
> + return __current_ctx
> +
> + raise InternalError("Attempted to retrieve context that has not been initialized yet.")
> +
> +
> +def init_ctx(ctx: Context) -> None:
> + """Initialize context."""
> + global __current_ctx
> + __current_ctx = ctx
> +
> +
> +def filter_cores(
> + specifier: LogicalCoreCount | LogicalCoreList, ascending_cores: bool | None = None
> +):
> + """Decorates functions that require a temporary update to the lcore specifier."""
> +
> + def decorator(func):
> + @functools.wraps(func)
> + def wrapper(*args: P.args, **kwargs: P.kwargs):
> + local_ctx = get_ctx().local
> +
> + old_specifier = local_ctx.lcore_filter_specifier
> + local_ctx.lcore_filter_specifier = specifier
> + if ascending_cores is not None:
> + old_ascending_cores = local_ctx.ascending_cores
> + local_ctx.ascending_cores = ascending_cores
> +
> + try:
> + return func(*args, **kwargs)
> + finally:
> + local_ctx.lcore_filter_specifier = old_specifier
> + if ascending_cores is not None:
> + local_ctx.ascending_cores = old_ascending_cores
> +
> + return wrapper
> +
> + return decorator
> diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
> index c11d9ab81c..b55deb7fa0 100644
> --- a/dts/framework/remote_session/dpdk_shell.py
> +++ b/dts/framework/remote_session/dpdk_shell.py
> @@ -9,54 +9,45 @@
> from abc import ABC
> from pathlib import PurePath
>
> +from framework.context import get_ctx
> from framework.params.eal import EalParams
> from framework.remote_session.single_active_interactive_shell import (
> SingleActiveInteractiveShell,
> )
> -from framework.settings import SETTINGS
> -from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
> +from framework.testbed_model.cpu import LogicalCoreList
> from framework.testbed_model.sut_node import SutNode
>
>
> def compute_eal_params(
> - sut_node: SutNode,
> params: EalParams | None = None,
> - lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
> - ascending_cores: bool = True,
> - append_prefix_timestamp: bool = True,
> ) -> EalParams:
> """Compute EAL parameters based on the node's specifications.
>
> Args:
> - sut_node: The SUT node to compute the values for.
> params: If :data:`None`, a new object is created and returned. Otherwise `params.lcore_list`
> is modified according to `lcore_filter_specifier`. A DPDK file prefix is also added. If
> `params.ports` is :data:`None`, then `sut_node.ports` is assigned to it.
> - lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
> - use. The default will select one lcore for each of two cores on one socket, in ascending
> - order of core ids.
> - ascending_cores: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
> - sort in descending order.
> - append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
> """
> + ctx = get_ctx()
> +
> if params is None:
> params = EalParams()
>
> if params.lcore_list is None:
> params.lcore_list = LogicalCoreList(
> - sut_node.filter_lcores(lcore_filter_specifier, ascending_cores)
> + ctx.sut_node.filter_lcores(ctx.local.lcore_filter_specifier, ctx.local.ascending_cores)
> )
>
> prefix = params.prefix
> - if append_prefix_timestamp:
> - prefix = f"{prefix}_{sut_node.dpdk_timestamp}"
> - prefix = sut_node.main_session.get_dpdk_file_prefix(prefix)
> + if ctx.local.append_prefix_timestamp:
> + prefix = f"{prefix}_{ctx.sut_node.dpdk_timestamp}"
> + prefix = ctx.sut_node.main_session.get_dpdk_file_prefix(prefix)
> if prefix:
> - sut_node.dpdk_prefix_list.append(prefix)
> + ctx.sut_node.dpdk_prefix_list.append(prefix)
> params.prefix = prefix
>
> if params.allowed_ports is None:
> - params.allowed_ports = sut_node.ports
> + params.allowed_ports = ctx.topology.sut_ports
>
> return params
>
> @@ -74,29 +65,15 @@ class DPDKShell(SingleActiveInteractiveShell, ABC):
>
> def __init__(
> self,
> - node: SutNode,
> + name: str | None = None,
> privileged: bool = True,
> - timeout: float = SETTINGS.timeout,
> - lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
> - ascending_cores: bool = True,
> - append_prefix_timestamp: bool = True,
> app_params: EalParams = EalParams(),
> - name: str | None = None,
> ) -> None:
> - """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`.
> -
> - Adds the `lcore_filter_specifier`, `ascending_cores` and `append_prefix_timestamp` arguments
> - which are then used to compute the EAL parameters based on the node's configuration.
> - """
> - app_params = compute_eal_params(
> - node,
> - app_params,
> - lcore_filter_specifier,
> - ascending_cores,
> - append_prefix_timestamp,
> - )
> + """Extends :meth:`~.interactive_shell.InteractiveShell.__init__`."""
> + app_params = compute_eal_params(app_params)
> + node = get_ctx().sut_node
>
> - super().__init__(node, privileged, timeout, app_params, name)
> + super().__init__(node, name, privileged, app_params)
>
> def _update_real_path(self, path: PurePath) -> None:
> """Extends :meth:`~.interactive_shell.InteractiveShell._update_real_path`.
> diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
> index cfe5baec14..2eec2f698a 100644
> --- a/dts/framework/remote_session/single_active_interactive_shell.py
> +++ b/dts/framework/remote_session/single_active_interactive_shell.py
> @@ -27,6 +27,7 @@
> from paramiko import Channel, channel
> from typing_extensions import Self
>
> +from framework.context import get_ctx
> from framework.exception import (
> InteractiveCommandExecutionError,
> InteractiveSSHSessionDeadError,
> @@ -34,7 +35,6 @@
> )
> from framework.logger import DTSLogger, get_dts_logger
> from framework.params import Params
> -from framework.settings import SETTINGS
> from framework.testbed_model.node import Node
> from framework.utils import MultiInheritanceBaseClass
>
> @@ -90,10 +90,9 @@ class SingleActiveInteractiveShell(MultiInheritanceBaseClass, ABC):
> def __init__(
> self,
> node: Node,
> + name: str | None = None,
> privileged: bool = False,
> - timeout: float = SETTINGS.timeout,
> app_params: Params = Params(),
> - name: str | None = None,
> **kwargs,
> ) -> None:
> """Create an SSH channel during initialization.
> @@ -103,13 +102,10 @@ def __init__(
>
> Args:
> node: The node on which to run start the interactive shell.
> - privileged: Enables the shell to run as superuser.
> - timeout: The timeout used for the SSH channel that is dedicated to this interactive
> - shell. This timeout is for collecting output, so if reading from the buffer
> - and no output is gathered within the timeout, an exception is thrown.
> - app_params: The command line parameters to be passed to the application on startup.
> name: Name for the interactive shell to use for logging. This name will be appended to
> the name of the underlying node which it is running on.
> + privileged: Enables the shell to run as superuser.
> + app_params: The command line parameters to be passed to the application on startup.
> **kwargs: Any additional arguments if any.
> """
> self._node = node
> @@ -118,7 +114,7 @@ def __init__(
> self._logger = get_dts_logger(f"{node.name}.{name}")
> self._app_params = app_params
> self._privileged = privileged
> - self._timeout = timeout
> + self._timeout = get_ctx().local.timeout
> # Ensure path is properly formatted for the host
> self._update_real_path(self.path)
> super().__init__(**kwargs)
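The reordered `__init__` above ends with `super().__init__(**kwargs)` because the shell participates in `MultiInheritanceBaseClass`-style cooperative initialization: each class in the MRO consumes its own keyword arguments and forwards the rest up the chain. A minimal sketch of that pattern, with hypothetical mixin names:

```python
# Sketch of cooperative **kwargs initialization: each mixin pops its own
# arguments and forwards the remainder along the MRO.
class Base:
    def __init__(self, **kwargs):
        # By the time we reach the root, everything should be consumed.
        assert not kwargs, f"unconsumed kwargs: {kwargs}"


class Named(Base):
    def __init__(self, name=None, **kwargs):
        self.name = name
        super().__init__(**kwargs)


class Privileged(Base):
    def __init__(self, privileged=False, **kwargs):
        self.privileged = privileged
        super().__init__(**kwargs)


class Shell(Named, Privileged):
    """MRO: Shell -> Named -> Privileged -> Base."""


sh = Shell(name="testpmd", privileged=True)
assert (sh.name, sh.privileged) == ("testpmd", True)
```

Because every `__init__` forwards unknown keywords instead of dropping them, new mixins can be added to the hierarchy without touching the existing signatures.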
> diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
> index 287886ec44..672ecd970f 100644
> --- a/dts/framework/remote_session/testpmd_shell.py
> +++ b/dts/framework/remote_session/testpmd_shell.py
> @@ -32,6 +32,9 @@
> TypeAlias,
> )
>
> +from framework.context import get_ctx
> +from framework.testbed_model.topology import TopologyType
> +
> if TYPE_CHECKING or environ.get("DTS_DOC_BUILD"):
> from enum import Enum as NoAliasEnum
> else:
> @@ -40,13 +43,11 @@
> from typing_extensions import Self, Unpack
>
> from framework.exception import InteractiveCommandExecutionError, InternalError
> -from framework.params.testpmd import SimpleForwardingModes, TestPmdParams
> +from framework.params.testpmd import PortTopology, SimpleForwardingModes, TestPmdParams
> from framework.params.types import TestPmdParamsDict
> from framework.parser import ParserFn, TextParser
> from framework.remote_session.dpdk_shell import DPDKShell
> from framework.settings import SETTINGS
> -from framework.testbed_model.cpu import LogicalCoreCount, LogicalCoreList
> -from framework.testbed_model.sut_node import SutNode
> from framework.utils import REGEX_FOR_MAC_ADDRESS, StrEnum
>
> P = ParamSpec("P")
> @@ -1532,26 +1533,14 @@ class TestPmdShell(DPDKShell):
>
> def __init__(
> self,
> - node: SutNode,
> - privileged: bool = True,
> - timeout: float = SETTINGS.timeout,
> - lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = LogicalCoreCount(),
> - ascending_cores: bool = True,
> - append_prefix_timestamp: bool = True,
> name: str | None = None,
> + privileged: bool = True,
> **app_params: Unpack[TestPmdParamsDict],
> ) -> None:
> """Overrides :meth:`~.dpdk_shell.DPDKShell.__init__`. Changes app_params to kwargs."""
> - super().__init__(
> - node,
> - privileged,
> - timeout,
> - lcore_filter_specifier,
> - ascending_cores,
> - append_prefix_timestamp,
> - TestPmdParams(**app_params),
> - name,
> - )
> + if "port_topology" not in app_params and get_ctx().topology.type is TopologyType.one_link:
> + app_params["port_topology"] = PortTopology.loop
> + super().__init__(name, privileged, TestPmdParams(**app_params))
> self.ports_started = not self._app_params.disable_device_start
> self._ports = None
>
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 3d168d522b..b9b527e40d 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -24,7 +24,7 @@
> from ipaddress import IPv4Interface, IPv6Interface, ip_interface
> from pkgutil import iter_modules
> from types import ModuleType
> -from typing import ClassVar, Protocol, TypeVar, Union, cast
> +from typing import TYPE_CHECKING, ClassVar, Protocol, TypeVar, Union, cast
>
> from scapy.layers.inet import IP
> from scapy.layers.l2 import Ether
> @@ -32,9 +32,6 @@
> from typing_extensions import Self
>
> from framework.testbed_model.capability import TestProtocol
> -from framework.testbed_model.port import Port
> -from framework.testbed_model.sut_node import SutNode
> -from framework.testbed_model.tg_node import TGNode
> from framework.testbed_model.topology import Topology
> from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
> PacketFilteringConfig,
> @@ -44,6 +41,9 @@
> from .logger import DTSLogger, get_dts_logger
> from .utils import get_packet_summaries, to_pascal_case
>
> +if TYPE_CHECKING:
> + from framework.context import Context
> +
>
> class TestSuite(TestProtocol):
> """The base class with building blocks needed by most test cases.
> @@ -69,33 +69,19 @@ class TestSuite(TestProtocol):
>
> The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
> properly choose the IP addresses and other configuration that must be tailored to the testbed.
> -
> - Attributes:
> - sut_node: The SUT node where the test suite is running.
> - tg_node: The TG node where the test suite is running.
> """
>
> - sut_node: SutNode
> - tg_node: TGNode
> #: Whether the test suite is blocking. A failure of a blocking test suite
> #: will block the execution of all subsequent test suites in the current test run.
> is_blocking: ClassVar[bool] = False
> + _ctx: "Context"
> _logger: DTSLogger
> - _sut_port_ingress: Port
> - _sut_port_egress: Port
> _sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
> _sut_ip_address_egress: Union[IPv4Interface, IPv6Interface]
> - _tg_port_ingress: Port
> - _tg_port_egress: Port
> _tg_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
> _tg_ip_address_egress: Union[IPv4Interface, IPv6Interface]
>
> - def __init__(
> - self,
> - sut_node: SutNode,
> - tg_node: TGNode,
> - topology: Topology,
> - ):
> + def __init__(self):
> """Initialize the test suite testbed information and basic configuration.
>
> Find links between ports and set up default IP addresses to be used when
> @@ -106,18 +92,25 @@ def __init__(
> tg_node: The TG node where the test suite will run.
> topology: The topology where the test suite will run.
> """
> - self.sut_node = sut_node
> - self.tg_node = tg_node
> + from framework.context import get_ctx
> +
> + self._ctx = get_ctx()
> self._logger = get_dts_logger(self.__class__.__name__)
> - self._tg_port_egress = topology.tg_port_egress
> - self._sut_port_ingress = topology.sut_port_ingress
> - self._sut_port_egress = topology.sut_port_egress
> - self._tg_port_ingress = topology.tg_port_ingress
> self._sut_ip_address_ingress = ip_interface("192.168.100.2/24")
> self._sut_ip_address_egress = ip_interface("192.168.101.2/24")
> self._tg_ip_address_egress = ip_interface("192.168.100.3/24")
> self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
>
> + @property
> + def name(self) -> str:
> + """The name of the test suite class."""
> + return type(self).__name__
> +
> + @property
> + def topology(self) -> Topology:
> + """The current topology in use."""
> + return self._ctx.topology
> +
> @classmethod
> def get_test_cases(cls) -> list[type["TestCase"]]:
> """A list of all the available test cases."""
> @@ -254,10 +247,10 @@ def send_packets_and_capture(
> A list of received packets.
> """
> packets = self._adjust_addresses(packets)
> - return self.tg_node.send_packets_and_capture(
> + return self._ctx.tg_node.send_packets_and_capture(
> packets,
> - self._tg_port_egress,
> - self._tg_port_ingress,
> + self._ctx.topology.tg_port_egress,
> + self._ctx.topology.tg_port_ingress,
> filter_config,
> duration,
> )
> @@ -272,7 +265,7 @@ def send_packets(
> packets: Packets to send.
> """
> packets = self._adjust_addresses(packets)
> - self.tg_node.send_packets(packets, self._tg_port_egress)
> + self._ctx.tg_node.send_packets(packets, self._ctx.topology.tg_port_egress)
>
> def get_expected_packets(
> self,
> @@ -352,15 +345,15 @@ def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> li
> # only be the Ether src/dst.
> if "src" not in packet.fields:
> packet.src = (
> - self._sut_port_egress.mac_address
> + self.topology.sut_port_egress.mac_address
> if expected
> - else self._tg_port_egress.mac_address
> + else self.topology.tg_port_egress.mac_address
> )
> if "dst" not in packet.fields:
> packet.dst = (
> - self._tg_port_ingress.mac_address
> + self.topology.tg_port_ingress.mac_address
> if expected
> - else self._sut_port_ingress.mac_address
> + else self.topology.sut_port_ingress.mac_address
> )
>
> # update l3 addresses
> @@ -400,10 +393,10 @@ def verify(self, condition: bool, failure_description: str) -> None:
>
> def _fail_test_case_verify(self, failure_description: str) -> None:
> self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
> - for command_res in self.sut_node.main_session.remote_session.history[-10:]:
> + for command_res in self._ctx.sut_node.main_session.remote_session.history[-10:]:
> self._logger.debug(command_res.command)
> self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
> - for command_res in self.tg_node.main_session.remote_session.history[-10:]:
> + for command_res in self._ctx.tg_node.main_session.remote_session.history[-10:]:
> self._logger.debug(command_res.command)
> raise TestCaseVerifyError(failure_description)
>
> @@ -517,14 +510,14 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
> self._logger.debug("Looking at the Ether layer.")
> self._logger.debug(
> f"Comparing received dst mac '{received_packet.dst}' "
> - f"with expected '{self._tg_port_ingress.mac_address}'."
> + f"with expected '{self.topology.tg_port_ingress.mac_address}'."
> )
> - if received_packet.dst != self._tg_port_ingress.mac_address:
> + if received_packet.dst != self.topology.tg_port_ingress.mac_address:
> return False
>
> - expected_src_mac = self._tg_port_egress.mac_address
> + expected_src_mac = self.topology.tg_port_egress.mac_address
> if l3:
> - expected_src_mac = self._sut_port_egress.mac_address
> + expected_src_mac = self.topology.sut_port_egress.mac_address
> self._logger.debug(
> f"Comparing received src mac '{received_packet.src}' "
> f"with expected '{expected_src_mac}'."
> diff --git a/dts/tests/TestSuite_blocklist.py b/dts/tests/TestSuite_blocklist.py
> index b9e9cd1d1a..ce7da1cc8f 100644
> --- a/dts/tests/TestSuite_blocklist.py
> +++ b/dts/tests/TestSuite_blocklist.py
> @@ -18,7 +18,7 @@ class TestBlocklist(TestSuite):
>
> def verify_blocklisted_ports(self, ports_to_block: list[Port]):
> """Runs testpmd with the given ports blocklisted and verifies the ports."""
> - with TestPmdShell(self.sut_node, allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
> + with TestPmdShell(allowed_ports=[], blocked_ports=ports_to_block) as testpmd:
> allowlisted_ports = {port.device_name for port in testpmd.show_port_info_all()}
> blocklisted_ports = {port.pci for port in ports_to_block}
>
> @@ -49,7 +49,7 @@ def one_port_blocklisted(self):
> Verify:
> That the port was successfully blocklisted.
> """
> - self.verify_blocklisted_ports(self.sut_node.ports[:1])
> + self.verify_blocklisted_ports(self.topology.sut_ports[:1])
>
> @func_test
> def all_but_one_port_blocklisted(self):
> @@ -60,4 +60,4 @@ def all_but_one_port_blocklisted(self):
> Verify:
> That all specified ports were successfully blocklisted.
> """
> - self.verify_blocklisted_ports(self.sut_node.ports[:-1])
> + self.verify_blocklisted_ports(self.topology.sut_ports[:-1])
> diff --git a/dts/tests/TestSuite_checksum_offload.py b/dts/tests/TestSuite_checksum_offload.py
> index a8bb6a71f7..b38d73421b 100644
> --- a/dts/tests/TestSuite_checksum_offload.py
> +++ b/dts/tests/TestSuite_checksum_offload.py
> @@ -128,7 +128,7 @@ def test_insert_checksums(self) -> None:
> Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
> Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> self.setup_hw_offload(testpmd=testpmd)
> @@ -160,7 +160,7 @@ def test_no_insert_checksums(self) -> None:
> Ether(dst=mac_id) / IPv6(src="::1") / UDP() / Raw(payload),
> Ether(dst=mac_id) / IPv6(src="::1") / TCP() / Raw(payload),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> testpmd.start()
> @@ -190,7 +190,7 @@ def test_l4_rx_checksum(self) -> None:
> Ether(dst=mac_id) / IP() / UDP(chksum=0xF),
> Ether(dst=mac_id) / IP() / TCP(chksum=0xF),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> self.setup_hw_offload(testpmd=testpmd)
> @@ -223,7 +223,7 @@ def test_l3_rx_checksum(self) -> None:
> Ether(dst=mac_id) / IP(chksum=0xF) / UDP(),
> Ether(dst=mac_id) / IP(chksum=0xF) / TCP(),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> self.setup_hw_offload(testpmd=testpmd)
> @@ -260,7 +260,7 @@ def test_validate_rx_checksum(self) -> None:
> Ether(dst=mac_id) / IPv6(src="::1") / UDP(chksum=0xF),
> Ether(dst=mac_id) / IPv6(src="::1") / TCP(chksum=0xF),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> self.setup_hw_offload(testpmd=testpmd)
> @@ -299,7 +299,7 @@ def test_vlan_checksum(self) -> None:
> Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / UDP(chksum=0xF) / Raw(payload),
> Ether(dst=mac_id) / Dot1Q(vlan=1) / IPv6(src="::1") / TCP(chksum=0xF) / Raw(payload),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> self.setup_hw_offload(testpmd=testpmd)
> @@ -333,7 +333,7 @@ def test_validate_sctp_checksum(self) -> None:
> Ether(dst=mac_id) / IP() / SCTP(),
> Ether(dst=mac_id) / IP() / SCTP(chksum=0xF),
> ]
> - with TestPmdShell(node=self.sut_node, enable_rx_cksum=True) as testpmd:
> + with TestPmdShell(enable_rx_cksum=True) as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.csum)
> testpmd.set_verbose(level=1)
> testpmd.csum_set_hw(layers=ChecksumOffloadOptions.sctp)
> diff --git a/dts/tests/TestSuite_dual_vlan.py b/dts/tests/TestSuite_dual_vlan.py
> index bdbee7e8d1..6af503528d 100644
> --- a/dts/tests/TestSuite_dual_vlan.py
> +++ b/dts/tests/TestSuite_dual_vlan.py
> @@ -193,7 +193,7 @@ def insert_second_vlan(self) -> None:
> Packets are received.
> Packet contains two VLAN tags.
> """
> - with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
> + with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
> testpmd.tx_vlan_set(port=self.tx_port, enable=True, vlan=self.vlan_insert_tag)
> testpmd.start()
> recv = self.send_packet_and_capture(
> @@ -229,7 +229,7 @@ def all_vlan_functions(self) -> None:
> / Dot1Q(vlan=self.inner_vlan_tag)
> / Raw(b"X" * 20)
> )
> - with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
> + with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
> testpmd.start()
> recv = self.send_packet_and_capture(send_pkt)
> self.verify(len(recv) > 0, "Unmodified packet was not received.")
> @@ -269,7 +269,7 @@ def maintains_priority(self) -> None:
> / Dot1Q(vlan=self.inner_vlan_tag, prio=2)
> / Raw(b"X" * 20)
> )
> - with TestPmdShell(self.sut_node, forward_mode=SimpleForwardingModes.mac) as testpmd:
> + with TestPmdShell(forward_mode=SimpleForwardingModes.mac) as testpmd:
> testpmd.start()
> recv = self.send_packet_and_capture(pkt)
> self.verify(len(recv) > 0, "Did not receive any packets when testing VLAN priority.")
> diff --git a/dts/tests/TestSuite_dynamic_config.py b/dts/tests/TestSuite_dynamic_config.py
> index 5a33f6f3c2..a4bee2e90b 100644
> --- a/dts/tests/TestSuite_dynamic_config.py
> +++ b/dts/tests/TestSuite_dynamic_config.py
> @@ -88,7 +88,7 @@ def test_default_mode(self) -> None:
> and sends two packets; one matching source MAC address and one unknown.
> Verifies that both are received.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> is_promisc = testpmd.show_port_info(0).is_promiscuous_mode_enabled
> self.verify(is_promisc, "Promiscuous mode was not enabled by default.")
> testpmd.start()
> @@ -106,7 +106,7 @@ def test_disable_promisc(self) -> None:
> and sends two packets; one matching source MAC address and one unknown.
> Verifies that only the matching address packet is received.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
> mac = testpmd.show_port_info(0).mac_address
> self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
> @@ -120,7 +120,7 @@ def test_disable_promisc_broadcast(self) -> None:
> and sends two packets; one matching source MAC address and one broadcast.
> Verifies that both packets are received.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
> mac = testpmd.show_port_info(0).mac_address
> self.send_packet_and_verify(should_receive=True, mac_address=str(mac))
> @@ -134,7 +134,7 @@ def test_disable_promisc_multicast(self) -> None:
> and sends two packets; one matching source MAC address and one multicast.
> Verifies that the multicast packet is only received once allmulticast mode is enabled.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd = self.disable_promisc_setup(testpmd=testpmd, port_id=0)
> testpmd.set_multicast_all(on=False)
> # 01:00:5E:00:00:01 is the first of the multicast MAC range of addresses
> diff --git a/dts/tests/TestSuite_dynamic_queue_conf.py b/dts/tests/TestSuite_dynamic_queue_conf.py
> index e55716f545..344dd540eb 100644
> --- a/dts/tests/TestSuite_dynamic_queue_conf.py
> +++ b/dts/tests/TestSuite_dynamic_queue_conf.py
> @@ -84,7 +84,6 @@ def wrap(self: "TestDynamicQueueConf", is_rx_testing: bool) -> None:
> queues_to_config.add(random.randint(1, self.number_of_queues - 1))
> unchanged_queues = set(range(self.number_of_queues)) - queues_to_config
> with TestPmdShell(
> - self.sut_node,
> port_topology=PortTopology.chained,
> rx_queues=self.number_of_queues,
> tx_queues=self.number_of_queues,
> diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
> index 031b94de4d..141f2bc4c9 100644
> --- a/dts/tests/TestSuite_hello_world.py
> +++ b/dts/tests/TestSuite_hello_world.py
> @@ -23,6 +23,6 @@ def test_hello_world(self) -> None:
> Verify:
> The testpmd session throws no errors.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd.start()
> self.log("Hello World!")
> diff --git a/dts/tests/TestSuite_l2fwd.py b/dts/tests/TestSuite_l2fwd.py
> index 0f6ff18907..0555d75ed8 100644
> --- a/dts/tests/TestSuite_l2fwd.py
> +++ b/dts/tests/TestSuite_l2fwd.py
> @@ -7,6 +7,7 @@
> The forwarding test is performed with several packets being sent at once.
> """
>
> +from framework.context import filter_cores
> from framework.params.testpmd import EthPeer, SimpleForwardingModes
> from framework.remote_session.testpmd_shell import TestPmdShell
> from framework.test_suite import TestSuite, func_test
> @@ -33,6 +34,7 @@ def set_up_suite(self) -> None:
> """
> self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
>
> + @filter_cores(LogicalCoreCount(cores_per_socket=4))
> @func_test
> def l2fwd_integrity(self) -> None:
> """Test the L2 forwarding integrity.
> @@ -44,11 +46,12 @@ def l2fwd_integrity(self) -> None:
> """
> queues = [1, 2, 4, 8]
>
> + self.topology.sut_ports[0]
> + self.topology.tg_ports[0]
> +
> with TestPmdShell(
> - self.sut_node,
> - lcore_filter_specifier=LogicalCoreCount(cores_per_socket=4),
> forward_mode=SimpleForwardingModes.mac,
> - eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
> + eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
> disable_device_start=True,
> ) as shell:
> for queues_num in queues:
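
The `l2fwd` hunk replaces the `lcore_filter_specifier` constructor argument with a `@filter_cores` decorator on the test case. A rough sketch of how such a decorator could attach the core-count specifier to the wrapped test (all names here are illustrative assumptions, not the framework's actual internals):

```python
# Hypothetical sketch of a @filter_cores-style decorator. In the real
# framework the specifier would be recorded on the runtime context; here
# it is simply attached to the wrapped function for inspection.
from dataclasses import dataclass
from functools import wraps


@dataclass
class LogicalCoreCount:
    cores_per_socket: int


def filter_cores(spec: LogicalCoreCount):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)

        # Expose the specifier so the shell setup can read it later.
        wrapper.lcore_filter = spec
        return wrapper

    return decorator


@filter_cores(LogicalCoreCount(cores_per_socket=4))
def l2fwd_integrity() -> str:
    return "ran"


print(l2fwd_integrity())                               # -> ran
print(l2fwd_integrity.lcore_filter.cores_per_socket)   # -> 4
```

This keeps per-test resource requirements declarative at the test-case level rather than buried in a shell constructor call.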
> diff --git a/dts/tests/TestSuite_mac_filter.py b/dts/tests/TestSuite_mac_filter.py
> index 11e4b595c7..e6c55d3ec6 100644
> --- a/dts/tests/TestSuite_mac_filter.py
> +++ b/dts/tests/TestSuite_mac_filter.py
> @@ -101,10 +101,10 @@ def test_add_remove_mac_addresses(self) -> None:
> Remove the fake mac address from the PMD's address pool.
> Send a packet with the fake mac address to the PMD. (Should not receive)
> """
> - with TestPmdShell(self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd.set_promisc(0, enable=False)
> testpmd.start()
> - mac_address = self._sut_port_ingress.mac_address
> + mac_address = self.topology.sut_port_ingress.mac_address
>
> # Send a packet with NIC default mac address
> self.send_packet_and_verify(mac_address=mac_address, should_receive=True)
> @@ -137,9 +137,9 @@ def test_invalid_address(self) -> None:
> Determine the device's mac address pool size, and fill the pool with fake addresses.
> Attempt to add another fake mac address, overloading the address pool. (Should fail)
> """
> - with TestPmdShell(self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd.start()
> - mac_address = self._sut_port_ingress.mac_address
> + mac_address = self.topology.sut_port_ingress.mac_address
> try:
> testpmd.set_mac_addr(0, "00:00:00:00:00:00", add=True)
> self.verify(False, "Invalid mac address added.")
> @@ -191,7 +191,7 @@ def test_multicast_filter(self) -> None:
> Remove the fake multicast address from the PMDs multicast address filter.
> Send a packet with the fake multicast address to the PMD. (Should not receive)
> """
> - with TestPmdShell(self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd.start()
> testpmd.set_promisc(0, enable=False)
> multicast_address = "01:00:5E:00:00:00"
> diff --git a/dts/tests/TestSuite_mtu.py b/dts/tests/TestSuite_mtu.py
> index 4b59515bae..63e570ba03 100644
> --- a/dts/tests/TestSuite_mtu.py
> +++ b/dts/tests/TestSuite_mtu.py
> @@ -51,8 +51,8 @@ def set_up_suite(self) -> None:
> Set traffic generator MTU lengths to a size greater than scope of all
> test cases.
> """
> - self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_egress)
> - self.tg_node.main_session.configure_port_mtu(JUMBO_MTU + 200, self._tg_port_ingress)
> + self.topology.tg_port_egress.configure_mtu(JUMBO_MTU + 200)
> + self.topology.tg_port_ingress.configure_mtu(JUMBO_MTU + 200)
>
> def send_packet_and_verify(self, pkt_size: int, should_receive: bool) -> None:
> """Generate, send a packet, and assess its behavior based on a given packet size.
> @@ -156,11 +156,7 @@ def test_runtime_mtu_updating_and_forwarding(self) -> None:
> Verify that standard MTU packets forward, in addition to packets within the limits of
> an MTU size set during runtime.
> """
> - with TestPmdShell(
> - self.sut_node,
> - tx_offloads=0x8000,
> - mbuf_size=[JUMBO_MTU + 200],
> - ) as testpmd:
> + with TestPmdShell(tx_offloads=0x8000, mbuf_size=[JUMBO_MTU + 200]) as testpmd:
> testpmd.set_port_mtu_all(1500, verify=True)
> testpmd.start()
> self.assess_mtu_boundary(testpmd, 1500)
> @@ -201,7 +197,6 @@ def test_cli_mtu_forwarding_for_std_packets(self) -> None:
> MTU modification.
> """
> with TestPmdShell(
> - self.sut_node,
> tx_offloads=0x8000,
> mbuf_size=[JUMBO_MTU + 200],
> mbcache=200,
> @@ -230,7 +225,6 @@ def test_cli_jumbo_forwarding_for_jumbo_mtu(self) -> None:
> Verify that all packets are forwarded after pre-runtime MTU modification.
> """
> with TestPmdShell(
> - self.sut_node,
> tx_offloads=0x8000,
> mbuf_size=[JUMBO_MTU + 200],
> mbcache=200,
> @@ -259,7 +253,6 @@ def test_cli_mtu_std_packets_for_jumbo_mtu(self) -> None:
> MTU modification.
> """
> with TestPmdShell(
> - self.sut_node,
> tx_offloads=0x8000,
> mbuf_size=[JUMBO_MTU + 200],
> mbcache=200,
> @@ -277,5 +270,5 @@ def tear_down_suite(self) -> None:
> Teardown:
> Set the MTU size of the traffic generator back to the standard 1518 byte size.
> """
> - self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_egress)
> - self.tg_node.main_session.configure_port_mtu(STANDARD_MTU, self._tg_port_ingress)
> + self.topology.tg_port_egress.configure_mtu(STANDARD_MTU)
> + self.topology.tg_port_ingress.configure_mtu(STANDARD_MTU)
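
The MTU hunks move configuration from a session method that takes a port (`node.main_session.configure_port_mtu(mtu, port)`) to a method on the Port model itself. A small sketch of that delegation, under assumed names:

```python
# Hypothetical sketch of the Port-model direction of the change: the port
# delegates to its owning session, so callers never touch main_session.
class Session:
    def configure_port_mtu(self, mtu: int, port: "Port") -> None:
        # Stand-in for the real remote-session command.
        port.mtu = mtu


class Port:
    def __init__(self, session: Session) -> None:
        self._session = session
        self.mtu = 1500  # assumed default

    def configure_mtu(self, mtu: int) -> None:
        # Callers write port.configure_mtu(9000) instead of
        # node.main_session.configure_port_mtu(9000, port).
        self._session.configure_port_mtu(mtu, self)


port = Port(Session())
port.configure_mtu(9000)
print(port.mtu)  # -> 9000
```

Keeping the session reference inside the Port model is what lets the test suites above shed their `self.tg_node.main_session` plumbing.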
> diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py
> index a8c111eea7..5e23f28bc6 100644
> --- a/dts/tests/TestSuite_pmd_buffer_scatter.py
> +++ b/dts/tests/TestSuite_pmd_buffer_scatter.py
> @@ -58,8 +58,8 @@ def set_up_suite(self) -> None:
> Increase the MTU of both ports on the traffic generator to 9000
> to support larger packet sizes.
> """
> - self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_egress)
> - self.tg_node.main_session.configure_port_mtu(9000, self._tg_port_ingress)
> + self.topology.tg_port_egress.configure_mtu(9000)
> + self.topology.tg_port_ingress.configure_mtu(9000)
>
> def scatter_pktgen_send_packet(self, pkt_size: int) -> list[Packet]:
> """Generate and send a packet to the SUT then capture what is forwarded back.
> @@ -110,7 +110,6 @@ def pmd_scatter(self, mb_size: int, enable_offload: bool = False) -> None:
> Start testpmd and run functional test with preset `mb_size`.
> """
> with TestPmdShell(
> - self.sut_node,
> forward_mode=SimpleForwardingModes.mac,
> mbcache=200,
> mbuf_size=[mb_size],
> @@ -147,5 +146,5 @@ def tear_down_suite(self) -> None:
> Teardown:
> Set the MTU of the tg_node back to a more standard size of 1500.
> """
> - self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_egress)
> - self.tg_node.main_session.configure_port_mtu(1500, self._tg_port_ingress)
> + self.topology.tg_port_egress.configure_mtu(1500)
> + self.topology.tg_port_ingress.configure_mtu(1500)
> diff --git a/dts/tests/TestSuite_port_restart_config_persistency.py b/dts/tests/TestSuite_port_restart_config_persistency.py
> index ad42c6c2e6..42ea221586 100644
> --- a/dts/tests/TestSuite_port_restart_config_persistency.py
> +++ b/dts/tests/TestSuite_port_restart_config_persistency.py
> @@ -61,8 +61,8 @@ def port_configuration_persistence(self) -> None:
> Verify:
> The configuration persists after the port is restarted.
> """
> - with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
> - for port_id in range(len(self.sut_node.ports)):
> + with TestPmdShell(disable_device_start=True) as testpmd:
> + for port_id, _ in enumerate(self.topology.sut_ports):
> testpmd.set_port_mtu(port_id=port_id, mtu=STANDARD_MTU, verify=True)
> self.restart_port_and_verify(port_id, testpmd, "MTU")
>
> @@ -90,8 +90,8 @@ def flow_ctrl_port_configuration_persistence(self) -> None:
> Verify:
> The configuration persists after the port is restarted.
> """
> - with TestPmdShell(self.sut_node, disable_device_start=True) as testpmd:
> - for port_id in range(len(self.sut_node.ports)):
> + with TestPmdShell(disable_device_start=True) as testpmd:
> + for port_id, _ in enumerate(self.topology.sut_ports):
> flow_ctrl = TestPmdPortFlowCtrl(rx=True)
> testpmd.set_flow_control(port=port_id, flow_ctrl=flow_ctrl)
> self.restart_port_and_verify(port_id, testpmd, "flow_ctrl")
> diff --git a/dts/tests/TestSuite_promisc_support.py b/dts/tests/TestSuite_promisc_support.py
> index a3ea2461f0..445f6e1d69 100644
> --- a/dts/tests/TestSuite_promisc_support.py
> +++ b/dts/tests/TestSuite_promisc_support.py
> @@ -38,10 +38,8 @@ def test_promisc_packets(self) -> None:
> """
> packet = [Ether(dst=self.ALTERNATIVE_MAC_ADDRESS) / IP() / Raw(load=b"\x00" * 64)]
>
> - with TestPmdShell(
> - self.sut_node,
> - ) as testpmd:
> - for port_id in range(len(self.sut_node.ports)):
> + with TestPmdShell() as testpmd:
> + for port_id, _ in enumerate(self.topology.sut_ports):
> testpmd.set_promisc(port=port_id, enable=True, verify=True)
> testpmd.start()
>
> @@ -51,7 +49,7 @@ def test_promisc_packets(self) -> None:
>
> testpmd.stop()
>
> - for port_id in range(len(self.sut_node.ports)):
> + for port_id, _ in enumerate(self.topology.sut_ports):
> testpmd.set_promisc(port=port_id, enable=False, verify=True)
> testpmd.start()
>
> diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> index 7ed266dac0..8a5799c684 100644
> --- a/dts/tests/TestSuite_smoke_tests.py
> +++ b/dts/tests/TestSuite_smoke_tests.py
> @@ -46,6 +46,7 @@ def set_up_suite(self) -> None:
> Setup:
> Set the build directory path and a list of NICs in the SUT node.
> """
> + self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
> self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
> self.nics_in_node = self.sut_node.config.ports
>
> @@ -104,7 +105,7 @@ def test_devices_listed_in_testpmd(self) -> None:
> Test:
> List all devices found in testpmd and verify the configured devices are among them.
> """
> - with TestPmdShell(self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> dev_list = [str(x) for x in testpmd.get_devices()]
> for nic in self.nics_in_node:
> self.verify(
> diff --git a/dts/tests/TestSuite_softnic.py b/dts/tests/TestSuite_softnic.py
> index 07480db392..370fd6b419 100644
> --- a/dts/tests/TestSuite_softnic.py
> +++ b/dts/tests/TestSuite_softnic.py
> @@ -32,6 +32,7 @@ def set_up_suite(self) -> None:
> Setup:
> Generate the random packets that will be sent and create the softnic config files.
> """
> + self.sut_node = self._ctx.sut_node # FIXME: accessing the context should be forbidden
> self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
> self.cli_file = self.prepare_softnic_files()
>
> @@ -105,9 +106,8 @@ def softnic(self) -> None:
>
> """
> with TestPmdShell(
> - self.sut_node,
> vdevs=[VirtualDevice(f"net_softnic0,firmware={self.cli_file},cpu_id=1,conn_port=8086")],
> - eth_peer=[EthPeer(1, self.tg_node.ports[1].mac_address)],
> + eth_peer=[EthPeer(1, self.topology.tg_port_ingress.mac_address)],
> port_topology=None,
> ) as shell:
> shell.start()
> diff --git a/dts/tests/TestSuite_uni_pkt.py b/dts/tests/TestSuite_uni_pkt.py
> index 0898187675..656a69b0f1 100644
> --- a/dts/tests/TestSuite_uni_pkt.py
> +++ b/dts/tests/TestSuite_uni_pkt.py
> @@ -85,7 +85,7 @@ def test_l2_packet_detect(self) -> None:
> mac_id = "00:00:00:00:00:01"
> packet_list = [Ether(dst=mac_id, type=0x88F7) / Raw(), Ether(dst=mac_id) / ARP() / Raw()]
> flag_list = [RtePTypes.L2_ETHER_TIMESYNC, RtePTypes.L2_ETHER_ARP]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
>
> @func_test
> @@ -118,7 +118,7 @@ def test_l3_l4_packet_detect(self) -> None:
> RtePTypes.L4_ICMP,
> RtePTypes.L4_FRAG | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L2_ETHER,
> ]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
>
> @func_test
> @@ -147,7 +147,7 @@ def test_ipv6_l4_packet_detect(self) -> None:
> RtePTypes.L4_TCP,
> RtePTypes.L3_IPV6_EXT_UNKNOWN,
> ]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
>
> @func_test
> @@ -182,7 +182,7 @@ def test_l3_tunnel_packet_detect(self) -> None:
> RtePTypes.TUNNEL_IP | RtePTypes.INNER_L4_ICMP,
> RtePTypes.TUNNEL_IP | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
> ]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
>
> @func_test
> @@ -215,7 +215,7 @@ def test_gre_tunnel_packet_detect(self) -> None:
> RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_SCTP,
> RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
> ]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
>
> @func_test
> @@ -250,7 +250,7 @@ def test_nsh_packet_detect(self) -> None:
> RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV4_EXT_UNKNOWN | RtePTypes.L4_SCTP,
> RtePTypes.L2_ETHER_NSH | RtePTypes.L3_IPV6_EXT_UNKNOWN | RtePTypes.L4_NONFRAG,
> ]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
>
> @func_test
> @@ -295,6 +295,6 @@ def test_vxlan_tunnel_packet_detect(self) -> None:
> RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L4_ICMP,
> RtePTypes.TUNNEL_GRENAT | RtePTypes.INNER_L3_IPV6_EXT_UNKNOWN | RtePTypes.INNER_L4_FRAG,
> ]
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd.rx_vxlan(4789, 0, True)
> self.setup_session(testpmd=testpmd, expected_flags=flag_list, packet_list=packet_list)
> diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
> index c67520baef..d2a9e614d4 100644
> --- a/dts/tests/TestSuite_vlan.py
> +++ b/dts/tests/TestSuite_vlan.py
> @@ -124,7 +124,7 @@ def test_vlan_receipt_no_stripping(self) -> None:
> Test:
> Create an interactive testpmd shell and verify a VLAN packet.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
> testpmd.start()
> self.send_vlan_packet_and_verify(True, strip=False, vlan_id=1)
> @@ -137,7 +137,7 @@ def test_vlan_receipt_stripping(self) -> None:
> Test:
> Create an interactive testpmd shell and verify a VLAN packet.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
> testpmd.set_vlan_strip(port=0, enable=True)
> testpmd.start()
> @@ -150,7 +150,7 @@ def test_vlan_no_receipt(self) -> None:
> Test:
> Create an interactive testpmd shell and verify a VLAN packet.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
> testpmd.start()
> self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
> @@ -162,7 +162,7 @@ def test_vlan_header_insertion(self) -> None:
> Test:
> Create an interactive testpmd shell and verify a non-VLAN packet.
> """
> - with TestPmdShell(node=self.sut_node) as testpmd:
> + with TestPmdShell() as testpmd:
> testpmd.set_forward_mode(SimpleForwardingModes.mac)
> testpmd.set_promisc(port=0, enable=False)
> testpmd.stop_all_ports()
> --
> 2.43.0
>
Thread overview: 33+ messages
2025-02-03 15:16 [RFC PATCH 0/7] dts: revamp framework Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 1/7] dts: add port topology configuration Luca Vizzarro
2025-02-07 18:25 ` Nicholas Pratte
2025-02-12 16:47 ` Luca Vizzarro
2025-02-11 18:00 ` Dean Marx
2025-02-12 16:47 ` Luca Vizzarro
2025-02-03 15:16 ` [RFC PATCH 2/7] dts: isolate test specification to config Luca Vizzarro
2025-02-10 19:09 ` Nicholas Pratte
2025-02-11 18:11 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 3/7] dts: revamp Topology model Luca Vizzarro
2025-02-10 19:42 ` Nicholas Pratte
2025-02-11 18:18 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 4/7] dts: improve Port model Luca Vizzarro
2025-02-11 18:56 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 5/7] dts: add runtime status Luca Vizzarro
2025-02-11 19:45 ` Dean Marx
2025-02-12 18:50 ` Nicholas Pratte
2025-02-03 15:16 ` [RFC PATCH 6/7] dts: add global runtime context Luca Vizzarro
2025-02-11 20:26 ` Dean Marx
2025-02-03 15:16 ` [RFC PATCH 7/7] dts: revamp runtime internals Luca Vizzarro
2025-02-11 20:50 ` Dean Marx
2025-02-04 21:08 ` [RFC PATCH 0/7] dts: revamp framework Dean Marx
2025-02-12 16:52 ` Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 " Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 1/7] dts: add port topology configuration Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 2/7] dts: isolate test specification to config Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 3/7] dts: revamp Topology model Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 4/7] dts: improve Port model Luca Vizzarro
2025-02-12 16:45 ` [PATCH v2 5/7] dts: add global runtime context Luca Vizzarro
2025-02-12 19:45 ` Nicholas Pratte
2025-02-12 16:45 ` [PATCH v2 6/7] dts: revamp runtime internals Luca Vizzarro
2025-02-12 16:46 ` [PATCH v2 7/7] dts: remove node distinction Luca Vizzarro
2025-02-12 16:47 ` [PATCH v2 0/7] dts: revamp framework Luca Vizzarro