From: Luca Vizzarro <luca.vizzarro@arm.com>
To: dev@dpdk.org
Cc: Patrick Robb <probb@iol.unh.edu>,
Luca Vizzarro <luca.vizzarro@arm.com>,
Paul Szczepanek <paul.szczepanek@arm.com>
Subject: [PATCH v2 4/7] dts: apply Ruff formatting
Date: Thu, 12 Dec 2024 14:00:10 +0000
Message-ID: <20241212140013.17548-5-luca.vizzarro@arm.com>
In-Reply-To: <20241212140013.17548-1-luca.vizzarro@arm.com>
While Ruff's formatter is Black-compatible and produces near-identical
output, it still reformats a small set of elements differently.
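One such element visible in this patch: Ruff leaves implicitly concatenated f-strings side by side on a single line (see the runner.py hunk) rather than merging them into one literal. The two forms are equivalent at runtime; a minimal sketch, using a hypothetical `suites` value:

```python
# Two adjacent f-strings are implicitly concatenated by Python at parse
# time, so the split form and the merged form produce the same string.
suites = ["smoke_tests"]  # hypothetical value for illustration
message = f"Invalid test suite configuration found: " f"{suites}."
assert message == f"Invalid test suite configuration found: {suites}."
print(message)
```

Black wrapped such strings across lines when over the length limit; Ruff joins them onto one line when they fit, which accounts for several hunks below.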
Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
dts/framework/params/eal.py | 5 +-
dts/framework/remote_session/dpdk_shell.py | 1 -
dts/framework/remote_session/python_shell.py | 1 +
.../single_active_interactive_shell.py | 8 +--
dts/framework/runner.py | 36 ++++----------
dts/framework/settings.py | 5 +-
dts/framework/test_suite.py | 37 ++++----------
dts/framework/testbed_model/capability.py | 20 ++------
dts/framework/testbed_model/cpu.py | 28 ++++-------
dts/framework/testbed_model/linux_session.py | 36 ++++----------
dts/framework/testbed_model/node.py | 4 +-
dts/framework/testbed_model/os_session.py | 13 ++---
dts/framework/testbed_model/port.py | 1 -
dts/framework/testbed_model/posix_session.py | 48 +++++-------------
dts/framework/testbed_model/sut_node.py | 49 +++++--------------
dts/framework/testbed_model/topology.py | 4 +-
.../traffic_generator/__init__.py | 8 +--
.../testbed_model/traffic_generator/scapy.py | 5 +-
dts/framework/utils.py | 32 +++---------
dts/tests/TestSuite_vlan.py | 8 +--
20 files changed, 90 insertions(+), 259 deletions(-)
diff --git a/dts/framework/params/eal.py b/dts/framework/params/eal.py
index 71bc781eab..b90ff33dcf 100644
--- a/dts/framework/params/eal.py
+++ b/dts/framework/params/eal.py
@@ -27,10 +27,7 @@ class EalParams(Params):
no_pci: Switch to disable PCI bus, e.g.: ``no_pci=True``.
vdevs: Virtual devices, e.g.::
- vdevs=[
- VirtualDevice('net_ring0'),
- VirtualDevice('net_ring1')
- ]
+ vdevs = [VirtualDevice("net_ring0"), VirtualDevice("net_ring1")]
ports: The list of ports to allow.
other_eal_param: user defined DPDK EAL parameters, e.g.::
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index 82fa4755f0..c11d9ab81c 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -6,7 +6,6 @@
Provides a base class to create interactive shells based on DPDK.
"""
-
from abc import ABC
from pathlib import PurePath
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index 953ed100df..9d4abab12c 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -6,6 +6,7 @@
Typical usage example in a TestSuite::
from framework.remote_session import PythonShell
+
python_shell = PythonShell(self.tg_node, timeout=5, privileged=True)
python_shell.send_command("print('Hello World')")
python_shell.close()
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index a53e8fc6e1..3539f634f9 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -124,9 +124,7 @@ def __init__(
super().__init__()
def _setup_ssh_channel(self):
- self._ssh_channel = (
- self._node.main_session.interactive_session.session.invoke_shell()
- )
+ self._ssh_channel = self._node.main_session.interactive_session.session.invoke_shell()
self._stdin = self._ssh_channel.makefile_stdin("w")
self._stdout = self._ssh_channel.makefile("r")
self._ssh_channel.settimeout(self._timeout)
@@ -136,9 +134,7 @@ def _make_start_command(self) -> str:
"""Makes the command that starts the interactive shell."""
start_command = f"{self._real_path} {self._app_params or ''}"
if self._privileged:
- start_command = self._node.main_session._get_privileged_command(
- start_command
- )
+ start_command = self._node.main_session._get_privileged_command(start_command)
return start_command
def _start_application(self) -> None:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index d228ed1b18..510be1a870 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -136,25 +136,17 @@ def run(self) -> None:
# for all test run sections
for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
- test_run_config, sut_node_config, tg_node_config = (
- test_run_with_nodes_config
- )
+ test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
self._logger.set_stage(DtsStage.test_run_setup)
- self._logger.info(
- f"Running test run with SUT '{sut_node_config.name}'."
- )
+ self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
self._init_random_seed(test_run_config)
test_run_result = self._result.add_test_run(test_run_config)
# we don't want to modify the original config, so create a copy
test_run_test_suites = list(
- SETTINGS.test_suites
- if SETTINGS.test_suites
- else test_run_config.test_suites
+ SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites
)
if not test_run_config.skip_smoke_tests:
- test_run_test_suites[:0] = [
- TestSuiteConfig(test_suite="smoke_tests")
- ]
+ test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
try:
test_suites_with_cases = self._get_test_suites_with_cases(
test_run_test_suites, test_run_config.func, test_run_config.perf
@@ -162,8 +154,7 @@ def run(self) -> None:
test_run_result.test_suites_with_cases = test_suites_with_cases
except Exception as e:
self._logger.exception(
- f"Invalid test suite configuration found: "
- f"{test_run_test_suites}."
+ f"Invalid test suite configuration found: " f"{test_run_test_suites}."
)
test_run_result.update_setup(Result.FAIL, e)
@@ -245,9 +236,7 @@ def _get_test_suites_with_cases(
test_cases.extend(perf_test_cases)
test_suites_with_cases.append(
- TestSuiteWithCases(
- test_suite_class=test_suite_class, test_cases=test_cases
- )
+ TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
)
return test_suites_with_cases
@@ -351,9 +340,7 @@ def _run_test_run(
test_run_result.update_setup(Result.FAIL, e)
else:
- self._run_test_suites(
- sut_node, tg_node, test_run_result, test_suites_with_cases
- )
+ self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
finally:
try:
@@ -371,16 +358,13 @@ def _get_supported_capabilities(
topology_config: Topology,
test_suites_with_cases: Iterable[TestSuiteWithCases],
) -> set[Capability]:
-
capabilities_to_check = set()
for test_suite_with_cases in test_suites_with_cases:
capabilities_to_check.update(test_suite_with_cases.required_capabilities)
self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
- return get_supported_capabilities(
- sut_node, topology_config, capabilities_to_check
- )
+ return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
def _run_test_suites(
self,
@@ -625,9 +609,7 @@ def _execute_test_case(
self._logger.exception(f"Test case execution ERROR: {test_case_name}")
test_case_result.update(Result.ERROR, e)
except KeyboardInterrupt:
- self._logger.error(
- f"Test case execution INTERRUPTED by user: {test_case_name}"
- )
+ self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
test_case_result.update(Result.SKIP)
raise KeyboardInterrupt("Stop DTS")
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 91f317105a..873d400bec 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -88,6 +88,7 @@
Typical usage example::
from framework.settings import SETTINGS
+
foo = SETTINGS.foo
"""
@@ -257,9 +258,7 @@ def _get_help_string(self, action):
return help
-def _required_with_one_of(
- parser: _DTSArgumentParser, action: Action, *required_dests: str
-) -> None:
+def _required_with_one_of(parser: _DTSArgumentParser, action: Action, *required_dests: str) -> None:
"""Verify that `action` is listed together with at least one of `required_dests`.
Verify that when `action` is among the command-line arguments or
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index fd6706289e..161bb10066 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -300,9 +300,7 @@ def get_expected_packet(self, packet: Packet) -> Packet:
"""
return self.get_expected_packets([packet])[0]
- def _adjust_addresses(
- self, packets: list[Packet], expected: bool = False
- ) -> list[Packet]:
+ def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> list[Packet]:
"""L2 and L3 address additions in both directions.
Copies of `packets` will be made, modified and returned in this method.
@@ -380,21 +378,15 @@ def verify(self, condition: bool, failure_description: str) -> None:
self._fail_test_case_verify(failure_description)
def _fail_test_case_verify(self, failure_description: str) -> None:
- self._logger.debug(
- "A test case failed, showing the last 10 commands executed on SUT:"
- )
+ self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
for command_res in self.sut_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
- self._logger.debug(
- "A test case failed, showing the last 10 commands executed on TG:"
- )
+ self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
for command_res in self.tg_node.main_session.remote_session.history[-10:]:
self._logger.debug(command_res.command)
raise TestCaseVerifyError(failure_description)
- def verify_packets(
- self, expected_packet: Packet, received_packets: list[Packet]
- ) -> None:
+ def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
"""Verify that `expected_packet` has been received.
Go through `received_packets` and check that `expected_packet` is among them.
@@ -416,9 +408,7 @@ def verify_packets(
f"The expected packet {get_packet_summaries(expected_packet)} "
f"not found among received {get_packet_summaries(received_packets)}"
)
- self._fail_test_case_verify(
- "An expected packet not found among received packets."
- )
+ self._fail_test_case_verify("An expected packet not found among received packets.")
def match_all_packets(
self, expected_packets: list[Packet], received_packets: list[Packet]
@@ -454,9 +444,7 @@ def match_all_packets(
f"but {missing_packets_count} were missing."
)
- def _compare_packets(
- self, expected_packet: Packet, received_packet: Packet
- ) -> bool:
+ def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
self._logger.debug(
f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
)
@@ -485,14 +473,10 @@ def _compare_packets(
expected_payload = expected_payload.payload
if expected_payload:
- self._logger.debug(
- f"The expected packet did not contain {expected_payload}."
- )
+ self._logger.debug(f"The expected packet did not contain {expected_payload}.")
return False
if received_payload and received_payload.__class__ != Padding:
- self._logger.debug(
- "The received payload had extra layers which were not padding."
- )
+ self._logger.debug("The received payload had extra layers which were not padding.")
return False
return True
@@ -519,10 +503,7 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
self._logger.debug("Looking at the IP layer.")
- if (
- received_packet.src != expected_packet.src
- or received_packet.dst != expected_packet.dst
- ):
+ if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
return False
return True
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 6e06c75c3d..63f99c4479 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -130,9 +130,7 @@ def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
@classmethod
@abstractmethod
- def get_supported_capabilities(
- cls, sut_node: SutNode, topology: "Topology"
- ) -> set[Self]:
+ def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
"""Get the support status of each registered capability.
Each subclass must implement this method and return the subset of supported capabilities
@@ -242,18 +240,14 @@ def get_supported_capabilities(
if capability.nic_capability in supported_capabilities:
supported_conditional_capabilities.add(capability)
- logger.debug(
- f"Found supported capabilities {supported_conditional_capabilities}."
- )
+ logger.debug(f"Found supported capabilities {supported_conditional_capabilities}.")
return supported_conditional_capabilities
@classmethod
def _get_decorated_capabilities_map(
cls,
) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
- capabilities_map: dict[
- TestPmdShellDecorator | None, set["DecoratedNicCapability"]
- ] = {}
+ capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
for capability in cls.capabilities_to_check:
if capability.capability_decorator not in capabilities_map:
capabilities_map[capability.capability_decorator] = set()
@@ -316,9 +310,7 @@ class TopologyCapability(Capability):
_unique_capabilities: ClassVar[dict[str, Self]] = {}
def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
- test_case_or_suite.required_capabilities.discard(
- test_case_or_suite.topology_type
- )
+ test_case_or_suite.required_capabilities.discard(test_case_or_suite.topology_type)
test_case_or_suite.topology_type = self
@classmethod
@@ -458,9 +450,7 @@ class TestProtocol(Protocol):
#: The reason for skipping the test case or suite.
skip_reason: ClassVar[str] = ""
#: The topology type of the test case or suite.
- topology_type: ClassVar[TopologyCapability] = TopologyCapability(
- TopologyType.default
- )
+ topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
#: The capabilities the test case or suite requires in order to be executed.
required_capabilities: ClassVar[set[Capability]] = set()
diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 0746878770..46bf13960d 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -67,10 +67,10 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
There are four supported logical core list formats::
- lcore_list=[LogicalCore1, LogicalCore2] # a list of LogicalCores
- lcore_list=[0,1,2,3] # a list of int indices
- lcore_list=['0','1','2-3'] # a list of str indices; ranges are supported
- lcore_list='0,1,2-3' # a comma delimited str of indices; ranges are supported
+ lcore_list = [LogicalCore1, LogicalCore2] # a list of LogicalCores
+ lcore_list = [0, 1, 2, 3] # a list of int indices
+ lcore_list = ["0", "1", "2-3"] # a list of str indices; ranges are supported
+ lcore_list = "0,1,2-3" # a comma delimited str of indices; ranges are supported
Args:
lcore_list: Various ways to represent multiple logical cores.
@@ -87,9 +87,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
# the input lcores may not be sorted
self._lcore_list.sort()
- self._lcore_str = (
- f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
- )
+ self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
@property
def lcore_list(self) -> list[int]:
@@ -104,15 +102,11 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
segment.append(lcore_id)
else:
formatted_core_list.append(
- f"{segment[0]}-{segment[-1]}"
- if len(segment) > 1
- else f"{segment[0]}"
+ f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
)
current_core_index = lcore_ids_list.index(lcore_id)
formatted_core_list.extend(
- self._get_consecutive_lcores_range(
- lcore_ids_list[current_core_index:]
- )
+ self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
)
segment.clear()
break
@@ -172,9 +166,7 @@ def __init__(
self._filter_specifier = filter_specifier
# sorting by core is needed in case hyperthreading is enabled
- self._lcores_to_filter = sorted(
- lcore_list, key=lambda x: x.core, reverse=not ascending
- )
+ self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
self.filter()
@abstractmethod
@@ -302,9 +294,7 @@ def _filter_cores_from_socket(
else:
# we have enough lcores per this core
continue
- elif self._filter_specifier.cores_per_socket > len(
- lcore_count_per_core_map
- ):
+ elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
# only add cores if we need more
lcore_count_per_core_map[lcore.core] = 1
filtered_lcores.append(lcore)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index b316f23b4e..bda2d448f7 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -83,9 +83,7 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
"""Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
return dpdk_prefix
- def setup_hugepages(
- self, number_of: int, hugepage_size: int, force_first_numa: bool
- ) -> None:
+ def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
"""Overrides :meth:`~.os_session.OSSession.setup_hugepages`.
Raises:
@@ -133,9 +131,7 @@ def _mount_huge_pages(self) -> None:
if result.stdout == "":
remote_mount_path = "/mnt/huge"
self.send_command(f"mkdir -p {remote_mount_path}", privileged=True)
- self.send_command(
- f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True
- )
+ self.send_command(f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True)
def _supports_numa(self) -> bool:
# the system supports numa if self._numa_nodes is non-empty and there are more
@@ -143,13 +139,9 @@ def _supports_numa(self) -> bool:
# there's no reason to do any numa specific configuration)
return len(self._numa_nodes) > 1
- def _configure_huge_pages(
- self, number_of: int, size: int, force_first_numa: bool
- ) -> None:
+ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: bool) -> None:
self._logger.info("Configuring Hugepages.")
- hugepage_config_path = (
- f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
- )
+ hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
if force_first_numa and self._supports_numa():
# clear non-numa hugepages
self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -158,25 +150,19 @@ def _configure_huge_pages(
f"/hugepages-{size}kB/nr_hugepages"
)
- self.send_command(
- f"echo {number_of} | tee {hugepage_config_path}", privileged=True
- )
+ self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
def update_ports(self, ports: list[Port]) -> None:
"""Overrides :meth:`~.os_session.OSSession.update_ports`."""
self._logger.debug("Gathering port info.")
for port in ports:
- assert (
- port.node == self.name
- ), "Attempted to gather port info on the wrong node"
+ assert port.node == self.name, "Attempted to gather port info on the wrong node"
port_info_list = self._get_lshw_info()
for port in ports:
for port_info in port_info_list:
if f"pci@{port.pci}" == port_info.get("businfo"):
- self._update_port_attr(
- port, port_info.get("logicalname"), "logical_name"
- )
+ self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
self._update_port_attr(port, port_info.get("serial"), "mac_address")
port_info_list.remove(port_info)
break
@@ -187,14 +173,10 @@ def _get_lshw_info(self) -> list[LshwOutput]:
output = self.send_command("lshw -quiet -json -C network", verify=True)
return json.loads(output.stdout)
- def _update_port_attr(
- self, port: Port, attr_value: str | None, attr_name: str
- ) -> None:
+ def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
if attr_value:
setattr(port, attr_name, attr_value)
- self._logger.debug(
- f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
- )
+ self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
else:
self._logger.warning(
f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e8021a4afe..c6f12319ca 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -198,9 +198,7 @@ def close(self) -> None:
session.close()
-def create_session(
- node_config: NodeConfiguration, name: str, logger: DTSLogger
-) -> OSSession:
+def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession:
"""Factory for OS-aware sessions.
Args:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 1b2885be5d..28eccc05ed 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -22,6 +22,7 @@
the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux
and other commands for other OSs. It also translates the path to match the underlying OS.
"""
+
from abc import ABC, abstractmethod
from collections.abc import Iterable
from dataclasses import dataclass
@@ -195,9 +196,7 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
"""
@abstractmethod
- def copy_from(
- self, source_file: str | PurePath, destination_dir: str | Path
- ) -> None:
+ def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
"""Copy a file from the remote node to the local filesystem.
Copy `source_file` from the remote node associated with this remote
@@ -303,9 +302,7 @@ def copy_dir_to(
"""
@abstractmethod
- def remove_remote_file(
- self, remote_file_path: str | PurePath, force: bool = True
- ) -> None:
+ def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
"""Remove remote file, by default remove forcefully.
Args:
@@ -479,9 +476,7 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
"""
@abstractmethod
- def setup_hugepages(
- self, number_of: int, hugepage_size: int, force_first_numa: bool
- ) -> None:
+ def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
"""Configure hugepages on the node.
Get the node's Hugepage Size, configure the specified count of hugepages
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 817405bea4..566f4c5b46 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -8,7 +8,6 @@
drivers and address.
"""
-
from dataclasses import dataclass
from framework.config import PortConfig
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index f707b6e17b..29e314db6e 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -96,9 +96,7 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
result = self.send_command(f"test -e {remote_path}")
return not result.return_code
- def copy_from(
- self, source_file: str | PurePath, destination_dir: str | Path
- ) -> None:
+ def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
"""Overrides :meth:`~.os_session.OSSession.copy_from`."""
self.remote_session.copy_from(source_file, destination_dir)
@@ -115,16 +113,12 @@ def copy_dir_from(
) -> None:
"""Overrides :meth:`~.os_session.OSSession.copy_dir_from`."""
source_dir = PurePath(source_dir)
- remote_tarball_path = self.create_remote_tarball(
- source_dir, compress_format, exclude
- )
+ remote_tarball_path = self.create_remote_tarball(source_dir, compress_format, exclude)
self.copy_from(remote_tarball_path, destination_dir)
self.remove_remote_file(remote_tarball_path)
- tarball_path = Path(
- destination_dir, f"{source_dir.name}.{compress_format.extension}"
- )
+ tarball_path = Path(destination_dir, f"{source_dir.name}.{compress_format.extension}")
extract_tarball(tarball_path)
tarball_path.unlink()
@@ -147,9 +141,7 @@ def copy_dir_to(
self.extract_remote_tarball(remote_tar_path)
self.remove_remote_file(remote_tar_path)
- def remove_remote_file(
- self, remote_file_path: str | PurePath, force: bool = True
- ) -> None:
+ def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
"""Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
opts = PosixSession.combine_short_options(f=force)
self.send_command(f"rm{opts} {remote_file_path}")
@@ -184,15 +176,11 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
"""
if exclude_patterns:
exclude_patterns = convert_to_list_of_string(exclude_patterns)
- return "".join(
- [f" --exclude={pattern}" for pattern in exclude_patterns]
- )
+ return "".join([f" --exclude={pattern}" for pattern in exclude_patterns])
return ""
posix_remote_dir_path = PurePosixPath(remote_dir_path)
- target_tarball_path = PurePosixPath(
- f"{remote_dir_path}.{compress_format.extension}"
- )
+ target_tarball_path = PurePosixPath(f"{remote_dir_path}.{compress_format.extension}")
self.send_command(
f"tar caf {target_tarball_path}{generate_tar_exclude_args(exclude)} "
@@ -285,9 +273,7 @@ def build_dpdk(
def get_dpdk_version(self, build_dir: str | PurePath) -> str:
"""Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
- out = self.send_command(
- f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
- )
+ out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
return out.stdout
def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -302,9 +288,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
self._check_dpdk_hugepages(dpdk_runtime_dirs)
self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
- def _get_dpdk_runtime_dirs(
- self, dpdk_prefix_list: Iterable[str]
- ) -> list[PurePosixPath]:
+ def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
"""Find runtime directories DPDK apps are currently using.
Args:
@@ -332,9 +316,7 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
Returns:
The contents of remote_path. If remote_path doesn't exist, return None.
"""
- out = self.send_command(
- f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
- ).stdout
+ out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
if "No such file or directory" in out:
return None
else:
@@ -365,9 +347,7 @@ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[in
pids.append(int(match.group(1)))
return pids
- def _check_dpdk_hugepages(
- self, dpdk_runtime_dirs: Iterable[str | PurePath]
- ) -> None:
+ def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
"""Check there aren't any leftover hugepages.
If any hugepages are found, emit a warning. The hugepages are investigated in the
@@ -386,9 +366,7 @@ def _check_dpdk_hugepages(
self._logger.warning(out)
self._logger.warning("*******************************************")
- def _remove_dpdk_runtime_dirs(
- self, dpdk_runtime_dirs: Iterable[str | PurePath]
- ) -> None:
+ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
for dpdk_runtime_dir in dpdk_runtime_dirs:
self.remove_remote_dir(dpdk_runtime_dir)
@@ -425,6 +403,4 @@ def get_node_info(self) -> OSSessionInfo:
SETTINGS.timeout,
).stdout.split("\n")
kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
- return OSSessionInfo(
- os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
- )
+ return OSSessionInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 6adcff01c2..a9dc0a474a 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -11,7 +11,6 @@
An SUT node is where this SUT runs.
"""
-
import os
import time
from dataclasses import dataclass
@@ -139,9 +138,7 @@ def remote_dpdk_build_dir(self) -> str | PurePath:
def dpdk_version(self) -> str | None:
"""Last built DPDK version."""
if self._dpdk_version is None:
- self._dpdk_version = self.main_session.get_dpdk_version(
- self._remote_dpdk_tree_path
- )
+ self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
return self._dpdk_version
@property
@@ -155,9 +152,7 @@ def node_info(self) -> OSSessionInfo:
def compiler_version(self) -> str | None:
"""The node's compiler version."""
if self._compiler_version is None:
- self._logger.warning(
- "The `compiler_version` is None because a pre-built DPDK is used."
- )
+ self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
return self._compiler_version
@@ -185,9 +180,7 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
Returns:
The DPDK build information,
"""
- return DPDKBuildInfo(
- dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
- )
+ return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
def set_up_test_run(
self,
@@ -277,9 +270,7 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
)
if not self.main_session.is_remote_dir(dpdk_tree):
- raise ConfigurationError(
- f"Remote DPDK source tree '{dpdk_tree}' must be a directory."
- )
+ raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
self.__remote_dpdk_tree_path = dpdk_tree
@@ -315,13 +306,9 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
"""
if not self.main_session.remote_path_exists(dpdk_tarball):
- raise RemoteFileNotFoundError(
- f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT."
- )
+ raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
if not self.main_session.is_remote_tarfile(dpdk_tarball):
- raise ConfigurationError(
- f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive."
- )
+ raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
"""Copy the local DPDK tarball to the SUT node.
@@ -336,9 +323,7 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
)
self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
- return self.main_session.join_remote_path(
- self._remote_tmp_dir, dpdk_tarball.name
- )
+ return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
"""Prepare the remote DPDK tree path and extract the tarball.
@@ -362,9 +347,7 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
if len(remote_tarball_path.suffixes) > 1:
if remote_tarball_path.suffixes[-2] == ".tar":
suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
- return PurePath(
- str(remote_tarball_path).replace(suffixes_to_remove, "")
- )
+ return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
return remote_tarball_path.with_suffix("")
tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
@@ -407,9 +390,7 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
self._remote_dpdk_build_dir = PurePath(remote_dpdk_build_dir)
- def _configure_dpdk_build(
- self, dpdk_build_config: DPDKBuildOptionsConfiguration
- ) -> None:
+ def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration) -> None:
"""Populate common environment variables and set the DPDK build related properties.
This method sets `compiler_version` for additional information and `remote_dpdk_build_dir`
@@ -419,13 +400,9 @@ def _configure_dpdk_build(
dpdk_build_config: A DPDK build configuration to test.
"""
self._env_vars = {}
- self._env_vars.update(
- self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch)
- )
+ self._env_vars.update(self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch))
if compiler_wrapper := dpdk_build_config.compiler_wrapper:
- self._env_vars["CC"] = (
- f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
- )
+ self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
else:
self._env_vars["CC"] = dpdk_build_config.compiler.name
@@ -476,9 +453,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
)
if app_name == "all":
- return self.main_session.join_remote_path(
- self.remote_dpdk_build_dir, "examples"
- )
+ return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
return self.main_session.join_remote_path(
self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
)
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 2c10aff4ef..0bad59d2a4 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -58,9 +58,7 @@ def get_from_value(cls, value: int) -> "TopologyType":
case 2:
return TopologyType.two_links
case _:
- raise ConfigurationError(
- "More than two links in a topology are not supported."
- )
+ raise ConfigurationError("More than two links in a topology are not supported.")
class Topology:
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index e7fd511a00..e501f6d5ee 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -39,10 +39,6 @@ def create_traffic_generator(
"""
match traffic_generator_config:
case ScapyTrafficGeneratorConfig():
- return ScapyTrafficGenerator(
- tg_node, traffic_generator_config, privileged=True
- )
+ return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
case _:
- raise ConfigurationError(
- f"Unknown traffic generator: {traffic_generator_config.type}"
- )
+ raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 16cc361cab..07e1242548 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -12,7 +12,6 @@
implement the methods for handling packets by sending commands into the interactive shell.
"""
-
import re
import time
from typing import ClassVar
@@ -231,9 +230,7 @@ def _shell_start_and_stop_sniffing(self, duration: float) -> list[Packet]:
self.send_command(f"{self._sniffer_name}.start()")
# Insert a one second delay to prevent timeout errors from occurring
time.sleep(duration + 1)
- self.send_command(
- f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)"
- )
+ self.send_command(f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)")
# An extra newline is required here due to the nature of interactive Python shells
packet_strs = self.send_command(
f"for pakt in {sniffed_packets_name}: print(bytes_base64(pakt.build()))\n"
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 6ff9a485ba..6839bcf243 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -31,9 +31,7 @@
REGEX_FOR_PCI_ADDRESS: str = r"[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}"
_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
_REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
-REGEX_FOR_MAC_ADDRESS: str = (
- rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
-)
+REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
@@ -58,9 +56,7 @@ def expand_range(range_str: str) -> list[int]:
range_boundaries = range_str.split("-")
# will throw an exception when items in range_boundaries can't be converted,
# serving as type check
- expanded_range.extend(
- range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
- )
+ expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
return expanded_range
@@ -77,9 +73,7 @@ def get_packet_summaries(packets: list[Packet]) -> str:
if len(packets) == 1:
packet_summaries = packets[0].summary()
else:
- packet_summaries = json.dumps(
- list(map(lambda pkt: pkt.summary(), packets)), indent=4
- )
+ packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
return f"Packet contents: \n{packet_summaries}"
@@ -87,9 +81,7 @@ class StrEnum(Enum):
"""Enum with members stored as strings."""
@staticmethod
- def _generate_next_value_(
- name: str, start: int, count: int, last_values: object
- ) -> str:
+ def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
return name
def __str__(self) -> str:
@@ -116,9 +108,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
meson_args = MesonArgs(enable_kmods=True).
"""
- self._default_library = (
- f"--default-library={default_library}" if default_library else ""
- )
+ self._default_library = f"--default-library={default_library}" if default_library else ""
self._dpdk_args = " ".join(
(
f"-D{dpdk_arg_name}={dpdk_arg_value}"
@@ -159,9 +149,7 @@ def extension(self):
For other compression formats, the extension will be in the format
'tar.{compression format}'.
"""
- return (
- f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
- )
+ return f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
def convert_to_list_of_string(value: Any | list[Any]) -> list[str]:
@@ -206,9 +194,7 @@ def create_filter_function(
def filter_func(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo | None:
file_name = os.path.basename(tarinfo.name)
- if any(
- fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns
- ):
+ if any(fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns):
return None
return tarinfo
@@ -301,9 +287,7 @@ def _make_packet() -> Packet:
packet /= random.choice(l4_factories)(sport=src_port, dport=dst_port)
max_payload_size = mtu - len(packet)
- usable_payload_size = (
- payload_size if payload_size < max_payload_size else max_payload_size
- )
+ usable_payload_size = payload_size if payload_size < max_payload_size else max_payload_size
return packet / random.randbytes(usable_payload_size)
return [_make_packet() for _ in range(number_of)]
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index 524854ea89..35221fe362 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -38,9 +38,7 @@ class TestVlan(TestSuite):
tag when insertion is enabled.
"""
- def send_vlan_packet_and_verify(
- self, should_receive: bool, strip: bool, vlan_id: int
- ) -> None:
+ def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
"""Generate a VLAN packet, send and verify packet with same payload is received on the dut.
Args:
@@ -155,9 +153,7 @@ def test_vlan_no_receipt(self) -> None:
with TestPmdShell(node=self.sut_node) as testpmd:
self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
testpmd.start()
- self.send_vlan_packet_and_verify(
- should_receive=False, strip=False, vlan_id=2
- )
+ self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
@func_test
def test_vlan_header_insertion(self) -> None:
--
2.43.0
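For reference outside the patch context: the `remove_tarball_suffix` helper reformatted in the sut_node.py hunk above is behavior-neutral after the change, since the collapsed one-line `return` is the same expression. A standalone sketch follows; the function body is copied from the patched code, while the example filenames are hypothetical:

```python
from pathlib import PurePath


def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
    """Strip the tarball suffix, handling compound suffixes such as '.tar.xz'."""
    if len(remote_tarball_path.suffixes) > 1:
        if remote_tarball_path.suffixes[-2] == ".tar":
            # Remove both parts of a compound suffix, e.g. '.tar.xz'.
            suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
            return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
    # Single suffix (including '.tgz' on a dotted version, via the fallthrough):
    # drop only the last suffix.
    return remote_tarball_path.with_suffix("")


print(remove_tarball_suffix(PurePath("dpdk-24.11.tar.xz")))  # dpdk-24.11
print(remove_tarball_suffix(PurePath("dpdk-24.11.tgz")))     # dpdk-24.11
```

Ruff collapsed the original three-line `return` presumably because it fits within the project's configured line length; the AST is unchanged either way.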
Thread overview: 17+ messages
2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
2024-12-10 10:32 ` [PATCH 1/6] dts: add Ruff as linter and formatter Luca Vizzarro
2024-12-10 10:32 ` [PATCH 2/6] dts: enable Ruff preview pydoclint rules Luca Vizzarro
2024-12-10 10:32 ` [PATCH 3/6] dts: fix docstring linter errors Luca Vizzarro
2024-12-10 10:32 ` [PATCH 4/6] dts: apply Ruff formatting Luca Vizzarro
2024-12-10 10:32 ` [PATCH 5/6] dts: update dts-check-format to use Ruff Luca Vizzarro
2024-12-10 10:32 ` [PATCH 6/6] dts: remove old linters and formatters Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 1/7] dts: add Ruff as linter and formatter Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 2/7] dts: enable Ruff preview pydoclint rules Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 3/7] dts: resolve docstring linter errors Luca Vizzarro
2024-12-12 14:00 ` Luca Vizzarro [this message]
2024-12-12 14:00 ` [PATCH v2 5/7] dts: update dts-check-format to use Ruff Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 6/7] dts: remove old linters and formatters Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 7/7] dts: update linters in doc page Luca Vizzarro
2024-12-17 10:15 ` Xu, HailinX
2024-12-20 23:14 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Patrick Robb