From: Luca Vizzarro <luca.vizzarro@arm.com>
To: dev@dpdk.org, Patrick Robb <probb@iol.unh.edu>
Cc: Luca Vizzarro <luca.vizzarro@arm.com>,
	Paul Szczepanek <paul.szczepanek@arm.com>
Subject: [PATCH 3/6] dts: fix docstring linter errors
Date: Tue, 10 Dec 2024 10:32:50 +0000	[thread overview]
Message-ID: <20241210103253.3931003-4-luca.vizzarro@arm.com> (raw)
In-Reply-To: <20241210103253.3931003-1-luca.vizzarro@arm.com>

The addition of the Ruff pydocstyle and pydoclint rules has surfaced
new problems in the docstrings, which need to be fixed.
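
Most of the fixes fall into two groups: documenting previously
undocumented parameters (including **kwargs) and adding "Raises:"
sections for exceptions that may propagate to the caller. As a
condensed sketch of the latter pattern (mirroring the topology.py
hunk below; the summary line is illustrative):

    @classmethod
    def get_from_value(cls, value: int) -> "TopologyType":
        """Get the topology type matching the given value.

        Args:
            value: The value of the requested enum.

        Raises:
            ConfigurationError: If an unsupported link topology is supplied.
        """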

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
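Before the fixes, the offending docstrings can be listed by running
Ruff manually (a sketch; it assumes the rule selection enabled earlier
in this series, with the pydoclint rules still in preview):

    $ ruff check --preview --select D,DOC dts/
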
 .../single_active_interactive_shell.py        | 11 +++-
 dts/framework/runner.py                       | 48 ++++++++++----
 dts/framework/settings.py                     |  6 +-
 dts/framework/test_suite.py                   | 43 ++++++++++---
 dts/framework/testbed_model/capability.py     | 33 ++++++++--
 dts/framework/testbed_model/cpu.py            | 33 ++++++++--
 dts/framework/testbed_model/linux_session.py  | 42 +++++++++---
 dts/framework/testbed_model/node.py           |  7 +-
 dts/framework/testbed_model/os_session.py     | 14 ++--
 dts/framework/testbed_model/posix_session.py  | 64 ++++++++++++++-----
 dts/framework/testbed_model/sut_node.py       | 49 ++++++++++----
 dts/framework/testbed_model/topology.py       | 10 ++-
 .../traffic_generator/__init__.py             | 11 +++-
 .../testbed_model/traffic_generator/scapy.py  | 10 ++-
 .../traffic_generator/traffic_generator.py    |  3 +-
 dts/framework/utils.py                        | 38 ++++++++---
 dts/tests/TestSuite_vlan.py                   | 30 ++++++---
 17 files changed, 347 insertions(+), 105 deletions(-)

diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index e3f6424e97..a53e8fc6e1 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -110,6 +110,7 @@ def __init__(
             app_params: The command line parameters to be passed to the application on startup.
             name: Name for the interactive shell to use for logging. This name will be appended to
                 the name of the underlying node which it is running on.
+            **kwargs: Any additional keyword arguments, if any.
         """
         self._node = node
         if name is None:
@@ -120,10 +121,12 @@ def __init__(
         self._timeout = timeout
         # Ensure path is properly formatted for the host
         self._update_real_path(self.path)
-        super().__init__(node, **kwargs)
+        super().__init__()
 
     def _setup_ssh_channel(self):
-        self._ssh_channel = self._node.main_session.interactive_session.session.invoke_shell()
+        self._ssh_channel = (
+            self._node.main_session.interactive_session.session.invoke_shell()
+        )
         self._stdin = self._ssh_channel.makefile_stdin("w")
         self._stdout = self._ssh_channel.makefile("r")
         self._ssh_channel.settimeout(self._timeout)
@@ -133,7 +136,9 @@ def _make_start_command(self) -> str:
         """Makes the command that starts the interactive shell."""
         start_command = f"{self._real_path} {self._app_params or ''}"
         if self._privileged:
-            start_command = self._node.main_session._get_privileged_command(start_command)
+            start_command = self._node.main_session._get_privileged_command(
+                start_command
+            )
         return start_command
 
     def _start_application(self) -> None:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index f91c462ce5..d228ed1b18 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -136,17 +136,25 @@ def run(self) -> None:
 
             # for all test run sections
             for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
-                test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
+                test_run_config, sut_node_config, tg_node_config = (
+                    test_run_with_nodes_config
+                )
                 self._logger.set_stage(DtsStage.test_run_setup)
-                self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
+                self._logger.info(
+                    f"Running test run with SUT '{sut_node_config.name}'."
+                )
                 self._init_random_seed(test_run_config)
                 test_run_result = self._result.add_test_run(test_run_config)
                 # we don't want to modify the original config, so create a copy
                 test_run_test_suites = list(
-                    SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites
+                    SETTINGS.test_suites
+                    if SETTINGS.test_suites
+                    else test_run_config.test_suites
                 )
                 if not test_run_config.skip_smoke_tests:
-                    test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
+                    test_run_test_suites[:0] = [
+                        TestSuiteConfig(test_suite="smoke_tests")
+                    ]
                 try:
                     test_suites_with_cases = self._get_test_suites_with_cases(
                         test_run_test_suites, test_run_config.func, test_run_config.perf
@@ -154,7 +162,8 @@ def run(self) -> None:
                     test_run_result.test_suites_with_cases = test_suites_with_cases
                 except Exception as e:
                     self._logger.exception(
-                        f"Invalid test suite configuration found: " f"{test_run_test_suites}."
+                        f"Invalid test suite configuration found: "
+                        f"{test_run_test_suites}."
                     )
                     test_run_result.update_setup(Result.FAIL, e)
 
@@ -236,7 +245,9 @@ def _get_test_suites_with_cases(
                 test_cases.extend(perf_test_cases)
 
             test_suites_with_cases.append(
-                TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
+                TestSuiteWithCases(
+                    test_suite_class=test_suite_class, test_cases=test_cases
+                )
             )
         return test_suites_with_cases
 
@@ -285,7 +296,11 @@ def _connect_nodes_and_run_test_run(
 
         else:
             self._run_test_run(
-                sut_node, tg_node, test_run_config, test_run_result, test_suites_with_cases
+                sut_node,
+                tg_node,
+                test_run_config,
+                test_run_result,
+                test_suites_with_cases,
             )
 
     def _run_test_run(
@@ -324,7 +339,8 @@ def _run_test_run(
                 )
             if dir := SETTINGS.precompiled_build_dir:
                 dpdk_build_config = DPDKPrecompiledBuildConfiguration(
-                    dpdk_location=dpdk_build_config.dpdk_location, precompiled_build_dir=dir
+                    dpdk_location=dpdk_build_config.dpdk_location,
+                    precompiled_build_dir=dir,
                 )
             sut_node.set_up_test_run(test_run_config, dpdk_build_config)
             test_run_result.dpdk_build_info = sut_node.get_dpdk_build_info()
@@ -335,7 +351,9 @@ def _run_test_run(
             test_run_result.update_setup(Result.FAIL, e)
 
         else:
-            self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
+            self._run_test_suites(
+                sut_node, tg_node, test_run_result, test_suites_with_cases
+            )
 
         finally:
             try:
@@ -360,7 +378,9 @@ def _get_supported_capabilities(
 
         self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
 
-        return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
+        return get_supported_capabilities(
+            sut_node, topology_config, capabilities_to_check
+        )
 
     def _run_test_suites(
         self,
@@ -443,6 +463,7 @@ def _run_test_suite(
         Args:
             sut_node: The test run's SUT node.
             tg_node: The test run's TG node.
+            topology: The port topology of the nodes.
             test_suite_result: The test suite level result object associated
                 with the current test suite.
             test_suite_with_cases: The test suite with test cases to run.
@@ -585,6 +606,9 @@ def _execute_test_case(
             test_case: The test case function.
             test_case_result: The test case level result object associated
                 with the current test case.
+
+        Raises:
+            KeyboardInterrupt: If DTS has been interrupted by the user.
         """
         test_case_name = test_case.__name__
         try:
@@ -601,7 +625,9 @@ def _execute_test_case(
             self._logger.exception(f"Test case execution ERROR: {test_case_name}")
             test_case_result.update(Result.ERROR, e)
         except KeyboardInterrupt:
-            self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
+            self._logger.error(
+                f"Test case execution INTERRUPTED by user: {test_case_name}"
+            )
             test_case_result.update(Result.SKIP)
             raise KeyboardInterrupt("Stop DTS")
 
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 5a8e6e5aee..91f317105a 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -257,7 +257,9 @@ def _get_help_string(self, action):
         return help
 
 
-def _required_with_one_of(parser: _DTSArgumentParser, action: Action, *required_dests: str) -> None:
+def _required_with_one_of(
+    parser: _DTSArgumentParser, action: Action, *required_dests: str
+) -> None:
     """Verify that `action` is listed together with at least one of `required_dests`.
 
     Verify that when `action` is among the command-line arguments or
@@ -461,6 +463,7 @@ def _process_dpdk_location(
     any valid :class:`DPDKLocation` with the provided parameters if validation is successful.
 
     Args:
+        parser: The argument parser instance.
         dpdk_tree: The path to the DPDK source tree directory.
         tarball: The path to the DPDK tarball.
         remote: If :data:`True`, `dpdk_tree` or `tarball` is located on the SUT node, instead of the
@@ -512,6 +515,7 @@ def _process_test_suites(
     """Process the given argument to a list of :class:`TestSuiteConfig` to execute.
 
     Args:
+        parser: The argument parser instance.
         args: The arguments to process. The args is either a string from an environment
               variable or a list from the user input containing test suites with test
               cases, each of which is a list of [test_suite, test_case, test_case, ...].
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index adee01f031..fd6706289e 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -300,7 +300,9 @@ def get_expected_packet(self, packet: Packet) -> Packet:
         """
         return self.get_expected_packets([packet])[0]
 
-    def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> list[Packet]:
+    def _adjust_addresses(
+        self, packets: list[Packet], expected: bool = False
+    ) -> list[Packet]:
         """L2 and L3 address additions in both directions.
 
         Copies of `packets` will be made, modified and returned in this method.
@@ -378,15 +380,21 @@ def verify(self, condition: bool, failure_description: str) -> None:
             self._fail_test_case_verify(failure_description)
 
     def _fail_test_case_verify(self, failure_description: str) -> None:
-        self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
+        self._logger.debug(
+            "A test case failed, showing the last 10 commands executed on SUT:"
+        )
         for command_res in self.sut_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
-        self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
+        self._logger.debug(
+            "A test case failed, showing the last 10 commands executed on TG:"
+        )
         for command_res in self.tg_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
         raise TestCaseVerifyError(failure_description)
 
-    def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
+    def verify_packets(
+        self, expected_packet: Packet, received_packets: list[Packet]
+    ) -> None:
         """Verify that `expected_packet` has been received.
 
         Go through `received_packets` and check that `expected_packet` is among them.
@@ -408,7 +416,9 @@ def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]
                 f"The expected packet {get_packet_summaries(expected_packet)} "
                 f"not found among received {get_packet_summaries(received_packets)}"
             )
-            self._fail_test_case_verify("An expected packet not found among received packets.")
+            self._fail_test_case_verify(
+                "An expected packet not found among received packets."
+            )
 
     def match_all_packets(
         self, expected_packets: list[Packet], received_packets: list[Packet]
@@ -444,7 +454,9 @@ def match_all_packets(
                 f"but {missing_packets_count} were missing."
             )
 
-    def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
+    def _compare_packets(
+        self, expected_packet: Packet, received_packet: Packet
+    ) -> bool:
         self._logger.debug(
             f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
         )
@@ -473,10 +485,14 @@ def _compare_packets(self, expected_packet: Packet, received_packet: Packet) ->
             expected_payload = expected_payload.payload
 
         if expected_payload:
-            self._logger.debug(f"The expected packet did not contain {expected_payload}.")
+            self._logger.debug(
+                f"The expected packet did not contain {expected_payload}."
+            )
             return False
         if received_payload and received_payload.__class__ != Padding:
-            self._logger.debug("The received payload had extra layers which were not padding.")
+            self._logger.debug(
+                "The received payload had extra layers which were not padding."
+            )
             return False
         return True
 
@@ -503,7 +519,10 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
 
     def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         self._logger.debug("Looking at the IP layer.")
-        if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
+        if (
+            received_packet.src != expected_packet.src
+            or received_packet.dst != expected_packet.dst
+        ):
             return False
         return True
 
@@ -615,7 +634,11 @@ def class_name(self) -> str:
 
     @cached_property
     def class_obj(self) -> type[TestSuite]:
-        """A reference to the test suite's class."""
+        """A reference to the test suite's class.
+
+        Raises:
+            InternalError: If the test suite class is missing from the module.
+        """
 
         def is_test_suite(obj) -> bool:
             """Check whether `obj` is a :class:`TestSuite`.
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 0d5f0e0b32..6e06c75c3d 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -130,7 +130,9 @@ def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
 
     @classmethod
     @abstractmethod
-    def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
+    def get_supported_capabilities(
+        cls, sut_node: SutNode, topology: "Topology"
+    ) -> set[Self]:
         """Get the support status of each registered capability.
 
         Each subclass must implement this method and return the subset of supported capabilities
@@ -224,7 +226,10 @@ def get_supported_capabilities(
             with TestPmdShell(
                 sut_node, privileged=True, disable_device_start=True
             ) as testpmd_shell:
-                for conditional_capability_fn, capabilities in capabilities_to_check_map.items():
+                for (
+                    conditional_capability_fn,
+                    capabilities,
+                ) in capabilities_to_check_map.items():
                     supported_capabilities: set[NicCapability] = set()
                     unsupported_capabilities: set[NicCapability] = set()
                     capability_fn = cls._reduce_capabilities(
@@ -237,14 +242,18 @@ def get_supported_capabilities(
                         if capability.nic_capability in supported_capabilities:
                             supported_conditional_capabilities.add(capability)
 
-        logger.debug(f"Found supported capabilities {supported_conditional_capabilities}.")
+        logger.debug(
+            f"Found supported capabilities {supported_conditional_capabilities}."
+        )
         return supported_conditional_capabilities
 
     @classmethod
     def _get_decorated_capabilities_map(
         cls,
     ) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
-        capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
+        capabilities_map: dict[
+            TestPmdShellDecorator | None, set["DecoratedNicCapability"]
+        ] = {}
         for capability in cls.capabilities_to_check:
             if capability.capability_decorator not in capabilities_map:
                 capabilities_map[capability.capability_decorator] = set()
@@ -307,7 +316,9 @@ class TopologyCapability(Capability):
     _unique_capabilities: ClassVar[dict[str, Self]] = {}
 
     def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
-        test_case_or_suite.required_capabilities.discard(test_case_or_suite.topology_type)
+        test_case_or_suite.required_capabilities.discard(
+            test_case_or_suite.topology_type
+        )
         test_case_or_suite.topology_type = self
 
     @classmethod
@@ -353,6 +364,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
         At that point, the test case topologies have been set by the :func:`requires` decorator.
         The test suite topology only affects the test case topologies
         if not :attr:`~.topology.TopologyType.default`.
+
+        Raises:
+            ConfigurationError: If the topology type requested by the test case is more complex than
+                the test suite's.
         """
         if inspect.isclass(test_case_or_suite):
             if self.topology_type is not TopologyType.default:
@@ -443,7 +458,9 @@ class TestProtocol(Protocol):
     #: The reason for skipping the test case or suite.
     skip_reason: ClassVar[str] = ""
     #: The topology type of the test case or suite.
-    topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
+    topology_type: ClassVar[TopologyCapability] = TopologyCapability(
+        TopologyType.default
+    )
     #: The capabilities the test case or suite requires in order to be executed.
     required_capabilities: ClassVar[set[Capability]] = set()
 
@@ -471,7 +488,9 @@ def requires(
         The decorated test case or test suite.
     """
 
-    def add_required_capability(test_case_or_suite: type[TestProtocol]) -> type[TestProtocol]:
+    def add_required_capability(
+        test_case_or_suite: type[TestProtocol],
+    ) -> type[TestProtocol]:
         for nic_capability in nic_capabilities:
             decorated_nic_capability = DecoratedNicCapability.get_unique(nic_capability)
             decorated_nic_capability.add_to_required(test_case_or_suite)
diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index a50cf44c19..0746878770 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -87,7 +87,9 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         # the input lcores may not be sorted
         self._lcore_list.sort()
-        self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+        self._lcore_str = (
+            f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+        )
 
     @property
     def lcore_list(self) -> list[int]:
@@ -102,11 +104,15 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
                 segment.append(lcore_id)
             else:
                 formatted_core_list.append(
-                    f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+                    f"{segment[0]}-{segment[-1]}"
+                    if len(segment) > 1
+                    else f"{segment[0]}"
                 )
                 current_core_index = lcore_ids_list.index(lcore_id)
                 formatted_core_list.extend(
-                    self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
+                    self._get_consecutive_lcores_range(
+                        lcore_ids_list[current_core_index:]
+                    )
                 )
                 segment.clear()
                 break
@@ -166,7 +172,9 @@ def __init__(
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
-        self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
+        self._lcores_to_filter = sorted(
+            lcore_list, key=lambda x: x.core, reverse=not ascending
+        )
         self.filter()
 
     @abstractmethod
@@ -231,6 +239,9 @@ def _filter_sockets(
 
         Returns:
             A list of lists of logical CPU cores. Each list contains cores from one socket.
+
+        Raises:
+            ValueError: If the number of sockets requested by the filter can't be satisfied.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
@@ -272,6 +283,10 @@ def _filter_cores_from_socket(
 
         Returns:
             The filtered logical CPU cores.
+
+        Raises:
+            ValueError: If the number of cores per socket requested by the filter
+                can't be satisfied.
         """
         # no need to use ordered dict, from Python3.7 the dict
         # insertion order is preserved (LIFO).
@@ -287,7 +302,9 @@ def _filter_cores_from_socket(
                 else:
                     # we have enough lcores per this core
                     continue
-            elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
+            elif self._filter_specifier.cores_per_socket > len(
+                lcore_count_per_core_map
+            ):
                 # only add cores if we need more
                 lcore_count_per_core_map[lcore.core] = 1
                 filtered_lcores.append(lcore)
@@ -327,6 +344,9 @@ def filter(self) -> list[LogicalCore]:
 
         Return:
             The filtered logical CPU cores.
+
+        Raises:
+            ValueError: If the lcore filter specifier is invalid.
         """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
@@ -360,6 +380,9 @@ def lcore_filter(
 
     Returns:
         The filter that corresponds to `filter_specifier`.
+
+    Raises:
+        ValueError: If the supplied `filter_specifier` is invalid.
     """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index f87efb8f18..b316f23b4e 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -83,8 +83,14 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
-        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
+    def setup_hugepages(
+        self, number_of: int, hugepage_size: int, force_first_numa: bool
+    ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.
+
+        Raises:
+            ConfigurationError: If the given `hugepage_size` is not supported by the OS.
+        """
         self._logger.info("Getting Hugepage information.")
         hugepages_total = self._get_hugepages_total(hugepage_size)
         if (
@@ -127,7 +133,9 @@ def _mount_huge_pages(self) -> None:
         if result.stdout == "":
             remote_mount_path = "/mnt/huge"
             self.send_command(f"mkdir -p {remote_mount_path}", privileged=True)
-            self.send_command(f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True)
+            self.send_command(
+                f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True
+            )
 
     def _supports_numa(self) -> bool:
         # the system supports numa if self._numa_nodes is non-empty and there are more
@@ -135,9 +143,13 @@ def _supports_numa(self) -> bool:
         # there's no reason to do any numa specific configuration)
         return len(self._numa_nodes) > 1
 
-    def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: bool) -> None:
+    def _configure_huge_pages(
+        self, number_of: int, size: int, force_first_numa: bool
+    ) -> None:
         self._logger.info("Configuring Hugepages.")
-        hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+        hugepage_config_path = (
+            f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+        )
         if force_first_numa and self._supports_numa():
             # clear non-numa hugepages
             self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -146,19 +158,25 @@ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: boo
                 f"/hugepages-{size}kB/nr_hugepages"
             )
 
-        self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
+        self.send_command(
+            f"echo {number_of} | tee {hugepage_config_path}", privileged=True
+        )
 
     def update_ports(self, ports: list[Port]) -> None:
         """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
-            assert port.node == self.name, "Attempted to gather port info on the wrong node"
+            assert (
+                port.node == self.name
+            ), "Attempted to gather port info on the wrong node"
 
         port_info_list = self._get_lshw_info()
         for port in ports:
             for port_info in port_info_list:
                 if f"pci@{port.pci}" == port_info.get("businfo"):
-                    self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
+                    self._update_port_attr(
+                        port, port_info.get("logicalname"), "logical_name"
+                    )
                     self._update_port_attr(port, port_info.get("serial"), "mac_address")
                     port_info_list.remove(port_info)
                     break
@@ -169,10 +187,14 @@ def _get_lshw_info(self) -> list[LshwOutput]:
         output = self.send_command("lshw -quiet -json -C network", verify=True)
         return json.loads(output.stdout)
 
-    def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
+    def _update_port_attr(
+        self, port: Port, attr_value: str | None, attr_name: str
+    ) -> None:
         if attr_value:
             setattr(port, attr_name, attr_value)
-            self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
+            self._logger.debug(
+                f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
+            )
         else:
             self._logger.warning(
                 f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index c1844ecd5d..e8021a4afe 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -198,13 +198,18 @@ def close(self) -> None:
             session.close()
 
 
-def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession:
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLogger
+) -> OSSession:
     """Factory for OS-aware sessions.
 
     Args:
         node_config: The test run configuration of the node to connect to.
         name: The name of the session.
         logger: The logger instance this session will use.
+
+    Raises:
+        ConfigurationError: If the node's OS is unsupported.
     """
     match node_config.os:
         case OS.linux:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 62add7a4df..1b2885be5d 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -195,7 +195,9 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         """
 
     @abstractmethod
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
+    def copy_from(
+        self, source_file: str | PurePath, destination_dir: str | Path
+    ) -> None:
         """Copy a file from the remote node to the local filesystem.
 
         Copy `source_file` from the remote node associated with this remote
@@ -301,7 +303,9 @@ def copy_dir_to(
         """
 
     @abstractmethod
-    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
+    def remove_remote_file(
+        self, remote_file_path: str | PurePath, force: bool = True
+    ) -> None:
         """Remove remote file, by default remove forcefully.
 
         Args:
@@ -366,7 +370,7 @@ def is_remote_dir(self, remote_path: PurePath) -> bool:
         """Check if the `remote_path` is a directory.
 
         Args:
-            remote_tarball_path: The path to the remote tarball.
+            remote_path: The remote path to check.
 
         Returns:
             If :data:`True` the `remote_path` is a directory, otherwise :data:`False`.
@@ -475,7 +479,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
 
     @abstractmethod
-    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
+    def setup_hugepages(
+        self, number_of: int, hugepage_size: int, force_first_numa: bool
+    ) -> None:
         """Configure hugepages on the node.
 
         Get the node's Hugepage Size, configure the specified count of hugepages
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index c0cca2ac50..f707b6e17b 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -96,7 +96,9 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         result = self.send_command(f"test -e {remote_path}")
         return not result.return_code
 
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
+    def copy_from(
+        self, source_file: str | PurePath, destination_dir: str | Path
+    ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_dir)
 
@@ -113,12 +115,16 @@ def copy_dir_from(
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_dir_from`."""
         source_dir = PurePath(source_dir)
-        remote_tarball_path = self.create_remote_tarball(source_dir, compress_format, exclude)
+        remote_tarball_path = self.create_remote_tarball(
+            source_dir, compress_format, exclude
+        )
 
         self.copy_from(remote_tarball_path, destination_dir)
         self.remove_remote_file(remote_tarball_path)
 
-        tarball_path = Path(destination_dir, f"{source_dir.name}.{compress_format.extension}")
+        tarball_path = Path(
+            destination_dir, f"{source_dir.name}.{compress_format.extension}"
+        )
         extract_tarball(tarball_path)
         tarball_path.unlink()
 
@@ -141,7 +147,9 @@ def copy_dir_to(
         self.extract_remote_tarball(remote_tar_path)
         self.remove_remote_file(remote_tar_path)
 
-    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
+    def remove_remote_file(
+        self, remote_file_path: str | PurePath, force: bool = True
+    ) -> None:
         """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(f=force)
         self.send_command(f"rm{opts} {remote_file_path}")
@@ -176,11 +184,15 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
             """
             if exclude_patterns:
                 exclude_patterns = convert_to_list_of_string(exclude_patterns)
-                return "".join([f" --exclude={pattern}" for pattern in exclude_patterns])
+                return "".join(
+                    [f" --exclude={pattern}" for pattern in exclude_patterns]
+                )
             return ""
 
         posix_remote_dir_path = PurePosixPath(remote_dir_path)
-        target_tarball_path = PurePosixPath(f"{remote_dir_path}.{compress_format.extension}")
+        target_tarball_path = PurePosixPath(
+            f"{remote_dir_path}.{compress_format.extension}"
+        )
 
         self.send_command(
             f"tar caf {target_tarball_path}{generate_tar_exclude_args(exclude)} "
@@ -191,7 +203,9 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
         return target_tarball_path
 
     def extract_remote_tarball(
-        self, remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None
+        self,
+        remote_tarball_path: str | PurePath,
+        expected_dir: str | PurePath | None = None,
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
@@ -236,7 +250,11 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`.
+
+        Raises:
+            DPDKBuildError: If the DPDK build failed.
+        """
         try:
             if rebuild:
                 # reconfigure, then build
@@ -267,7 +285,9 @@ def build_dpdk(
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
-        out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
+        out = self.send_command(
+            f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+        )
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -282,7 +302,9 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._check_dpdk_hugepages(dpdk_runtime_dirs)
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
-    def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
+    def _get_dpdk_runtime_dirs(
+        self, dpdk_prefix_list: Iterable[str]
+    ) -> list[PurePosixPath]:
         """Find runtime directories DPDK apps are currently using.
 
         Args:
@@ -310,7 +332,9 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         Returns:
             The contents of remote_path. If remote_path doesn't exist, return None.
         """
-        out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
+        out = self.send_command(
+            f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+        ).stdout
         if "No such file or directory" in out:
             return None
         else:
@@ -341,7 +365,9 @@ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[in
                             pids.append(int(match.group(1)))
         return pids
 
-    def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+    def _check_dpdk_hugepages(
+        self, dpdk_runtime_dirs: Iterable[str | PurePath]
+    ) -> None:
         """Check there aren't any leftover hugepages.
 
         If any hugepages are found, emit a warning. The hugepages are investigated in the
@@ -360,7 +386,9 @@ def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) ->
                     self._logger.warning(out)
                     self._logger.warning("*******************************************")
 
-    def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+    def _remove_dpdk_runtime_dirs(
+        self, dpdk_runtime_dirs: Iterable[str | PurePath]
+    ) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
@@ -369,7 +397,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
-        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.
+
+        Raises:
+            ValueError: If the given `compiler_name` is invalid.
+        """
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -393,4 +425,6 @@ def get_node_info(self) -> OSSessionInfo:
             SETTINGS.timeout,
         ).stdout.split("\n")
         kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
-        return OSSessionInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
+        return OSSessionInfo(
+            os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
+        )
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 14d77c50a6..6adcff01c2 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -139,7 +139,9 @@ def remote_dpdk_build_dir(self) -> str | PurePath:
     def dpdk_version(self) -> str | None:
         """Last built DPDK version."""
         if self._dpdk_version is None:
-            self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
+            self._dpdk_version = self.main_session.get_dpdk_version(
+                self._remote_dpdk_tree_path
+            )
         return self._dpdk_version
 
     @property
@@ -153,7 +155,9 @@ def node_info(self) -> OSSessionInfo:
     def compiler_version(self) -> str | None:
         """The node's compiler version."""
         if self._compiler_version is None:
-            self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
+            self._logger.warning(
+                "The `compiler_version` is None because a pre-built DPDK is used."
+            )
 
         return self._compiler_version
 
@@ -181,7 +185,9 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
         Returns:
             The DPDK build information.
         """
-        return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
+        return DPDKBuildInfo(
+            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
+        )
 
     def set_up_test_run(
         self,
@@ -264,13 +270,16 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
         Raises:
             RemoteFileNotFoundError: If the DPDK source tree is expected to be on the SUT node but
                 is not found.
+            ConfigurationError: If the remote DPDK source tree specified is not a valid directory.
         """
         if not self.main_session.remote_path_exists(dpdk_tree):
             raise RemoteFileNotFoundError(
                 f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
             )
         if not self.main_session.is_remote_dir(dpdk_tree):
-            raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
+            raise ConfigurationError(
+                f"Remote DPDK source tree '{dpdk_tree}' must be a directory."
+            )
 
         self.__remote_dpdk_tree_path = dpdk_tree
 
@@ -306,9 +315,13 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
             ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
         """
         if not self.main_session.remote_path_exists(dpdk_tarball):
-            raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
+            raise RemoteFileNotFoundError(
+                f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT."
+            )
         if not self.main_session.is_remote_tarfile(dpdk_tarball):
-            raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
+            raise ConfigurationError(
+                f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive."
+            )
 
     def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
         """Copy the local DPDK tarball to the SUT node.
@@ -323,7 +336,9 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
             f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
         )
         self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
-        return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
+        return self.main_session.join_remote_path(
+            self._remote_tmp_dir, dpdk_tarball.name
+        )
 
     def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
         """Prepare the remote DPDK tree path and extract the tarball.
@@ -347,7 +362,9 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
             if len(remote_tarball_path.suffixes) > 1:
                 if remote_tarball_path.suffixes[-2] == ".tar":
                     suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
-                    return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
+                    return PurePath(
+                        str(remote_tarball_path).replace(suffixes_to_remove, "")
+                    )
             return remote_tarball_path.with_suffix("")
 
         tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
@@ -390,7 +407,9 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
 
         self._remote_dpdk_build_dir = PurePath(remote_dpdk_build_dir)
 
-    def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration) -> None:
+    def _configure_dpdk_build(
+        self, dpdk_build_config: DPDKBuildOptionsConfiguration
+    ) -> None:
         """Populate common environment variables and set the DPDK build related properties.
 
         This method sets `compiler_version` for additional information and `remote_dpdk_build_dir`
@@ -400,9 +419,13 @@ def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration
             dpdk_build_config: A DPDK build configuration to test.
         """
         self._env_vars = {}
-        self._env_vars.update(self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch))
+        self._env_vars.update(
+            self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch)
+        )
         if compiler_wrapper := dpdk_build_config.compiler_wrapper:
-            self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
+            self._env_vars["CC"] = (
+                f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
+            )
         else:
             self._env_vars["CC"] = dpdk_build_config.compiler.name
 
@@ -453,7 +476,9 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
         if app_name == "all":
-            return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
+            return self.main_session.join_remote_path(
+                self.remote_dpdk_build_dir, "examples"
+            )
         return self.main_session.join_remote_path(
             self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
         )
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 3824804310..2c10aff4ef 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -43,6 +43,12 @@ def get_from_value(cls, value: int) -> "TopologyType":
         :class:`TopologyType` is a regular :class:`~enum.Enum`.
         When getting an instance from value, we're not interested in the default,
         since we already know the value, allowing us to remove the ambiguity.
+
+        Args:
+            value: The value of the requested enum.
+
+        Raises:
+            ConfigurationError: If an unsupported link topology is supplied.
         """
         match value:
             case 0:
@@ -52,7 +58,9 @@ def get_from_value(cls, value: int) -> "TopologyType":
             case 2:
                 return TopologyType.two_links
             case _:
-                raise ConfigurationError("More than two links in a topology are not supported.")
+                raise ConfigurationError(
+                    "More than two links in a topology are not supported."
+                )
 
 
 class Topology:
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 945f6bbbbb..e7fd511a00 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -33,9 +33,16 @@ def create_traffic_generator(
 
     Returns:
         A traffic generator capable of capturing received packets.
+
+    Raises:
+        ConfigurationError: If an unknown traffic generator has been set up.
     """
     match traffic_generator_config:
         case ScapyTrafficGeneratorConfig():
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
+            return ScapyTrafficGenerator(
+                tg_node, traffic_generator_config, privileged=True
+            )
         case _:
-            raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
+            raise ConfigurationError(
+                f"Unknown traffic generator: {traffic_generator_config.type}"
+            )
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 1251ca65a0..16cc361cab 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -173,7 +173,11 @@ def _create_packet_filter(self, filter_config: PacketFilteringConfig) -> str:
         return " && ".join(bpf_filter)
 
     def _shell_create_sniffer(
-        self, packets_to_send: list[Packet], send_port: Port, recv_port: Port, filter_config: str
+        self,
+        packets_to_send: list[Packet],
+        send_port: Port,
+        recv_port: Port,
+        filter_config: str,
     ) -> None:
         """Create an asynchronous sniffer in the shell.
 
@@ -227,7 +231,9 @@ def _shell_start_and_stop_sniffing(self, duration: float) -> list[Packet]:
         self.send_command(f"{self._sniffer_name}.start()")
         # Insert a one second delay to prevent timeout errors from occurring
         time.sleep(duration + 1)
-        self.send_command(f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)")
+        self.send_command(
+            f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)"
+        )
         # An extra newline is required here due to the nature of interactive Python shells
         packet_strs = self.send_command(
             f"for pakt in {sniffed_packets_name}: print(bytes_base64(pakt.build()))\n"
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 5ac61cd4e1..42b6735646 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -42,11 +42,12 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig, **kwargs):
         Args:
             tg_node: The traffic generator node where the created traffic generator will be running.
             config: The traffic generator's test run configuration.
+            **kwargs: Any additional keyword arguments, if any.
         """
         self._config = config
         self._tg_node = tg_node
         self._logger = get_dts_logger(f"{self._tg_node.name} {self._config.type}")
-        super().__init__(tg_node, **kwargs)
+        super().__init__()
 
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send `packet` and block until it is fully sent.
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index bc3f8d6d0f..6ff9a485ba 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -31,7 +31,9 @@
 REGEX_FOR_PCI_ADDRESS: str = r"[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}"
 _REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
 _REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
-REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
+REGEX_FOR_MAC_ADDRESS: str = (
+    rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
+)
 REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
 
 
@@ -56,7 +58,9 @@ def expand_range(range_str: str) -> list[int]:
         range_boundaries = range_str.split("-")
         # will throw an exception when items in range_boundaries can't be converted,
         # serving as type check
-        expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
+        expanded_range.extend(
+            range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+        )
 
     return expanded_range
 
@@ -73,7 +77,9 @@ def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
-        packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
+        packet_summaries = json.dumps(
+            list(map(lambda pkt: pkt.summary(), packets)), indent=4
+        )
     return f"Packet contents: \n{packet_summaries}"
 
 
@@ -81,7 +87,9 @@ class StrEnum(Enum):
     """Enum with members stored as strings."""
 
     @staticmethod
-    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
         return name
 
     def __str__(self) -> str:
@@ -108,7 +116,9 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
 
                 meson_args = MesonArgs(enable_kmods=True).
         """
-        self._default_library = f"--default-library={default_library}" if default_library else ""
+        self._default_library = (
+            f"--default-library={default_library}" if default_library else ""
+        )
         self._dpdk_args = " ".join(
             (
                 f"-D{dpdk_arg_name}={dpdk_arg_value}"
@@ -149,7 +159,9 @@ def extension(self):
         For other compression formats, the extension will be in the format
         'tar.{compression format}'.
         """
-        return f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
+        return (
+            f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
+        )
 
 
 def convert_to_list_of_string(value: Any | list[Any]) -> list[str]:
@@ -177,7 +189,9 @@ def create_tarball(
         The path to the created tarball.
     """
 
-    def create_filter_function(exclude_patterns: str | list[str] | None) -> Callable | None:
+    def create_filter_function(
+        exclude_patterns: str | list[str] | None,
+    ) -> Callable | None:
         """Create a filter function based on the provided exclude patterns.
 
         Args:
@@ -192,7 +206,9 @@ def create_filter_function(exclude_patterns: str | list[str] | None) -> Callable
 
             def filter_func(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo | None:
                 file_name = os.path.basename(tarinfo.name)
-                if any(fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns):
+                if any(
+                    fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns
+                ):
                     return None
                 return tarinfo
 
@@ -285,7 +301,9 @@ def _make_packet() -> Packet:
             packet /= random.choice(l4_factories)(sport=src_port, dport=dst_port)
 
         max_payload_size = mtu - len(packet)
-        usable_payload_size = payload_size if payload_size < max_payload_size else max_payload_size
+        usable_payload_size = (
+            payload_size if payload_size < max_payload_size else max_payload_size
+        )
         return packet / random.randbytes(usable_payload_size)
 
     return [_make_packet() for _ in range(number_of)]
@@ -300,7 +318,7 @@ class MultiInheritanceBaseClass:
     :meth:`super.__init__` without repercussion.
     """
 
-    def __init__(self, *args, **kwargs) -> None:
+    def __init__(self) -> None:
         """Call the init method of :class:`object`."""
         super().__init__()
 
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index 7cfbd7ea00..524854ea89 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -38,7 +38,9 @@ class TestVlan(TestSuite):
     tag when insertion is enabled.
     """
 
-    def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
+    def send_vlan_packet_and_verify(
+        self, should_receive: bool, strip: bool, vlan_id: int
+    ) -> None:
         """Generate a VLAN packet, send and verify packet with same payload is received on the dut.
 
         Args:
@@ -57,12 +59,14 @@ def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id
                 break
         if should_receive:
             self.verify(
-                test_packet is not None, "Packet was dropped when it should have been received"
+                test_packet is not None,
+                "Packet was dropped when it should have been received",
             )
             if test_packet is not None:
                 if strip:
                     self.verify(
-                        not test_packet.haslayer(Dot1Q), "VLAN tag was not stripped successfully"
+                        not test_packet.haslayer(Dot1Q),
+                        "VLAN tag was not stripped successfully",
                     )
                 else:
                     self.verify(
@@ -88,11 +92,18 @@ def send_packet_and_verify_insertion(self, expected_id: int) -> None:
             if hasattr(packet, "load") and b"xxxxx" in packet.load:
                 test_packet = packet
                 break
-        self.verify(test_packet is not None, "Packet was dropped when it should have been received")
+        self.verify(
+            test_packet is not None,
+            "Packet was dropped when it should have been received",
+        )
         if test_packet is not None:
-            self.verify(test_packet.haslayer(Dot1Q), "The received packet did not have a VLAN tag")
             self.verify(
-                test_packet.vlan == expected_id, "The received tag did not match the expected tag"
+                test_packet.haslayer(Dot1Q),
+                "The received packet did not have a VLAN tag",
+            )
+            self.verify(
+                test_packet.vlan == expected_id,
+                "The received tag did not match the expected tag",
             )
 
     def vlan_setup(self, testpmd: TestPmdShell, port_id: int, filtered_id: int) -> None:
@@ -102,9 +113,6 @@ def vlan_setup(self, testpmd: TestPmdShell, port_id: int, filtered_id: int) -> N
             testpmd: Testpmd shell session to send commands to.
             port_id: Number of port to use for setup.
             filtered_id: ID to be added to the VLAN filter list.
-
-        Returns:
-            TestPmdShell: Testpmd session being configured.
         """
         testpmd.set_forward_mode(SimpleForwardingModes.mac)
         testpmd.set_promisc(port_id, False)
@@ -147,7 +155,9 @@ def test_vlan_no_receipt(self) -> None:
         with TestPmdShell(node=self.sut_node) as testpmd:
             self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
             testpmd.start()
-            self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
+            self.send_vlan_packet_and_verify(
+                should_receive=False, strip=False, vlan_id=2
+            )
 
     @func_test
     def test_vlan_header_insertion(self) -> None:
-- 
2.43.0


