DPDK patches and discussions
* [PATCH 0/6] dts: add Ruff and docstring linting
@ 2024-12-10 10:32 Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 1/6] dts: add Ruff as linter and formatter Luca Vizzarro
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro

Hi there,

sending a new patchset that replaces all the current linters with Ruff.
I attempted to make Ruff's configuration a 1:1 match with the old one,
but there are slight differences, as Ruff has purposely not implemented
all of the rules. Either way, at the moment it should be a near-perfect
match and satisfy our requirements.

I've also taken the chance to enable some new docstring linting rules
which mimic the pydoclint project.

Best,
Luca

Luca Vizzarro (6):
  dts: add Ruff as linter and formatter
  dts: enable Ruff preview pydoclint rules
  dts: fix docstring linter errors
  dts: apply Ruff formatting
  dts: update dts-check-format to use Ruff
  dts: remove old linters and formatters

 devtools/dts-check-format.sh                  |  30 +--
 dts/framework/params/eal.py                   |   5 +-
 dts/framework/remote_session/dpdk_shell.py    |   1 -
 dts/framework/remote_session/python_shell.py  |   1 +
 .../single_active_interactive_shell.py        |   3 +-
 dts/framework/runner.py                       |  14 +-
 dts/framework/settings.py                     |   3 +
 dts/framework/test_suite.py                   |   6 +-
 dts/framework/testbed_model/capability.py     |  13 +-
 dts/framework/testbed_model/cpu.py            |  21 +-
 dts/framework/testbed_model/linux_session.py  |   6 +-
 dts/framework/testbed_model/node.py           |   3 +
 dts/framework/testbed_model/os_session.py     |   3 +-
 dts/framework/testbed_model/port.py           |   1 -
 dts/framework/testbed_model/posix_session.py  |  16 +-
 dts/framework/testbed_model/sut_node.py       |   2 +-
 dts/framework/testbed_model/topology.py       |   6 +
 .../traffic_generator/__init__.py             |   3 +
 .../testbed_model/traffic_generator/scapy.py  |   7 +-
 .../traffic_generator/traffic_generator.py    |   3 +-
 dts/framework/utils.py                        |   6 +-
 dts/poetry.lock                               | 197 +++---------------
 dts/pyproject.toml                            |  40 ++--
 dts/tests/TestSuite_vlan.py                   |  22 +-
 24 files changed, 172 insertions(+), 240 deletions(-)

-- 
2.43.0



* [PATCH 1/6] dts: add Ruff as linter and formatter
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
@ 2024-12-10 10:32 ` Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 2/6] dts: enable Ruff preview pydoclint rules Luca Vizzarro
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro, Paul Szczepanek

To improve and streamline the development process, adopt Ruff: a very
fast all-in-one linter that is able to apply fixes, and a formatter
compatible with Black. Ruff implements all the rules that DTS currently
uses and expands on them, leaving space to easily enable more checks in
the future.
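
For illustration, a minimal (hypothetical) sketch of the kinds of issues
the selected rule families catch:

    import sys   # F401 (pyflakes): `sys` imported but unused
    import json
    import abc   # I001 (isort): import block is un-sorted

    def frobnicate(value):  # D103 (pydocstyle): missing docstring in public function
        return value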

Bugzilla ID: 1358

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/poetry.lock    | 29 ++++++++++++++++++++++++++++-
 dts/pyproject.toml | 20 ++++++++++++++++++++
 2 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index ee564676b4..aa821f0101 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1073,6 +1073,33 @@ urllib3 = ">=1.21.1,<3"
 socks = ["PySocks (>=1.5.6,!=1.5.7)"]
 use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
 
+[[package]]
+name = "ruff"
+version = "0.8.1"
+description = "An extremely fast Python linter and code formatter, written in Rust."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "ruff-0.8.1-py3-none-linux_armv6l.whl", hash = "sha256:fae0805bd514066f20309f6742f6ee7904a773eb9e6c17c45d6b1600ca65c9b5"},
+    {file = "ruff-0.8.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8a4f7385c2285c30f34b200ca5511fcc865f17578383db154e098150ce0a087"},
+    {file = "ruff-0.8.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:cd054486da0c53e41e0086e1730eb77d1f698154f910e0cd9e0d64274979a209"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2029b8c22da147c50ae577e621a5bfbc5d1fed75d86af53643d7a7aee1d23871"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2666520828dee7dfc7e47ee4ea0d928f40de72056d929a7c5292d95071d881d1"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:333c57013ef8c97a53892aa56042831c372e0bb1785ab7026187b7abd0135ad5"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:288326162804f34088ac007139488dcb43de590a5ccfec3166396530b58fb89d"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b12c39b9448632284561cbf4191aa1b005882acbc81900ffa9f9f471c8ff7e26"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:364e6674450cbac8e998f7b30639040c99d81dfb5bbc6dfad69bc7a8f916b3d1"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b22346f845fec132aa39cd29acb94451d030c10874408dbf776af3aaeb53284c"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:b2f2f7a7e7648a2bfe6ead4e0a16745db956da0e3a231ad443d2a66a105c04fa"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:adf314fc458374c25c5c4a4a9270c3e8a6a807b1bec018cfa2813d6546215540"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a885d68342a231b5ba4d30b8c6e1b1ee3a65cf37e3d29b3c74069cdf1ee1e3c9"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:d2c16e3508c8cc73e96aa5127d0df8913d2290098f776416a4b157657bee44c5"},
+    {file = "ruff-0.8.1-py3-none-win32.whl", hash = "sha256:93335cd7c0eaedb44882d75a7acb7df4b77cd7cd0d2255c93b28791716e81790"},
+    {file = "ruff-0.8.1-py3-none-win_amd64.whl", hash = "sha256:2954cdbe8dfd8ab359d4a30cd971b589d335a44d444b6ca2cb3d1da21b75e4b6"},
+    {file = "ruff-0.8.1-py3-none-win_arm64.whl", hash = "sha256:55873cc1a473e5ac129d15eccb3c008c096b94809d693fc7053f588b67822737"},
+    {file = "ruff-0.8.1.tar.gz", hash = "sha256:3583db9a6450364ed5ca3f3b4225958b24f78178908d5c4bc0f46251ccca898f"},
+]
+
 [[package]]
 name = "scapy"
 version = "2.5.0"
@@ -1361,4 +1388,4 @@ zstd = ["zstandard (>=0.18.0)"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "fe9a9fdf7b43e8dce2fb5ee600921d4047fef2f4037a78bbd150f71df202493e"
+content-hash = "5f9b61492d95b09c717325396e981bb526fac9b0c16869f1aebc3a57b7b80e49"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index f69c70877a..3436d82116 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -36,6 +36,7 @@ isort = "^5.10.1"
 pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
+ruff = "^0.8.1"
 
 [tool.poetry.group.docs]
 optional = true
@@ -50,6 +51,25 @@ autodoc-pydantic = "^2.2.0"
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
+[tool.ruff]
+target-version = "py310"
+line-length = 100
+
+[tool.ruff.format]
+docstring-code-format = true
+
+[tool.ruff.lint]
+select = [
+    "F",      # pyflakes
+    "E", "W", # pycodestyle
+    "D",      # pydocstyle
+    "C90",    # mccabe
+    "I",      # isort
+]
+
+[tool.ruff.lint.pydocstyle]
+convention = "google"
+
 [tool.pylama]
 linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
-- 
2.43.0



* [PATCH 2/6] dts: enable Ruff preview pydoclint rules
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 1/6] dts: add Ruff as linter and formatter Luca Vizzarro
@ 2024-12-10 10:32 ` Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 3/6] dts: fix docstring linter errors Luca Vizzarro
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro, Paul Szczepanek

DTS requires a linter for docstrings, but the current selection is
limited. The most promising docstring linter is pydoclint. Ruff, on the
other hand, is currently in the process of implementing the pydoclint
rules. Using them would spare the project from supporting yet another
linter, without any loss of benefit.

This commit enables a selection of pydoclint rules in Ruff which, while
still in preview, are already capable of aiding the process.

DOC201 was omitted because it currently does not support one-line
docstrings, and tries to enforce full ones even when they are not needed.

DOC502 was omitted because it complains about exceptions that are
documented but not raised in the body of the function. While treating a
documented exception that does not appear to be raised as an error is a
sound argument, the rule currently doesn't work well with inherited
class methods whose parent implementations do raise the documented
exceptions.
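
As a rough, hypothetical sketch of what the enabled rules catch: DOC501
flags a raised exception that is missing from the docstring, and DOC202
flags a documented return value where the function returns nothing
(DOC402/DOC403 behave analogously for generator `Yields:` sections):

    def parse_count(value: str) -> int:
        """Parse `value` into a count.

        Returns:
            The parsed count.
        """
        if not value.isdigit():
            # DOC501: `ValueError` is raised, but there is no `Raises:` section.
            raise ValueError(f"not a number: {value}")
        return int(value)

    def reset_counters() -> None:
        """Reset all counters.

        Returns:
            Nothing.
        """
        # DOC202: a `Returns:` section is documented, but nothing is returned.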

Bugzilla ID: 1455

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/pyproject.toml | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 3436d82116..2658a3d22c 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -65,7 +65,11 @@ select = [
     "D",      # pydocstyle
     "C90",    # mccabe
     "I",      # isort
+    # pydoclint
+    "DOC202", "DOC402", "DOC403", "DOC501"
 ]
+preview = true # enable to get early access to pydoclint rules
+explicit-preview-rules = true # enable ONLY the explicitly selected preview rules
 
 [tool.ruff.lint.pydocstyle]
 convention = "google"
-- 
2.43.0



* [PATCH 3/6] dts: fix docstring linter errors
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 1/6] dts: add Ruff as linter and formatter Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 2/6] dts: enable Ruff preview pydoclint rules Luca Vizzarro
@ 2024-12-10 10:32 ` Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 4/6] dts: apply Ruff formatting Luca Vizzarro
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro, Paul Szczepanek

The addition of the Ruff pydocstyle and pydoclint rules has surfaced new
problems in the docstrings, which need to be fixed.
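
Most of the fixes follow the same pattern: adding missing `Raises:`
sections (DOC501) and documenting previously undocumented arguments such
as `**kwargs`. A minimal sketch of the pattern, using hypothetical code:

    import json

    def load_config(path: str) -> dict:
        """Load a configuration file.

        Args:
            path: The path to the configuration file.

        Returns:
            The parsed configuration.

        Raises:
            FileNotFoundError: If `path` does not exist.
        """
        with open(path) as f:  # raises FileNotFoundError for a missing `path`
            return json.load(f)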

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 .../single_active_interactive_shell.py        | 11 +++-
 dts/framework/runner.py                       | 48 ++++++++++----
 dts/framework/settings.py                     |  6 +-
 dts/framework/test_suite.py                   | 43 ++++++++++---
 dts/framework/testbed_model/capability.py     | 33 ++++++++--
 dts/framework/testbed_model/cpu.py            | 33 ++++++++--
 dts/framework/testbed_model/linux_session.py  | 42 +++++++++---
 dts/framework/testbed_model/node.py           |  7 +-
 dts/framework/testbed_model/os_session.py     | 14 ++--
 dts/framework/testbed_model/posix_session.py  | 64 ++++++++++++++-----
 dts/framework/testbed_model/sut_node.py       | 49 ++++++++++----
 dts/framework/testbed_model/topology.py       | 10 ++-
 .../traffic_generator/__init__.py             | 11 +++-
 .../testbed_model/traffic_generator/scapy.py  | 10 ++-
 .../traffic_generator/traffic_generator.py    |  3 +-
 dts/framework/utils.py                        | 38 ++++++++---
 dts/tests/TestSuite_vlan.py                   | 30 ++++++---
 17 files changed, 347 insertions(+), 105 deletions(-)

diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index e3f6424e97..a53e8fc6e1 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -110,6 +110,7 @@ def __init__(
             app_params: The command line parameters to be passed to the application on startup.
             name: Name for the interactive shell to use for logging. This name will be appended to
                 the name of the underlying node which it is running on.
+            **kwargs: Any additional arguments if any.
         """
         self._node = node
         if name is None:
@@ -120,10 +121,12 @@ def __init__(
         self._timeout = timeout
         # Ensure path is properly formatted for the host
         self._update_real_path(self.path)
-        super().__init__(node, **kwargs)
+        super().__init__()
 
     def _setup_ssh_channel(self):
-        self._ssh_channel = self._node.main_session.interactive_session.session.invoke_shell()
+        self._ssh_channel = (
+            self._node.main_session.interactive_session.session.invoke_shell()
+        )
         self._stdin = self._ssh_channel.makefile_stdin("w")
         self._stdout = self._ssh_channel.makefile("r")
         self._ssh_channel.settimeout(self._timeout)
@@ -133,7 +136,9 @@ def _make_start_command(self) -> str:
         """Makes the command that starts the interactive shell."""
         start_command = f"{self._real_path} {self._app_params or ''}"
         if self._privileged:
-            start_command = self._node.main_session._get_privileged_command(start_command)
+            start_command = self._node.main_session._get_privileged_command(
+                start_command
+            )
         return start_command
 
     def _start_application(self) -> None:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index f91c462ce5..d228ed1b18 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -136,17 +136,25 @@ def run(self) -> None:
 
             # for all test run sections
             for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
-                test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
+                test_run_config, sut_node_config, tg_node_config = (
+                    test_run_with_nodes_config
+                )
                 self._logger.set_stage(DtsStage.test_run_setup)
-                self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
+                self._logger.info(
+                    f"Running test run with SUT '{sut_node_config.name}'."
+                )
                 self._init_random_seed(test_run_config)
                 test_run_result = self._result.add_test_run(test_run_config)
                 # we don't want to modify the original config, so create a copy
                 test_run_test_suites = list(
-                    SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites
+                    SETTINGS.test_suites
+                    if SETTINGS.test_suites
+                    else test_run_config.test_suites
                 )
                 if not test_run_config.skip_smoke_tests:
-                    test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
+                    test_run_test_suites[:0] = [
+                        TestSuiteConfig(test_suite="smoke_tests")
+                    ]
                 try:
                     test_suites_with_cases = self._get_test_suites_with_cases(
                         test_run_test_suites, test_run_config.func, test_run_config.perf
@@ -154,7 +162,8 @@ def run(self) -> None:
                     test_run_result.test_suites_with_cases = test_suites_with_cases
                 except Exception as e:
                     self._logger.exception(
-                        f"Invalid test suite configuration found: " f"{test_run_test_suites}."
+                        f"Invalid test suite configuration found: "
+                        f"{test_run_test_suites}."
                     )
                     test_run_result.update_setup(Result.FAIL, e)
 
@@ -236,7 +245,9 @@ def _get_test_suites_with_cases(
                 test_cases.extend(perf_test_cases)
 
             test_suites_with_cases.append(
-                TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
+                TestSuiteWithCases(
+                    test_suite_class=test_suite_class, test_cases=test_cases
+                )
             )
         return test_suites_with_cases
 
@@ -285,7 +296,11 @@ def _connect_nodes_and_run_test_run(
 
         else:
             self._run_test_run(
-                sut_node, tg_node, test_run_config, test_run_result, test_suites_with_cases
+                sut_node,
+                tg_node,
+                test_run_config,
+                test_run_result,
+                test_suites_with_cases,
             )
 
     def _run_test_run(
@@ -324,7 +339,8 @@ def _run_test_run(
                 )
             if dir := SETTINGS.precompiled_build_dir:
                 dpdk_build_config = DPDKPrecompiledBuildConfiguration(
-                    dpdk_location=dpdk_build_config.dpdk_location, precompiled_build_dir=dir
+                    dpdk_location=dpdk_build_config.dpdk_location,
+                    precompiled_build_dir=dir,
                 )
             sut_node.set_up_test_run(test_run_config, dpdk_build_config)
             test_run_result.dpdk_build_info = sut_node.get_dpdk_build_info()
@@ -335,7 +351,9 @@ def _run_test_run(
             test_run_result.update_setup(Result.FAIL, e)
 
         else:
-            self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
+            self._run_test_suites(
+                sut_node, tg_node, test_run_result, test_suites_with_cases
+            )
 
         finally:
             try:
@@ -360,7 +378,9 @@ def _get_supported_capabilities(
 
         self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
 
-        return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
+        return get_supported_capabilities(
+            sut_node, topology_config, capabilities_to_check
+        )
 
     def _run_test_suites(
         self,
@@ -443,6 +463,7 @@ def _run_test_suite(
         Args:
             sut_node: The test run's SUT node.
             tg_node: The test run's TG node.
+            topology: The port topology of the nodes.
             test_suite_result: The test suite level result object associated
                 with the current test suite.
             test_suite_with_cases: The test suite with test cases to run.
@@ -585,6 +606,9 @@ def _execute_test_case(
             test_case: The test case function.
             test_case_result: The test case level result object associated
                 with the current test case.
+
+        Raises:
+            KeyboardInterrupt: If DTS has been interrupted by the user.
         """
         test_case_name = test_case.__name__
         try:
@@ -601,7 +625,9 @@ def _execute_test_case(
             self._logger.exception(f"Test case execution ERROR: {test_case_name}")
             test_case_result.update(Result.ERROR, e)
         except KeyboardInterrupt:
-            self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
+            self._logger.error(
+                f"Test case execution INTERRUPTED by user: {test_case_name}"
+            )
             test_case_result.update(Result.SKIP)
             raise KeyboardInterrupt("Stop DTS")
 
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 5a8e6e5aee..91f317105a 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -257,7 +257,9 @@ def _get_help_string(self, action):
         return help
 
 
-def _required_with_one_of(parser: _DTSArgumentParser, action: Action, *required_dests: str) -> None:
+def _required_with_one_of(
+    parser: _DTSArgumentParser, action: Action, *required_dests: str
+) -> None:
     """Verify that `action` is listed together with at least one of `required_dests`.
 
     Verify that when `action` is among the command-line arguments or
@@ -461,6 +463,7 @@ def _process_dpdk_location(
     any valid :class:`DPDKLocation` with the provided parameters if validation is successful.
 
     Args:
+        parser: The instance of the arguments parser.
         dpdk_tree: The path to the DPDK source tree directory.
         tarball: The path to the DPDK tarball.
         remote: If :data:`True`, `dpdk_tree` or `tarball` is located on the SUT node, instead of the
@@ -512,6 +515,7 @@ def _process_test_suites(
     """Process the given argument to a list of :class:`TestSuiteConfig` to execute.
 
     Args:
+        parser: The instance of the arguments parser.
         args: The arguments to process. The args is a string from an environment variable
               or a list of from the user input containing tests suites with tests cases,
               each of which is a list of [test_suite, test_case, test_case, ...].
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index adee01f031..fd6706289e 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -300,7 +300,9 @@ def get_expected_packet(self, packet: Packet) -> Packet:
         """
         return self.get_expected_packets([packet])[0]
 
-    def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> list[Packet]:
+    def _adjust_addresses(
+        self, packets: list[Packet], expected: bool = False
+    ) -> list[Packet]:
         """L2 and L3 address additions in both directions.
 
         Copies of `packets` will be made, modified and returned in this method.
@@ -378,15 +380,21 @@ def verify(self, condition: bool, failure_description: str) -> None:
             self._fail_test_case_verify(failure_description)
 
     def _fail_test_case_verify(self, failure_description: str) -> None:
-        self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
+        self._logger.debug(
+            "A test case failed, showing the last 10 commands executed on SUT:"
+        )
         for command_res in self.sut_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
-        self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
+        self._logger.debug(
+            "A test case failed, showing the last 10 commands executed on TG:"
+        )
         for command_res in self.tg_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
         raise TestCaseVerifyError(failure_description)
 
-    def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
+    def verify_packets(
+        self, expected_packet: Packet, received_packets: list[Packet]
+    ) -> None:
         """Verify that `expected_packet` has been received.
 
         Go through `received_packets` and check that `expected_packet` is among them.
@@ -408,7 +416,9 @@ def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]
                 f"The expected packet {get_packet_summaries(expected_packet)} "
                 f"not found among received {get_packet_summaries(received_packets)}"
             )
-            self._fail_test_case_verify("An expected packet not found among received packets.")
+            self._fail_test_case_verify(
+                "An expected packet not found among received packets."
+            )
 
     def match_all_packets(
         self, expected_packets: list[Packet], received_packets: list[Packet]
@@ -444,7 +454,9 @@ def match_all_packets(
                 f"but {missing_packets_count} were missing."
             )
 
-    def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
+    def _compare_packets(
+        self, expected_packet: Packet, received_packet: Packet
+    ) -> bool:
         self._logger.debug(
             f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
         )
@@ -473,10 +485,14 @@ def _compare_packets(self, expected_packet: Packet, received_packet: Packet) ->
             expected_payload = expected_payload.payload
 
         if expected_payload:
-            self._logger.debug(f"The expected packet did not contain {expected_payload}.")
+            self._logger.debug(
+                f"The expected packet did not contain {expected_payload}."
+            )
             return False
         if received_payload and received_payload.__class__ != Padding:
-            self._logger.debug("The received payload had extra layers which were not padding.")
+            self._logger.debug(
+                "The received payload had extra layers which were not padding."
+            )
             return False
         return True
 
@@ -503,7 +519,10 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
 
     def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         self._logger.debug("Looking at the IP layer.")
-        if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
+        if (
+            received_packet.src != expected_packet.src
+            or received_packet.dst != expected_packet.dst
+        ):
             return False
         return True
 
@@ -615,7 +634,11 @@ def class_name(self) -> str:
 
     @cached_property
     def class_obj(self) -> type[TestSuite]:
-        """A reference to the test suite's class."""
+        """A reference to the test suite's class.
+
+        Raises:
+            InternalError: If the test suite class is missing from the module.
+        """
 
         def is_test_suite(obj) -> bool:
             """Check whether `obj` is a :class:`TestSuite`.
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 0d5f0e0b32..6e06c75c3d 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -130,7 +130,9 @@ def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
 
     @classmethod
     @abstractmethod
-    def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
+    def get_supported_capabilities(
+        cls, sut_node: SutNode, topology: "Topology"
+    ) -> set[Self]:
         """Get the support status of each registered capability.
 
         Each subclass must implement this method and return the subset of supported capabilities
@@ -224,7 +226,10 @@ def get_supported_capabilities(
             with TestPmdShell(
                 sut_node, privileged=True, disable_device_start=True
             ) as testpmd_shell:
-                for conditional_capability_fn, capabilities in capabilities_to_check_map.items():
+                for (
+                    conditional_capability_fn,
+                    capabilities,
+                ) in capabilities_to_check_map.items():
                     supported_capabilities: set[NicCapability] = set()
                     unsupported_capabilities: set[NicCapability] = set()
                     capability_fn = cls._reduce_capabilities(
@@ -237,14 +242,18 @@ def get_supported_capabilities(
                         if capability.nic_capability in supported_capabilities:
                             supported_conditional_capabilities.add(capability)
 
-        logger.debug(f"Found supported capabilities {supported_conditional_capabilities}.")
+        logger.debug(
+            f"Found supported capabilities {supported_conditional_capabilities}."
+        )
         return supported_conditional_capabilities
 
     @classmethod
     def _get_decorated_capabilities_map(
         cls,
     ) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
-        capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
+        capabilities_map: dict[
+            TestPmdShellDecorator | None, set["DecoratedNicCapability"]
+        ] = {}
         for capability in cls.capabilities_to_check:
             if capability.capability_decorator not in capabilities_map:
                 capabilities_map[capability.capability_decorator] = set()
@@ -307,7 +316,9 @@ class TopologyCapability(Capability):
     _unique_capabilities: ClassVar[dict[str, Self]] = {}
 
     def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
-        test_case_or_suite.required_capabilities.discard(test_case_or_suite.topology_type)
+        test_case_or_suite.required_capabilities.discard(
+            test_case_or_suite.topology_type
+        )
         test_case_or_suite.topology_type = self
 
     @classmethod
@@ -353,6 +364,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
         At that point, the test case topologies have been set by the :func:`requires` decorator.
         The test suite topology only affects the test case topologies
         if not :attr:`~.topology.TopologyType.default`.
+
+        Raises:
+            ConfigurationError: If the topology type requested by the test case is more complex than
+                the test suite's.
         """
         if inspect.isclass(test_case_or_suite):
             if self.topology_type is not TopologyType.default:
@@ -443,7 +458,9 @@ class TestProtocol(Protocol):
     #: The reason for skipping the test case or suite.
     skip_reason: ClassVar[str] = ""
     #: The topology type of the test case or suite.
-    topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
+    topology_type: ClassVar[TopologyCapability] = TopologyCapability(
+        TopologyType.default
+    )
     #: The capabilities the test case or suite requires in order to be executed.
     required_capabilities: ClassVar[set[Capability]] = set()
 
@@ -471,7 +488,9 @@ def requires(
         The decorated test case or test suite.
     """
 
-    def add_required_capability(test_case_or_suite: type[TestProtocol]) -> type[TestProtocol]:
+    def add_required_capability(
+        test_case_or_suite: type[TestProtocol],
+    ) -> type[TestProtocol]:
         for nic_capability in nic_capabilities:
             decorated_nic_capability = DecoratedNicCapability.get_unique(nic_capability)
             decorated_nic_capability.add_to_required(test_case_or_suite)
diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index a50cf44c19..0746878770 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -87,7 +87,9 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         # the input lcores may not be sorted
         self._lcore_list.sort()
-        self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+        self._lcore_str = (
+            f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+        )
 
     @property
     def lcore_list(self) -> list[int]:
@@ -102,11 +104,15 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
                 segment.append(lcore_id)
             else:
                 formatted_core_list.append(
-                    f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+                    f"{segment[0]}-{segment[-1]}"
+                    if len(segment) > 1
+                    else f"{segment[0]}"
                 )
                 current_core_index = lcore_ids_list.index(lcore_id)
                 formatted_core_list.extend(
-                    self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
+                    self._get_consecutive_lcores_range(
+                        lcore_ids_list[current_core_index:]
+                    )
                 )
                 segment.clear()
                 break
@@ -166,7 +172,9 @@ def __init__(
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
-        self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
+        self._lcores_to_filter = sorted(
+            lcore_list, key=lambda x: x.core, reverse=not ascending
+        )
         self.filter()
 
     @abstractmethod
@@ -231,6 +239,9 @@ def _filter_sockets(
 
         Returns:
             A list of lists of logical CPU cores. Each list contains cores from one socket.
+
+        Raises:
+            ValueError: If the number of the requested sockets by the filter can't be satisfied.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
@@ -272,6 +283,10 @@ def _filter_cores_from_socket(
 
         Returns:
             The filtered logical CPU cores.
+
+        Raises:
+            ValueError: If the number of the requested cores per socket by the filter
+                can't be satisfied.
         """
         # no need to use ordered dict, from Python3.7 the dict
         # insertion order is preserved (LIFO).
@@ -287,7 +302,9 @@ def _filter_cores_from_socket(
                 else:
                     # we have enough lcores per this core
                     continue
-            elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
+            elif self._filter_specifier.cores_per_socket > len(
+                lcore_count_per_core_map
+            ):
                 # only add cores if we need more
                 lcore_count_per_core_map[lcore.core] = 1
                 filtered_lcores.append(lcore)
@@ -327,6 +344,9 @@ def filter(self) -> list[LogicalCore]:
 
         Return:
             The filtered logical CPU cores.
+
+        Raises:
+            ValueError: If the specified lcore filter specifier is invalid.
         """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
@@ -360,6 +380,9 @@ def lcore_filter(
 
     Returns:
         The filter that corresponds to `filter_specifier`.
+
+    Raises:
+        ValueError: If the supplied `filter_specifier` is invalid.
     """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index f87efb8f18..b316f23b4e 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -83,8 +83,14 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
-        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
+    def setup_hugepages(
+        self, number_of: int, hugepage_size: int, force_first_numa: bool
+    ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.
+
+        Raises:
+            ConfigurationError: If the given `hugepage_size` is not supported by the OS.
+        """
         self._logger.info("Getting Hugepage information.")
         hugepages_total = self._get_hugepages_total(hugepage_size)
         if (
@@ -127,7 +133,9 @@ def _mount_huge_pages(self) -> None:
         if result.stdout == "":
             remote_mount_path = "/mnt/huge"
             self.send_command(f"mkdir -p {remote_mount_path}", privileged=True)
-            self.send_command(f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True)
+            self.send_command(
+                f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True
+            )
 
     def _supports_numa(self) -> bool:
         # the system supports numa if self._numa_nodes is non-empty and there are more
@@ -135,9 +143,13 @@ def _supports_numa(self) -> bool:
         # there's no reason to do any numa specific configuration)
         return len(self._numa_nodes) > 1
 
-    def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: bool) -> None:
+    def _configure_huge_pages(
+        self, number_of: int, size: int, force_first_numa: bool
+    ) -> None:
         self._logger.info("Configuring Hugepages.")
-        hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+        hugepage_config_path = (
+            f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+        )
         if force_first_numa and self._supports_numa():
             # clear non-numa hugepages
             self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -146,19 +158,25 @@ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: boo
                 f"/hugepages-{size}kB/nr_hugepages"
             )
 
-        self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
+        self.send_command(
+            f"echo {number_of} | tee {hugepage_config_path}", privileged=True
+        )
 
     def update_ports(self, ports: list[Port]) -> None:
         """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
-            assert port.node == self.name, "Attempted to gather port info on the wrong node"
+            assert (
+                port.node == self.name
+            ), "Attempted to gather port info on the wrong node"
 
         port_info_list = self._get_lshw_info()
         for port in ports:
             for port_info in port_info_list:
                 if f"pci@{port.pci}" == port_info.get("businfo"):
-                    self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
+                    self._update_port_attr(
+                        port, port_info.get("logicalname"), "logical_name"
+                    )
                     self._update_port_attr(port, port_info.get("serial"), "mac_address")
                     port_info_list.remove(port_info)
                     break
@@ -169,10 +187,14 @@ def _get_lshw_info(self) -> list[LshwOutput]:
         output = self.send_command("lshw -quiet -json -C network", verify=True)
         return json.loads(output.stdout)
 
-    def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
+    def _update_port_attr(
+        self, port: Port, attr_value: str | None, attr_name: str
+    ) -> None:
         if attr_value:
             setattr(port, attr_name, attr_value)
-            self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
+            self._logger.debug(
+                f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
+            )
         else:
             self._logger.warning(
                 f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index c1844ecd5d..e8021a4afe 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -198,13 +198,18 @@ def close(self) -> None:
             session.close()
 
 
-def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession:
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLogger
+) -> OSSession:
     """Factory for OS-aware sessions.
 
     Args:
         node_config: The test run configuration of the node to connect to.
         name: The name of the session.
         logger: The logger instance this session will use.
+
+    Raises:
+        ConfigurationError: If the node's OS is unsupported.
     """
     match node_config.os:
         case OS.linux:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 62add7a4df..1b2885be5d 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -195,7 +195,9 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         """
 
     @abstractmethod
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
+    def copy_from(
+        self, source_file: str | PurePath, destination_dir: str | Path
+    ) -> None:
         """Copy a file from the remote node to the local filesystem.
 
         Copy `source_file` from the remote node associated with this remote
@@ -301,7 +303,9 @@ def copy_dir_to(
         """
 
     @abstractmethod
-    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
+    def remove_remote_file(
+        self, remote_file_path: str | PurePath, force: bool = True
+    ) -> None:
         """Remove remote file, by default remove forcefully.
 
         Args:
@@ -366,7 +370,7 @@ def is_remote_dir(self, remote_path: PurePath) -> bool:
         """Check if the `remote_path` is a directory.
 
         Args:
-            remote_tarball_path: The path to the remote tarball.
+            remote_path: The path to the remote tarball.
 
         Returns:
             If :data:`True` the `remote_path` is a directory, otherwise :data:`False`.
@@ -475,7 +479,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
 
     @abstractmethod
-    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
+    def setup_hugepages(
+        self, number_of: int, hugepage_size: int, force_first_numa: bool
+    ) -> None:
         """Configure hugepages on the node.
 
         Get the node's Hugepage Size, configure the specified count of hugepages
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index c0cca2ac50..f707b6e17b 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -96,7 +96,9 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         result = self.send_command(f"test -e {remote_path}")
         return not result.return_code
 
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
+    def copy_from(
+        self, source_file: str | PurePath, destination_dir: str | Path
+    ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_dir)
 
@@ -113,12 +115,16 @@ def copy_dir_from(
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_dir_from`."""
         source_dir = PurePath(source_dir)
-        remote_tarball_path = self.create_remote_tarball(source_dir, compress_format, exclude)
+        remote_tarball_path = self.create_remote_tarball(
+            source_dir, compress_format, exclude
+        )
 
         self.copy_from(remote_tarball_path, destination_dir)
         self.remove_remote_file(remote_tarball_path)
 
-        tarball_path = Path(destination_dir, f"{source_dir.name}.{compress_format.extension}")
+        tarball_path = Path(
+            destination_dir, f"{source_dir.name}.{compress_format.extension}"
+        )
         extract_tarball(tarball_path)
         tarball_path.unlink()
 
@@ -141,7 +147,9 @@ def copy_dir_to(
         self.extract_remote_tarball(remote_tar_path)
         self.remove_remote_file(remote_tar_path)
 
-    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
+    def remove_remote_file(
+        self, remote_file_path: str | PurePath, force: bool = True
+    ) -> None:
         """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(f=force)
         self.send_command(f"rm{opts} {remote_file_path}")
@@ -176,11 +184,15 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
             """
             if exclude_patterns:
                 exclude_patterns = convert_to_list_of_string(exclude_patterns)
-                return "".join([f" --exclude={pattern}" for pattern in exclude_patterns])
+                return "".join(
+                    [f" --exclude={pattern}" for pattern in exclude_patterns]
+                )
             return ""
 
         posix_remote_dir_path = PurePosixPath(remote_dir_path)
-        target_tarball_path = PurePosixPath(f"{remote_dir_path}.{compress_format.extension}")
+        target_tarball_path = PurePosixPath(
+            f"{remote_dir_path}.{compress_format.extension}"
+        )
 
         self.send_command(
             f"tar caf {target_tarball_path}{generate_tar_exclude_args(exclude)} "
@@ -191,7 +203,9 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
         return target_tarball_path
 
     def extract_remote_tarball(
-        self, remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None
+        self,
+        remote_tarball_path: str | PurePath,
+        expected_dir: str | PurePath | None = None,
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
@@ -236,7 +250,11 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`.
+
+        Raises:
+            DPDKBuildError: If the DPDK build failed.
+        """
         try:
             if rebuild:
                 # reconfigure, then build
@@ -267,7 +285,9 @@ def build_dpdk(
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
-        out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
+        out = self.send_command(
+            f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+        )
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -282,7 +302,9 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._check_dpdk_hugepages(dpdk_runtime_dirs)
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
-    def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
+    def _get_dpdk_runtime_dirs(
+        self, dpdk_prefix_list: Iterable[str]
+    ) -> list[PurePosixPath]:
         """Find runtime directories DPDK apps are currently using.
 
         Args:
@@ -310,7 +332,9 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         Returns:
             The contents of remote_path. If remote_path doesn't exist, return None.
         """
-        out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
+        out = self.send_command(
+            f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+        ).stdout
         if "No such file or directory" in out:
             return None
         else:
@@ -341,7 +365,9 @@ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[in
                             pids.append(int(match.group(1)))
         return pids
 
-    def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+    def _check_dpdk_hugepages(
+        self, dpdk_runtime_dirs: Iterable[str | PurePath]
+    ) -> None:
         """Check there aren't any leftover hugepages.
 
         If any hugepages are found, emit a warning. The hugepages are investigated in the
@@ -360,7 +386,9 @@ def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) ->
                     self._logger.warning(out)
                     self._logger.warning("*******************************************")
 
-    def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+    def _remove_dpdk_runtime_dirs(
+        self, dpdk_runtime_dirs: Iterable[str | PurePath]
+    ) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
@@ -369,7 +397,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
-        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.
+
+        Raises:
+            ValueError: If the given `compiler_name` is invalid.
+        """
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -393,4 +425,6 @@ def get_node_info(self) -> OSSessionInfo:
             SETTINGS.timeout,
         ).stdout.split("\n")
         kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
-        return OSSessionInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
+        return OSSessionInfo(
+            os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
+        )
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 14d77c50a6..6adcff01c2 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -139,7 +139,9 @@ def remote_dpdk_build_dir(self) -> str | PurePath:
     def dpdk_version(self) -> str | None:
         """Last built DPDK version."""
         if self._dpdk_version is None:
-            self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
+            self._dpdk_version = self.main_session.get_dpdk_version(
+                self._remote_dpdk_tree_path
+            )
         return self._dpdk_version
 
     @property
@@ -153,7 +155,9 @@ def node_info(self) -> OSSessionInfo:
     def compiler_version(self) -> str | None:
         """The node's compiler version."""
         if self._compiler_version is None:
-            self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
+            self._logger.warning(
+                "The `compiler_version` is None because a pre-built DPDK is used."
+            )
 
         return self._compiler_version
 
@@ -181,7 +185,9 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
         Returns:
             The DPDK build information,
         """
-        return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
+        return DPDKBuildInfo(
+            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
+        )
 
     def set_up_test_run(
         self,
@@ -264,13 +270,16 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
         Raises:
             RemoteFileNotFoundError: If the DPDK source tree is expected to be on the SUT node but
                 is not found.
+            ConfigurationError: If the remote DPDK source tree specified is not a valid directory.
         """
         if not self.main_session.remote_path_exists(dpdk_tree):
             raise RemoteFileNotFoundError(
                 f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
             )
         if not self.main_session.is_remote_dir(dpdk_tree):
-            raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
+            raise ConfigurationError(
+                f"Remote DPDK source tree '{dpdk_tree}' must be a directory."
+            )
 
         self.__remote_dpdk_tree_path = dpdk_tree
 
@@ -306,9 +315,13 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
             ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
         """
         if not self.main_session.remote_path_exists(dpdk_tarball):
-            raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
+            raise RemoteFileNotFoundError(
+                f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT."
+            )
         if not self.main_session.is_remote_tarfile(dpdk_tarball):
-            raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
+            raise ConfigurationError(
+                f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive."
+            )
 
     def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
         """Copy the local DPDK tarball to the SUT node.
@@ -323,7 +336,9 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
             f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
         )
         self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
-        return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
+        return self.main_session.join_remote_path(
+            self._remote_tmp_dir, dpdk_tarball.name
+        )
 
     def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
         """Prepare the remote DPDK tree path and extract the tarball.
@@ -347,7 +362,9 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
             if len(remote_tarball_path.suffixes) > 1:
                 if remote_tarball_path.suffixes[-2] == ".tar":
                     suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
-                    return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
+                    return PurePath(
+                        str(remote_tarball_path).replace(suffixes_to_remove, "")
+                    )
             return remote_tarball_path.with_suffix("")
 
         tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
@@ -390,7 +407,9 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
 
         self._remote_dpdk_build_dir = PurePath(remote_dpdk_build_dir)
 
-    def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration) -> None:
+    def _configure_dpdk_build(
+        self, dpdk_build_config: DPDKBuildOptionsConfiguration
+    ) -> None:
         """Populate common environment variables and set the DPDK build related properties.
 
         This method sets `compiler_version` for additional information and `remote_dpdk_build_dir`
@@ -400,9 +419,13 @@ def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration
             dpdk_build_config: A DPDK build configuration to test.
         """
         self._env_vars = {}
-        self._env_vars.update(self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch))
+        self._env_vars.update(
+            self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch)
+        )
         if compiler_wrapper := dpdk_build_config.compiler_wrapper:
-            self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
+            self._env_vars["CC"] = (
+                f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
+            )
         else:
             self._env_vars["CC"] = dpdk_build_config.compiler.name
 
@@ -453,7 +476,9 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
         if app_name == "all":
-            return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
+            return self.main_session.join_remote_path(
+                self.remote_dpdk_build_dir, "examples"
+            )
         return self.main_session.join_remote_path(
             self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
         )
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 3824804310..2c10aff4ef 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -43,6 +43,12 @@ def get_from_value(cls, value: int) -> "TopologyType":
         :class:`TopologyType` is a regular :class:`~enum.Enum`.
         When getting an instance from value, we're not interested in the default,
         since we already know the value, allowing us to remove the ambiguity.
+
+        Args:
+            value: The value of the requested enum.
+
+        Raises:
+            ConfigurationError: If an unsupported link topology is supplied.
         """
         match value:
             case 0:
@@ -52,7 +58,9 @@ def get_from_value(cls, value: int) -> "TopologyType":
             case 2:
                 return TopologyType.two_links
             case _:
-                raise ConfigurationError("More than two links in a topology are not supported.")
+                raise ConfigurationError(
+                    "More than two links in a topology are not supported."
+                )
 
 
 class Topology:
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 945f6bbbbb..e7fd511a00 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -33,9 +33,16 @@ def create_traffic_generator(
 
     Returns:
         A traffic generator capable of capturing received packets.
+
+    Raises:
+        ConfigurationError: If an unknown traffic generator has been set up.
     """
     match traffic_generator_config:
         case ScapyTrafficGeneratorConfig():
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
+            return ScapyTrafficGenerator(
+                tg_node, traffic_generator_config, privileged=True
+            )
         case _:
-            raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
+            raise ConfigurationError(
+                f"Unknown traffic generator: {traffic_generator_config.type}"
+            )
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 1251ca65a0..16cc361cab 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -173,7 +173,11 @@ def _create_packet_filter(self, filter_config: PacketFilteringConfig) -> str:
         return " && ".join(bpf_filter)
 
     def _shell_create_sniffer(
-        self, packets_to_send: list[Packet], send_port: Port, recv_port: Port, filter_config: str
+        self,
+        packets_to_send: list[Packet],
+        send_port: Port,
+        recv_port: Port,
+        filter_config: str,
     ) -> None:
         """Create an asynchronous sniffer in the shell.
 
@@ -227,7 +231,9 @@ def _shell_start_and_stop_sniffing(self, duration: float) -> list[Packet]:
         self.send_command(f"{self._sniffer_name}.start()")
         # Insert a one second delay to prevent timeout errors from occurring
         time.sleep(duration + 1)
-        self.send_command(f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)")
+        self.send_command(
+            f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)"
+        )
         # An extra newline is required here due to the nature of interactive Python shells
         packet_strs = self.send_command(
             f"for pakt in {sniffed_packets_name}: print(bytes_base64(pakt.build()))\n"
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 5ac61cd4e1..42b6735646 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -42,11 +42,12 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig, **kwargs):
         Args:
             tg_node: The traffic generator node where the created traffic generator will be running.
             config: The traffic generator's test run configuration.
+            **kwargs: Any additional arguments, if any.
         """
         self._config = config
         self._tg_node = tg_node
         self._logger = get_dts_logger(f"{self._tg_node.name} {self._config.type}")
-        super().__init__(tg_node, **kwargs)
+        super().__init__()
 
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send `packet` and block until it is fully sent.
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index bc3f8d6d0f..6ff9a485ba 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -31,7 +31,9 @@
 REGEX_FOR_PCI_ADDRESS: str = r"[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}"
 _REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
 _REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
-REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
+REGEX_FOR_MAC_ADDRESS: str = (
+    rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
+)
 REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
 
 
@@ -56,7 +58,9 @@ def expand_range(range_str: str) -> list[int]:
         range_boundaries = range_str.split("-")
         # will throw an exception when items in range_boundaries can't be converted,
         # serving as type check
-        expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
+        expanded_range.extend(
+            range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+        )
 
     return expanded_range
 
@@ -73,7 +77,9 @@ def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
-        packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
+        packet_summaries = json.dumps(
+            list(map(lambda pkt: pkt.summary(), packets)), indent=4
+        )
     return f"Packet contents: \n{packet_summaries}"
 
 
@@ -81,7 +87,9 @@ class StrEnum(Enum):
     """Enum with members stored as strings."""
 
     @staticmethod
-    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
         return name
 
     def __str__(self) -> str:
@@ -108,7 +116,9 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
 
                 meson_args = MesonArgs(enable_kmods=True).
         """
-        self._default_library = f"--default-library={default_library}" if default_library else ""
+        self._default_library = (
+            f"--default-library={default_library}" if default_library else ""
+        )
         self._dpdk_args = " ".join(
             (
                 f"-D{dpdk_arg_name}={dpdk_arg_value}"
@@ -149,7 +159,9 @@ def extension(self):
         For other compression formats, the extension will be in the format
         'tar.{compression format}'.
         """
-        return f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
+        return (
+            f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
+        )
 
 
 def convert_to_list_of_string(value: Any | list[Any]) -> list[str]:
@@ -177,7 +189,9 @@ def create_tarball(
         The path to the created tarball.
     """
 
-    def create_filter_function(exclude_patterns: str | list[str] | None) -> Callable | None:
+    def create_filter_function(
+        exclude_patterns: str | list[str] | None,
+    ) -> Callable | None:
         """Create a filter function based on the provided exclude patterns.
 
         Args:
@@ -192,7 +206,9 @@ def create_filter_function(exclude_patterns: str | list[str] | None) -> Callable
 
             def filter_func(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo | None:
                 file_name = os.path.basename(tarinfo.name)
-                if any(fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns):
+                if any(
+                    fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns
+                ):
                     return None
                 return tarinfo
 
@@ -285,7 +301,9 @@ def _make_packet() -> Packet:
             packet /= random.choice(l4_factories)(sport=src_port, dport=dst_port)
 
         max_payload_size = mtu - len(packet)
-        usable_payload_size = payload_size if payload_size < max_payload_size else max_payload_size
+        usable_payload_size = (
+            payload_size if payload_size < max_payload_size else max_payload_size
+        )
         return packet / random.randbytes(usable_payload_size)
 
     return [_make_packet() for _ in range(number_of)]
@@ -300,7 +318,7 @@ class MultiInheritanceBaseClass:
     :meth:`super.__init__` without repercussion.
     """
 
-    def __init__(self, *args, **kwargs) -> None:
+    def __init__(self) -> None:
         """Call the init method of :class:`object`."""
         super().__init__()
 
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index 7cfbd7ea00..524854ea89 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -38,7 +38,9 @@ class TestVlan(TestSuite):
     tag when insertion is enabled.
     """
 
-    def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
+    def send_vlan_packet_and_verify(
+        self, should_receive: bool, strip: bool, vlan_id: int
+    ) -> None:
         """Generate a VLAN packet, send and verify packet with same payload is received on the dut.
 
         Args:
@@ -57,12 +59,14 @@ def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id
                 break
         if should_receive:
             self.verify(
-                test_packet is not None, "Packet was dropped when it should have been received"
+                test_packet is not None,
+                "Packet was dropped when it should have been received",
             )
             if test_packet is not None:
                 if strip:
                     self.verify(
-                        not test_packet.haslayer(Dot1Q), "VLAN tag was not stripped successfully"
+                        not test_packet.haslayer(Dot1Q),
+                        "VLAN tag was not stripped successfully",
                     )
                 else:
                     self.verify(
@@ -88,11 +92,18 @@ def send_packet_and_verify_insertion(self, expected_id: int) -> None:
             if hasattr(packet, "load") and b"xxxxx" in packet.load:
                 test_packet = packet
                 break
-        self.verify(test_packet is not None, "Packet was dropped when it should have been received")
+        self.verify(
+            test_packet is not None,
+            "Packet was dropped when it should have been received",
+        )
         if test_packet is not None:
-            self.verify(test_packet.haslayer(Dot1Q), "The received packet did not have a VLAN tag")
             self.verify(
-                test_packet.vlan == expected_id, "The received tag did not match the expected tag"
+                test_packet.haslayer(Dot1Q),
+                "The received packet did not have a VLAN tag",
+            )
+            self.verify(
+                test_packet.vlan == expected_id,
+                "The received tag did not match the expected tag",
             )
 
     def vlan_setup(self, testpmd: TestPmdShell, port_id: int, filtered_id: int) -> None:
@@ -102,9 +113,6 @@ def vlan_setup(self, testpmd: TestPmdShell, port_id: int, filtered_id: int) -> N
             testpmd: Testpmd shell session to send commands to.
             port_id: Number of port to use for setup.
             filtered_id: ID to be added to the VLAN filter list.
-
-        Returns:
-            TestPmdShell: Testpmd session being configured.
         """
         testpmd.set_forward_mode(SimpleForwardingModes.mac)
         testpmd.set_promisc(port_id, False)
@@ -147,7 +155,9 @@ def test_vlan_no_receipt(self) -> None:
         with TestPmdShell(node=self.sut_node) as testpmd:
             self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
             testpmd.start()
-            self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
+            self.send_vlan_packet_and_verify(
+                should_receive=False, strip=False, vlan_id=2
+            )
 
     @func_test
     def test_vlan_header_insertion(self) -> None:
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 4/6] dts: apply Ruff formatting
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
                   ` (2 preceding siblings ...)
  2024-12-10 10:32 ` [PATCH 3/6] dts: fix docstring linter errors Luca Vizzarro
@ 2024-12-10 10:32 ` Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 5/6] dts: update dts-check-format to use Ruff Luca Vizzarro
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro, Paul Szczepanek

While Ruff's formatting is Black-compatible and near-identical, it
still formats a small set of elements differently, so those need to be
reformatted once when switching over.
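
For reference, the reformat applied here can be reproduced from the dts
directory with an invocation along these lines (a sketch, assuming Ruff
is already available in the Poetry environment set up by the earlier
patches of this series; the exact flags may differ):

    poetry install                     # make sure Ruff is installed in the dts environment
    poetry run ruff format --diff .    # preview what would be reformatted
    poetry run ruff format .           # rewrite the files in place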

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/framework/params/eal.py                   |  5 +-
 dts/framework/remote_session/dpdk_shell.py    |  1 -
 dts/framework/remote_session/python_shell.py  |  1 +
 .../single_active_interactive_shell.py        |  8 +--
 dts/framework/runner.py                       | 36 ++++----------
 dts/framework/settings.py                     |  5 +-
 dts/framework/test_suite.py                   | 37 ++++----------
 dts/framework/testbed_model/capability.py     | 20 ++------
 dts/framework/testbed_model/cpu.py            | 28 ++++-------
 dts/framework/testbed_model/linux_session.py  | 36 ++++----------
 dts/framework/testbed_model/node.py           |  4 +-
 dts/framework/testbed_model/os_session.py     | 13 ++---
 dts/framework/testbed_model/port.py           |  1 -
 dts/framework/testbed_model/posix_session.py  | 48 +++++-------------
 dts/framework/testbed_model/sut_node.py       | 49 +++++--------------
 dts/framework/testbed_model/topology.py       |  4 +-
 .../traffic_generator/__init__.py             |  8 +--
 .../testbed_model/traffic_generator/scapy.py  |  5 +-
 dts/framework/utils.py                        | 32 +++---------
 dts/tests/TestSuite_vlan.py                   |  8 +--
 20 files changed, 90 insertions(+), 259 deletions(-)

diff --git a/dts/framework/params/eal.py b/dts/framework/params/eal.py
index 71bc781eab..b90ff33dcf 100644
--- a/dts/framework/params/eal.py
+++ b/dts/framework/params/eal.py
@@ -27,10 +27,7 @@ class EalParams(Params):
         no_pci: Switch to disable PCI bus, e.g.: ``no_pci=True``.
         vdevs: Virtual devices, e.g.::
 
-            vdevs=[
-                VirtualDevice('net_ring0'),
-                VirtualDevice('net_ring1')
-            ]
+            vdevs = [VirtualDevice("net_ring0"), VirtualDevice("net_ring1")]
 
         ports: The list of ports to allow.
         other_eal_param: user defined DPDK EAL parameters, e.g.::
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index 82fa4755f0..c11d9ab81c 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -6,7 +6,6 @@
 Provides a base class to create interactive shells based on DPDK.
 """
 
-
 from abc import ABC
 from pathlib import PurePath
 
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index 953ed100df..9d4abab12c 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -6,6 +6,7 @@
 Typical usage example in a TestSuite::
 
     from framework.remote_session import PythonShell
+
     python_shell = PythonShell(self.tg_node, timeout=5, privileged=True)
     python_shell.send_command("print('Hello World')")
     python_shell.close()
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index a53e8fc6e1..3539f634f9 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -124,9 +124,7 @@ def __init__(
         super().__init__()
 
     def _setup_ssh_channel(self):
-        self._ssh_channel = (
-            self._node.main_session.interactive_session.session.invoke_shell()
-        )
+        self._ssh_channel = self._node.main_session.interactive_session.session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
         self._stdout = self._ssh_channel.makefile("r")
         self._ssh_channel.settimeout(self._timeout)
@@ -136,9 +134,7 @@ def _make_start_command(self) -> str:
         """Makes the command that starts the interactive shell."""
         start_command = f"{self._real_path} {self._app_params or ''}"
         if self._privileged:
-            start_command = self._node.main_session._get_privileged_command(
-                start_command
-            )
+            start_command = self._node.main_session._get_privileged_command(start_command)
         return start_command
 
     def _start_application(self) -> None:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index d228ed1b18..510be1a870 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -136,25 +136,17 @@ def run(self) -> None:
 
             # for all test run sections
             for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
-                test_run_config, sut_node_config, tg_node_config = (
-                    test_run_with_nodes_config
-                )
+                test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
                 self._logger.set_stage(DtsStage.test_run_setup)
-                self._logger.info(
-                    f"Running test run with SUT '{sut_node_config.name}'."
-                )
+                self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
                 self._init_random_seed(test_run_config)
                 test_run_result = self._result.add_test_run(test_run_config)
                 # we don't want to modify the original config, so create a copy
                 test_run_test_suites = list(
-                    SETTINGS.test_suites
-                    if SETTINGS.test_suites
-                    else test_run_config.test_suites
+                    SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites
                 )
                 if not test_run_config.skip_smoke_tests:
-                    test_run_test_suites[:0] = [
-                        TestSuiteConfig(test_suite="smoke_tests")
-                    ]
+                    test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
                 try:
                     test_suites_with_cases = self._get_test_suites_with_cases(
                         test_run_test_suites, test_run_config.func, test_run_config.perf
@@ -162,8 +154,7 @@ def run(self) -> None:
                     test_run_result.test_suites_with_cases = test_suites_with_cases
                 except Exception as e:
                     self._logger.exception(
-                        f"Invalid test suite configuration found: "
-                        f"{test_run_test_suites}."
+                        f"Invalid test suite configuration found: " f"{test_run_test_suites}."
                     )
                     test_run_result.update_setup(Result.FAIL, e)
 
@@ -245,9 +236,7 @@ def _get_test_suites_with_cases(
                 test_cases.extend(perf_test_cases)
 
             test_suites_with_cases.append(
-                TestSuiteWithCases(
-                    test_suite_class=test_suite_class, test_cases=test_cases
-                )
+                TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
             )
         return test_suites_with_cases
 
@@ -351,9 +340,7 @@ def _run_test_run(
             test_run_result.update_setup(Result.FAIL, e)
 
         else:
-            self._run_test_suites(
-                sut_node, tg_node, test_run_result, test_suites_with_cases
-            )
+            self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
 
         finally:
             try:
@@ -371,16 +358,13 @@ def _get_supported_capabilities(
         topology_config: Topology,
         test_suites_with_cases: Iterable[TestSuiteWithCases],
     ) -> set[Capability]:
-
         capabilities_to_check = set()
         for test_suite_with_cases in test_suites_with_cases:
             capabilities_to_check.update(test_suite_with_cases.required_capabilities)
 
         self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
 
-        return get_supported_capabilities(
-            sut_node, topology_config, capabilities_to_check
-        )
+        return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
 
     def _run_test_suites(
         self,
@@ -625,9 +609,7 @@ def _execute_test_case(
             self._logger.exception(f"Test case execution ERROR: {test_case_name}")
             test_case_result.update(Result.ERROR, e)
         except KeyboardInterrupt:
-            self._logger.error(
-                f"Test case execution INTERRUPTED by user: {test_case_name}"
-            )
+            self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
             test_case_result.update(Result.SKIP)
             raise KeyboardInterrupt("Stop DTS")
 
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 91f317105a..873d400bec 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -88,6 +88,7 @@
 Typical usage example::
 
   from framework.settings import SETTINGS
+
   foo = SETTINGS.foo
 """
 
@@ -257,9 +258,7 @@ def _get_help_string(self, action):
         return help
 
 
-def _required_with_one_of(
-    parser: _DTSArgumentParser, action: Action, *required_dests: str
-) -> None:
+def _required_with_one_of(parser: _DTSArgumentParser, action: Action, *required_dests: str) -> None:
     """Verify that `action` is listed together with at least one of `required_dests`.
 
     Verify that when `action` is among the command-line arguments or
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index fd6706289e..161bb10066 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -300,9 +300,7 @@ def get_expected_packet(self, packet: Packet) -> Packet:
         """
         return self.get_expected_packets([packet])[0]
 
-    def _adjust_addresses(
-        self, packets: list[Packet], expected: bool = False
-    ) -> list[Packet]:
+    def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> list[Packet]:
         """L2 and L3 address additions in both directions.
 
         Copies of `packets` will be made, modified and returned in this method.
@@ -380,21 +378,15 @@ def verify(self, condition: bool, failure_description: str) -> None:
             self._fail_test_case_verify(failure_description)
 
     def _fail_test_case_verify(self, failure_description: str) -> None:
-        self._logger.debug(
-            "A test case failed, showing the last 10 commands executed on SUT:"
-        )
+        self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
         for command_res in self.sut_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
-        self._logger.debug(
-            "A test case failed, showing the last 10 commands executed on TG:"
-        )
+        self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
         for command_res in self.tg_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
         raise TestCaseVerifyError(failure_description)
 
-    def verify_packets(
-        self, expected_packet: Packet, received_packets: list[Packet]
-    ) -> None:
+    def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
         """Verify that `expected_packet` has been received.
 
         Go through `received_packets` and check that `expected_packet` is among them.
@@ -416,9 +408,7 @@ def verify_packets(
                 f"The expected packet {get_packet_summaries(expected_packet)} "
                 f"not found among received {get_packet_summaries(received_packets)}"
             )
-            self._fail_test_case_verify(
-                "An expected packet not found among received packets."
-            )
+            self._fail_test_case_verify("An expected packet not found among received packets.")
 
     def match_all_packets(
         self, expected_packets: list[Packet], received_packets: list[Packet]
@@ -454,9 +444,7 @@ def match_all_packets(
                 f"but {missing_packets_count} were missing."
             )
 
-    def _compare_packets(
-        self, expected_packet: Packet, received_packet: Packet
-    ) -> bool:
+    def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
         self._logger.debug(
             f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
         )
@@ -485,14 +473,10 @@ def _compare_packets(
             expected_payload = expected_payload.payload
 
         if expected_payload:
-            self._logger.debug(
-                f"The expected packet did not contain {expected_payload}."
-            )
+            self._logger.debug(f"The expected packet did not contain {expected_payload}.")
             return False
         if received_payload and received_payload.__class__ != Padding:
-            self._logger.debug(
-                "The received payload had extra layers which were not padding."
-            )
+            self._logger.debug("The received payload had extra layers which were not padding.")
             return False
         return True
 
@@ -519,10 +503,7 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
 
     def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         self._logger.debug("Looking at the IP layer.")
-        if (
-            received_packet.src != expected_packet.src
-            or received_packet.dst != expected_packet.dst
-        ):
+        if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
             return False
         return True
 
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 6e06c75c3d..63f99c4479 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -130,9 +130,7 @@ def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
 
     @classmethod
     @abstractmethod
-    def get_supported_capabilities(
-        cls, sut_node: SutNode, topology: "Topology"
-    ) -> set[Self]:
+    def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
         """Get the support status of each registered capability.
 
         Each subclass must implement this method and return the subset of supported capabilities
@@ -242,18 +240,14 @@ def get_supported_capabilities(
                         if capability.nic_capability in supported_capabilities:
                             supported_conditional_capabilities.add(capability)
 
-        logger.debug(
-            f"Found supported capabilities {supported_conditional_capabilities}."
-        )
+        logger.debug(f"Found supported capabilities {supported_conditional_capabilities}.")
         return supported_conditional_capabilities
 
     @classmethod
     def _get_decorated_capabilities_map(
         cls,
     ) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
-        capabilities_map: dict[
-            TestPmdShellDecorator | None, set["DecoratedNicCapability"]
-        ] = {}
+        capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
         for capability in cls.capabilities_to_check:
             if capability.capability_decorator not in capabilities_map:
                 capabilities_map[capability.capability_decorator] = set()
@@ -316,9 +310,7 @@ class TopologyCapability(Capability):
     _unique_capabilities: ClassVar[dict[str, Self]] = {}
 
     def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
-        test_case_or_suite.required_capabilities.discard(
-            test_case_or_suite.topology_type
-        )
+        test_case_or_suite.required_capabilities.discard(test_case_or_suite.topology_type)
         test_case_or_suite.topology_type = self
 
     @classmethod
@@ -458,9 +450,7 @@ class TestProtocol(Protocol):
     #: The reason for skipping the test case or suite.
     skip_reason: ClassVar[str] = ""
     #: The topology type of the test case or suite.
-    topology_type: ClassVar[TopologyCapability] = TopologyCapability(
-        TopologyType.default
-    )
+    topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
     #: The capabilities the test case or suite requires in order to be executed.
     required_capabilities: ClassVar[set[Capability]] = set()
 
diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 0746878770..46bf13960d 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -67,10 +67,10 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         There are four supported logical core list formats::
 
-            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
-            lcore_list=[0,1,2,3]        # a list of int indices
-            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
-            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+            lcore_list = [LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list = [0, 1, 2, 3]  # a list of int indices
+            lcore_list = ["0", "1", "2-3"]  # a list of str indices; ranges are supported
+            lcore_list = "0,1,2-3"  # a comma delimited str of indices; ranges are supported
 
         Args:
             lcore_list: Various ways to represent multiple logical cores.
@@ -87,9 +87,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         # the input lcores may not be sorted
         self._lcore_list.sort()
-        self._lcore_str = (
-            f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
-        )
+        self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
 
     @property
     def lcore_list(self) -> list[int]:
@@ -104,15 +102,11 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
                 segment.append(lcore_id)
             else:
                 formatted_core_list.append(
-                    f"{segment[0]}-{segment[-1]}"
-                    if len(segment) > 1
-                    else f"{segment[0]}"
+                    f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
                 )
                 current_core_index = lcore_ids_list.index(lcore_id)
                 formatted_core_list.extend(
-                    self._get_consecutive_lcores_range(
-                        lcore_ids_list[current_core_index:]
-                    )
+                    self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
                 )
                 segment.clear()
                 break
@@ -172,9 +166,7 @@ def __init__(
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
-        self._lcores_to_filter = sorted(
-            lcore_list, key=lambda x: x.core, reverse=not ascending
-        )
+        self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
         self.filter()
 
     @abstractmethod
@@ -302,9 +294,7 @@ def _filter_cores_from_socket(
                 else:
                     # we have enough lcores per this core
                     continue
-            elif self._filter_specifier.cores_per_socket > len(
-                lcore_count_per_core_map
-            ):
+            elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
                 # only add cores if we need more
                 lcore_count_per_core_map[lcore.core] = 1
                 filtered_lcores.append(lcore)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index b316f23b4e..bda2d448f7 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -83,9 +83,7 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(
-        self, number_of: int, hugepage_size: int, force_first_numa: bool
-    ) -> None:
+    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
         """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.
 
         Raises:
@@ -133,9 +131,7 @@ def _mount_huge_pages(self) -> None:
         if result.stdout == "":
             remote_mount_path = "/mnt/huge"
             self.send_command(f"mkdir -p {remote_mount_path}", privileged=True)
-            self.send_command(
-                f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True
-            )
+            self.send_command(f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True)
 
     def _supports_numa(self) -> bool:
         # the system supports numa if self._numa_nodes is non-empty and there are more
@@ -143,13 +139,9 @@ def _supports_numa(self) -> bool:
         # there's no reason to do any numa specific configuration)
         return len(self._numa_nodes) > 1
 
-    def _configure_huge_pages(
-        self, number_of: int, size: int, force_first_numa: bool
-    ) -> None:
+    def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: bool) -> None:
         self._logger.info("Configuring Hugepages.")
-        hugepage_config_path = (
-            f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
-        )
+        hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
         if force_first_numa and self._supports_numa():
             # clear non-numa hugepages
             self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -158,25 +150,19 @@ def _configure_huge_pages(
                 f"/hugepages-{size}kB/nr_hugepages"
             )
 
-        self.send_command(
-            f"echo {number_of} | tee {hugepage_config_path}", privileged=True
-        )
+        self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
 
     def update_ports(self, ports: list[Port]) -> None:
         """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
-            assert (
-                port.node == self.name
-            ), "Attempted to gather port info on the wrong node"
+            assert port.node == self.name, "Attempted to gather port info on the wrong node"
 
         port_info_list = self._get_lshw_info()
         for port in ports:
             for port_info in port_info_list:
                 if f"pci@{port.pci}" == port_info.get("businfo"):
-                    self._update_port_attr(
-                        port, port_info.get("logicalname"), "logical_name"
-                    )
+                    self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
                     self._update_port_attr(port, port_info.get("serial"), "mac_address")
                     port_info_list.remove(port_info)
                     break
@@ -187,14 +173,10 @@ def _get_lshw_info(self) -> list[LshwOutput]:
         output = self.send_command("lshw -quiet -json -C network", verify=True)
         return json.loads(output.stdout)
 
-    def _update_port_attr(
-        self, port: Port, attr_value: str | None, attr_name: str
-    ) -> None:
+    def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
         if attr_value:
             setattr(port, attr_name, attr_value)
-            self._logger.debug(
-                f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
-            )
+            self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
         else:
             self._logger.warning(
                 f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e8021a4afe..c6f12319ca 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -198,9 +198,7 @@ def close(self) -> None:
             session.close()
 
 
-def create_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLogger
-) -> OSSession:
+def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession:
     """Factory for OS-aware sessions.
 
     Args:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 1b2885be5d..28eccc05ed 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -22,6 +22,7 @@
     the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux
     and other commands for other OSs. It also translates the path to match the underlying OS.
 """
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from dataclasses import dataclass
@@ -195,9 +196,7 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         """
 
     @abstractmethod
-    def copy_from(
-        self, source_file: str | PurePath, destination_dir: str | Path
-    ) -> None:
+    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
         """Copy a file from the remote node to the local filesystem.
 
         Copy `source_file` from the remote node associated with this remote
@@ -303,9 +302,7 @@ def copy_dir_to(
         """
 
     @abstractmethod
-    def remove_remote_file(
-        self, remote_file_path: str | PurePath, force: bool = True
-    ) -> None:
+    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
         """Remove remote file, by default remove forcefully.
 
         Args:
@@ -479,9 +476,7 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
 
     @abstractmethod
-    def setup_hugepages(
-        self, number_of: int, hugepage_size: int, force_first_numa: bool
-    ) -> None:
+    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
         """Configure hugepages on the node.
 
         Get the node's Hugepage Size, configure the specified count of hugepages
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 817405bea4..566f4c5b46 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -8,7 +8,6 @@
 drivers and address.
 """
 
-
 from dataclasses import dataclass
 
 from framework.config import PortConfig
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index f707b6e17b..29e314db6e 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -96,9 +96,7 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         result = self.send_command(f"test -e {remote_path}")
         return not result.return_code
 
-    def copy_from(
-        self, source_file: str | PurePath, destination_dir: str | Path
-    ) -> None:
+    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_dir)
 
@@ -115,16 +113,12 @@ def copy_dir_from(
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_dir_from`."""
         source_dir = PurePath(source_dir)
-        remote_tarball_path = self.create_remote_tarball(
-            source_dir, compress_format, exclude
-        )
+        remote_tarball_path = self.create_remote_tarball(source_dir, compress_format, exclude)
 
         self.copy_from(remote_tarball_path, destination_dir)
         self.remove_remote_file(remote_tarball_path)
 
-        tarball_path = Path(
-            destination_dir, f"{source_dir.name}.{compress_format.extension}"
-        )
+        tarball_path = Path(destination_dir, f"{source_dir.name}.{compress_format.extension}")
         extract_tarball(tarball_path)
         tarball_path.unlink()
 
@@ -147,9 +141,7 @@ def copy_dir_to(
         self.extract_remote_tarball(remote_tar_path)
         self.remove_remote_file(remote_tar_path)
 
-    def remove_remote_file(
-        self, remote_file_path: str | PurePath, force: bool = True
-    ) -> None:
+    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
         """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(f=force)
         self.send_command(f"rm{opts} {remote_file_path}")
@@ -184,15 +176,11 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
             """
             if exclude_patterns:
                 exclude_patterns = convert_to_list_of_string(exclude_patterns)
-                return "".join(
-                    [f" --exclude={pattern}" for pattern in exclude_patterns]
-                )
+                return "".join([f" --exclude={pattern}" for pattern in exclude_patterns])
             return ""
 
         posix_remote_dir_path = PurePosixPath(remote_dir_path)
-        target_tarball_path = PurePosixPath(
-            f"{remote_dir_path}.{compress_format.extension}"
-        )
+        target_tarball_path = PurePosixPath(f"{remote_dir_path}.{compress_format.extension}")
 
         self.send_command(
             f"tar caf {target_tarball_path}{generate_tar_exclude_args(exclude)} "
@@ -285,9 +273,7 @@ def build_dpdk(
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
-        out = self.send_command(
-            f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
-        )
+        out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -302,9 +288,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._check_dpdk_hugepages(dpdk_runtime_dirs)
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
-    def _get_dpdk_runtime_dirs(
-        self, dpdk_prefix_list: Iterable[str]
-    ) -> list[PurePosixPath]:
+    def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
         """Find runtime directories DPDK apps are currently using.
 
         Args:
@@ -332,9 +316,7 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         Returns:
             The contents of remote_path. If remote_path doesn't exist, return None.
         """
-        out = self.send_command(
-            f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
-        ).stdout
+        out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
         if "No such file or directory" in out:
             return None
         else:
@@ -365,9 +347,7 @@ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[in
                             pids.append(int(match.group(1)))
         return pids
 
-    def _check_dpdk_hugepages(
-        self, dpdk_runtime_dirs: Iterable[str | PurePath]
-    ) -> None:
+    def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
         """Check there aren't any leftover hugepages.
 
         If any hugepages are found, emit a warning. The hugepages are investigated in the
@@ -386,9 +366,7 @@ def _check_dpdk_hugepages(
                     self._logger.warning(out)
                     self._logger.warning("*******************************************")
 
-    def _remove_dpdk_runtime_dirs(
-        self, dpdk_runtime_dirs: Iterable[str | PurePath]
-    ) -> None:
+    def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
@@ -425,6 +403,4 @@ def get_node_info(self) -> OSSessionInfo:
             SETTINGS.timeout,
         ).stdout.split("\n")
         kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
-        return OSSessionInfo(
-            os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
-        )
+        return OSSessionInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 6adcff01c2..a9dc0a474a 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -11,7 +11,6 @@
 An SUT node is where this SUT runs.
 """
 
-
 import os
 import time
 from dataclasses import dataclass
@@ -139,9 +138,7 @@ def remote_dpdk_build_dir(self) -> str | PurePath:
     def dpdk_version(self) -> str | None:
         """Last built DPDK version."""
         if self._dpdk_version is None:
-            self._dpdk_version = self.main_session.get_dpdk_version(
-                self._remote_dpdk_tree_path
-            )
+            self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
         return self._dpdk_version
 
     @property
@@ -155,9 +152,7 @@ def node_info(self) -> OSSessionInfo:
     def compiler_version(self) -> str | None:
         """The node's compiler version."""
         if self._compiler_version is None:
-            self._logger.warning(
-                "The `compiler_version` is None because a pre-built DPDK is used."
-            )
+            self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
 
         return self._compiler_version
 
@@ -185,9 +180,7 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
         Returns:
             The DPDK build information.
         """
-        return DPDKBuildInfo(
-            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
-        )
+        return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
 
     def set_up_test_run(
         self,
@@ -277,9 +270,7 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
                 f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
             )
         if not self.main_session.is_remote_dir(dpdk_tree):
-            raise ConfigurationError(
-                f"Remote DPDK source tree '{dpdk_tree}' must be a directory."
-            )
+            raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
 
         self.__remote_dpdk_tree_path = dpdk_tree
 
@@ -315,13 +306,9 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
             ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
         """
         if not self.main_session.remote_path_exists(dpdk_tarball):
-            raise RemoteFileNotFoundError(
-                f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT."
-            )
+            raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
         if not self.main_session.is_remote_tarfile(dpdk_tarball):
-            raise ConfigurationError(
-                f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive."
-            )
+            raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
 
     def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
         """Copy the local DPDK tarball to the SUT node.
@@ -336,9 +323,7 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
             f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
         )
         self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
-        return self.main_session.join_remote_path(
-            self._remote_tmp_dir, dpdk_tarball.name
-        )
+        return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
 
     def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
         """Prepare the remote DPDK tree path and extract the tarball.
@@ -362,9 +347,7 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
             if len(remote_tarball_path.suffixes) > 1:
                 if remote_tarball_path.suffixes[-2] == ".tar":
                     suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
-                    return PurePath(
-                        str(remote_tarball_path).replace(suffixes_to_remove, "")
-                    )
+                    return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
             return remote_tarball_path.with_suffix("")
 
         tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
@@ -407,9 +390,7 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
 
         self._remote_dpdk_build_dir = PurePath(remote_dpdk_build_dir)
 
-    def _configure_dpdk_build(
-        self, dpdk_build_config: DPDKBuildOptionsConfiguration
-    ) -> None:
+    def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration) -> None:
         """Populate common environment variables and set the DPDK build related properties.
 
         This method sets `compiler_version` for additional information and `remote_dpdk_build_dir`
@@ -419,13 +400,9 @@ def _configure_dpdk_build(
             dpdk_build_config: A DPDK build configuration to test.
         """
         self._env_vars = {}
-        self._env_vars.update(
-            self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch)
-        )
+        self._env_vars.update(self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch))
         if compiler_wrapper := dpdk_build_config.compiler_wrapper:
-            self._env_vars["CC"] = (
-                f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
-            )
+            self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
         else:
             self._env_vars["CC"] = dpdk_build_config.compiler.name
 
@@ -476,9 +453,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
         if app_name == "all":
-            return self.main_session.join_remote_path(
-                self.remote_dpdk_build_dir, "examples"
-            )
+            return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
         return self.main_session.join_remote_path(
             self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
         )
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 2c10aff4ef..0bad59d2a4 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -58,9 +58,7 @@ def get_from_value(cls, value: int) -> "TopologyType":
             case 2:
                 return TopologyType.two_links
             case _:
-                raise ConfigurationError(
-                    "More than two links in a topology are not supported."
-                )
+                raise ConfigurationError("More than two links in a topology are not supported.")
 
 
 class Topology:
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index e7fd511a00..e501f6d5ee 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -39,10 +39,6 @@ def create_traffic_generator(
     """
     match traffic_generator_config:
         case ScapyTrafficGeneratorConfig():
-            return ScapyTrafficGenerator(
-                tg_node, traffic_generator_config, privileged=True
-            )
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
         case _:
-            raise ConfigurationError(
-                f"Unknown traffic generator: {traffic_generator_config.type}"
-            )
+            raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 16cc361cab..07e1242548 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -12,7 +12,6 @@
 implement the methods for handling packets by sending commands into the interactive shell.
 """
 
-
 import re
 import time
 from typing import ClassVar
@@ -231,9 +230,7 @@ def _shell_start_and_stop_sniffing(self, duration: float) -> list[Packet]:
         self.send_command(f"{self._sniffer_name}.start()")
         # Insert a one second delay to prevent timeout errors from occurring
         time.sleep(duration + 1)
-        self.send_command(
-            f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)"
-        )
+        self.send_command(f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)")
         # An extra newline is required here due to the nature of interactive Python shells
         packet_strs = self.send_command(
             f"for pakt in {sniffed_packets_name}: print(bytes_base64(pakt.build()))\n"
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 6ff9a485ba..6839bcf243 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -31,9 +31,7 @@
 REGEX_FOR_PCI_ADDRESS: str = r"[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}"
 _REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
 _REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
-REGEX_FOR_MAC_ADDRESS: str = (
-    rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
-)
+REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
 REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
 
 
@@ -58,9 +56,7 @@ def expand_range(range_str: str) -> list[int]:
         range_boundaries = range_str.split("-")
         # will throw an exception when items in range_boundaries can't be converted,
         # serving as type check
-        expanded_range.extend(
-            range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
-        )
+        expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
 
     return expanded_range
 
@@ -77,9 +73,7 @@ def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
-        packet_summaries = json.dumps(
-            list(map(lambda pkt: pkt.summary(), packets)), indent=4
-        )
+        packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
     return f"Packet contents: \n{packet_summaries}"
 
 
@@ -87,9 +81,7 @@ class StrEnum(Enum):
     """Enum with members stored as strings."""
 
     @staticmethod
-    def _generate_next_value_(
-        name: str, start: int, count: int, last_values: object
-    ) -> str:
+    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
         return name
 
     def __str__(self) -> str:
@@ -116,9 +108,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
 
                 meson_args = MesonArgs(enable_kmods=True).
         """
-        self._default_library = (
-            f"--default-library={default_library}" if default_library else ""
-        )
+        self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
             (
                 f"-D{dpdk_arg_name}={dpdk_arg_value}"
@@ -159,9 +149,7 @@ def extension(self):
         For other compression formats, the extension will be in the format
         'tar.{compression format}'.
         """
-        return (
-            f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
-        )
+        return f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
 
 
 def convert_to_list_of_string(value: Any | list[Any]) -> list[str]:
@@ -206,9 +194,7 @@ def create_filter_function(
 
             def filter_func(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo | None:
                 file_name = os.path.basename(tarinfo.name)
-                if any(
-                    fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns
-                ):
+                if any(fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns):
                     return None
                 return tarinfo
 
@@ -301,9 +287,7 @@ def _make_packet() -> Packet:
             packet /= random.choice(l4_factories)(sport=src_port, dport=dst_port)
 
         max_payload_size = mtu - len(packet)
-        usable_payload_size = (
-            payload_size if payload_size < max_payload_size else max_payload_size
-        )
+        usable_payload_size = payload_size if payload_size < max_payload_size else max_payload_size
         return packet / random.randbytes(usable_payload_size)
 
     return [_make_packet() for _ in range(number_of)]
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index 524854ea89..35221fe362 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -38,9 +38,7 @@ class TestVlan(TestSuite):
     tag when insertion is enabled.
     """
 
-    def send_vlan_packet_and_verify(
-        self, should_receive: bool, strip: bool, vlan_id: int
-    ) -> None:
+    def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
         """Generate a VLAN packet, send and verify packet with same payload is received on the dut.
 
         Args:
@@ -155,9 +153,7 @@ def test_vlan_no_receipt(self) -> None:
         with TestPmdShell(node=self.sut_node) as testpmd:
             self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
             testpmd.start()
-            self.send_vlan_packet_and_verify(
-                should_receive=False, strip=False, vlan_id=2
-            )
+            self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
 
     @func_test
     def test_vlan_header_insertion(self) -> None:
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 5/6] dts: update dts-check-format to use Ruff
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
                   ` (3 preceding siblings ...)
  2024-12-10 10:32 ` [PATCH 4/6] dts: apply Ruff formatting Luca Vizzarro
@ 2024-12-10 10:32 ` Luca Vizzarro
  2024-12-10 10:32 ` [PATCH 6/6] dts: remove old linters and formatters Luca Vizzarro
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
  6 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro, Paul Szczepanek

Replace the current linters and formatter with Ruff in the
dts-check-format tool.

Bugzilla ID: 1358
Bugzilla ID: 1455

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 devtools/dts-check-format.sh | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/devtools/dts-check-format.sh b/devtools/dts-check-format.sh
index 3f43e17e88..44501f6d3b 100755
--- a/devtools/dts-check-format.sh
+++ b/devtools/dts-check-format.sh
@@ -52,18 +52,11 @@ if $format; then
 	if command -v git > /dev/null; then
 		if git rev-parse --is-inside-work-tree >&-; then
 			heading "Formatting in $directory/"
-			if command -v black > /dev/null; then
-				echo "Formatting code with black:"
-				black .
+			if command -v ruff > /dev/null; then
+				echo "Formatting code with ruff:"
+				ruff format
 			else
-				echo "black is not installed, not formatting"
-				errors=$((errors + 1))
-			fi
-			if command -v isort > /dev/null; then
-				echo "Sorting imports with isort:"
-				isort .
-			else
-				echo "isort is not installed, not sorting imports"
+				echo "ruff is not installed, not formatting"
 				errors=$((errors + 1))
 			fi
 
@@ -89,11 +82,18 @@ if $lint; then
 		echo
 	fi
 	heading "Linting in $directory/"
-	if command -v pylama > /dev/null; then
-		pylama .
-		errors=$((errors + $?))
+	if command -v ruff > /dev/null; then
+		ruff check --fix
+
+		git update-index --refresh
+		retval=$?
+		if [ $retval -ne 0 ]; then
+			echo 'The "needs update" files have been fixed by the linter.'
+			echo 'Please update your commit.'
+		fi
+		errors=$((errors + retval))
 	else
-		echo "pylama not found, unable to run linter"
+		echo "ruff not found, unable to run linter"
 		errors=$((errors + 1))
 	fi
 fi
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 6/6] dts: remove old linters and formatters
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
                   ` (4 preceding siblings ...)
  2024-12-10 10:32 ` [PATCH 5/6] dts: update dts-check-format to use Ruff Luca Vizzarro
@ 2024-12-10 10:32 ` Luca Vizzarro
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
  6 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-10 10:32 UTC (permalink / raw)
  To: dev, Patrick Robb; +Cc: Luca Vizzarro, Paul Szczepanek

Since the addition of Ruff, all the previously used linters and
formatters are no longer needed, so remove them.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/poetry.lock    | 170 +--------------------------------------------
 dts/pyproject.toml |  24 -------
 2 files changed, 1 insertion(+), 193 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index aa821f0101..a53bbe03b8 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -108,40 +108,6 @@ files = [
 tests = ["pytest (>=3.2.1,!=3.3.0)"]
 typecheck = ["mypy"]
 
-[[package]]
-name = "black"
-version = "22.12.0"
-description = "The uncompromising code formatter."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
-    {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
-    {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
-    {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
-    {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
-    {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
-    {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
-    {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
-    {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
-    {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
-    {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
-    {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
-]
-
-[package.dependencies]
-click = ">=8.0.0"
-mypy-extensions = ">=0.4.3"
-pathspec = ">=0.9.0"
-platformdirs = ">=2"
-tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
-
-[package.extras]
-colorama = ["colorama (>=0.4.3)"]
-d = ["aiohttp (>=3.7.4)"]
-jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
-uvloop = ["uvloop (>=0.15.2)"]
-
 [[package]]
 name = "certifi"
 version = "2023.7.22"
@@ -328,20 +294,6 @@ files = [
     {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
 ]
 
-[[package]]
-name = "click"
-version = "8.1.6"
-description = "Composable command line interface toolkit"
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "click-8.1.6-py3-none-any.whl", hash = "sha256:fa244bb30b3b5ee2cae3da8f55c9e5e0c0e86093306301fb418eb9dc40fbded5"},
-    {file = "click-8.1.6.tar.gz", hash = "sha256:48ee849951919527a045bfe3bf7baa8a959c423134e1a5b98c05c20ba75a1cbd"},
-]
-
-[package.dependencies]
-colorama = {version = "*", markers = "platform_system == \"Windows\""}
-
 [[package]]
 name = "colorama"
 version = "0.4.6"
@@ -462,23 +414,6 @@ files = [
     {file = "invoke-1.7.3.tar.gz", hash = "sha256:41b428342d466a82135d5ab37119685a989713742be46e42a3a399d685579314"},
 ]
 
-[[package]]
-name = "isort"
-version = "5.12.0"
-description = "A Python utility / library to sort Python imports."
-optional = false
-python-versions = ">=3.8.0"
-files = [
-    {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"},
-    {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"},
-]
-
-[package.extras]
-colors = ["colorama (>=0.4.3)"]
-pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"]
-plugins = ["setuptools"]
-requirements-deprecated-finder = ["pip-api", "pipreqs"]
-
 [[package]]
 name = "jinja2"
 version = "3.1.2"
@@ -565,17 +500,6 @@ files = [
     {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
 ]
 
-[[package]]
-name = "mccabe"
-version = "0.7.0"
-description = "McCabe checker, plugin for flake8"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
-    {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
-]
-
 [[package]]
 name = "mypy"
 version = "1.10.0"
@@ -680,43 +604,6 @@ files = [
 [package.dependencies]
 six = "*"
 
-[[package]]
-name = "pathspec"
-version = "0.11.1"
-description = "Utility library for gitignore style pattern matching of file paths."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "pathspec-0.11.1-py3-none-any.whl", hash = "sha256:d8af70af76652554bd134c22b3e8a1cc46ed7d91edcdd721ef1a0c51a84a5293"},
-    {file = "pathspec-0.11.1.tar.gz", hash = "sha256:2798de800fa92780e33acca925945e9a19a133b715067cf165b8866c15a31687"},
-]
-
-[[package]]
-name = "platformdirs"
-version = "3.9.1"
-description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "platformdirs-3.9.1-py3-none-any.whl", hash = "sha256:ad8291ae0ae5072f66c16945166cb11c63394c7a3ad1b1bc9828ca3162da8c2f"},
-    {file = "platformdirs-3.9.1.tar.gz", hash = "sha256:1b42b450ad933e981d56e59f1b97495428c9bd60698baab9f3eb3d00d5822421"},
-]
-
-[package.extras]
-docs = ["furo (>=2023.5.20)", "proselint (>=0.13)", "sphinx (>=7.0.1)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"]
-test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.3.1)", "pytest-cov (>=4.1)", "pytest-mock (>=3.10)"]
-
-[[package]]
-name = "pycodestyle"
-version = "2.10.0"
-description = "Python style guide checker"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "pycodestyle-2.10.0-py2.py3-none-any.whl", hash = "sha256:8a4eaf0d0495c7395bdab3589ac2db602797d76207242c17d470186815706610"},
-    {file = "pycodestyle-2.10.0.tar.gz", hash = "sha256:347187bdb476329d98f695c213d7295a846d1152ff4fe9bacb8a9590b8ee7053"},
-]
-
 [[package]]
 name = "pycparser"
 version = "2.21"
@@ -872,23 +759,6 @@ azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0
 toml = ["tomli (>=2.0.1)"]
 yaml = ["pyyaml (>=6.0.1)"]
 
-[[package]]
-name = "pydocstyle"
-version = "6.1.1"
-description = "Python docstring style checker"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
-    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
-]
-
-[package.dependencies]
-snowballstemmer = "*"
-
-[package.extras]
-toml = ["toml"]
-
 [[package]]
 name = "pyelftools"
 version = "0.31"
@@ -900,17 +770,6 @@ files = [
     {file = "pyelftools-0.31.tar.gz", hash = "sha256:c774416b10310156879443b81187d182d8d9ee499660380e645918b50bc88f99"},
 ]
 
-[[package]]
-name = "pyflakes"
-version = "2.5.0"
-description = "passive checker of Python programs"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"},
-    {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
-]
-
 [[package]]
 name = "pygments"
 version = "2.16.1"
@@ -925,33 +784,6 @@ files = [
 [package.extras]
 plugins = ["importlib-metadata"]
 
-[[package]]
-name = "pylama"
-version = "8.4.1"
-description = "Code audit tool for python"
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "pylama-8.4.1-py3-none-any.whl", hash = "sha256:5bbdbf5b620aba7206d688ed9fc917ecd3d73e15ec1a89647037a09fa3a86e60"},
-    {file = "pylama-8.4.1.tar.gz", hash = "sha256:2d4f7aecfb5b7466216d48610c7d6bad1c3990c29cdd392ad08259b161e486f6"},
-]
-
-[package.dependencies]
-mccabe = ">=0.7.0"
-pycodestyle = ">=2.9.1"
-pydocstyle = ">=6.1.1"
-pyflakes = ">=2.5.0"
-
-[package.extras]
-all = ["eradicate", "mypy", "pylint", "radon", "vulture"]
-eradicate = ["eradicate"]
-mypy = ["mypy"]
-pylint = ["pylint"]
-radon = ["radon"]
-tests = ["eradicate (>=2.0.0)", "mypy", "pylama-quotes", "pylint (>=2.11.1)", "pytest (>=7.1.2)", "pytest-mypy", "radon (>=5.1.0)", "toml", "types-setuptools", "types-toml", "vulture"]
-toml = ["toml (>=0.10.2)"]
-vulture = ["vulture"]
-
 [[package]]
 name = "pynacl"
 version = "1.5.0"
@@ -1388,4 +1220,4 @@ zstd = ["zstandard (>=0.18.0)"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "5f9b61492d95b09c717325396e981bb526fac9b0c16869f1aebc3a57b7b80e49"
+content-hash = "eb9976250d5022a9036bf2b7630ce71e95152d0440a9cb1f127b3b691429777b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 2658a3d22c..8ba76bd3a7 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -24,17 +24,12 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
-pydocstyle = "6.1.1"
 typing-extensions = "^4.11.0"
 aenum = "^3.1.15"
 pydantic = "^2.9.2"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^1.10.0"
-black = "^22.6.0"
-isort = "^5.10.1"
-pylama = "^8.4.1"
-pyflakes = "^2.5.0"
 toml = "^0.10.2"
 ruff = "^0.8.1"
 
@@ -74,27 +69,8 @@ explicit-preview-rules = true # enable ONLY the explicitly selected preview rule
 [tool.ruff.lint.pydocstyle]
 convention = "google"
 
-[tool.pylama]
-linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
-format = "pylint"
-max_line_length = 100
-
-[tool.pylama.linter.pycodestyle]
-ignore = "E203,W503"
-
-[tool.pylama.linter.pydocstyle]
-convention = "google"
-
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
 show_error_codes = true
 warn_unused_ignores = true
-
-[tool.isort]
-profile = "black"
-
-[tool.black]
-target-version = ['py310']
-include = '\.pyi?$'
-line-length = 100
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 0/7] dts: add Ruff and docstring linting
  2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
                   ` (5 preceding siblings ...)
  2024-12-10 10:32 ` [PATCH 6/6] dts: remove old linters and formatters Luca Vizzarro
@ 2024-12-12 14:00 ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 1/7] dts: add Ruff as linter and formatter Luca Vizzarro
                     ` (7 more replies)
  6 siblings, 8 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro

v2:
- updated the doc page

Luca Vizzarro (7):
  dts: add Ruff as linter and formatter
  dts: enable Ruff preview pydoclint rules
  dts: resolve docstring linter errors
  dts: apply Ruff formatting
  dts: update dts-check-format to use Ruff
  dts: remove old linters and formatters
  dts: update linters in doc page

 devtools/dts-check-format.sh                  |  30 +--
 doc/guides/tools/dts.rst                      |  26 +--
 dts/framework/params/eal.py                   |   5 +-
 dts/framework/remote_session/dpdk_shell.py    |   1 -
 dts/framework/remote_session/python_shell.py  |   1 +
 .../single_active_interactive_shell.py        |   3 +-
 dts/framework/runner.py                       |  14 +-
 dts/framework/settings.py                     |   3 +
 dts/framework/test_suite.py                   |   6 +-
 dts/framework/testbed_model/capability.py     |  13 +-
 dts/framework/testbed_model/cpu.py            |  21 +-
 dts/framework/testbed_model/linux_session.py  |   6 +-
 dts/framework/testbed_model/node.py           |   3 +
 dts/framework/testbed_model/os_session.py     |   3 +-
 dts/framework/testbed_model/port.py           |   1 -
 dts/framework/testbed_model/posix_session.py  |  16 +-
 dts/framework/testbed_model/sut_node.py       |   2 +-
 dts/framework/testbed_model/topology.py       |   6 +
 .../traffic_generator/__init__.py             |   3 +
 .../testbed_model/traffic_generator/scapy.py  |   7 +-
 .../traffic_generator/traffic_generator.py    |   3 +-
 dts/framework/utils.py                        |   6 +-
 dts/poetry.lock                               | 197 +++---------------
 dts/pyproject.toml                            |  40 ++--
 dts/tests/TestSuite_vlan.py                   |  22 +-
 25 files changed, 179 insertions(+), 259 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 1/7] dts: add Ruff as linter and formatter
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 2/7] dts: enable Ruff preview pydoclint rules Luca Vizzarro
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

To improve and streamline the development process, Ruff presents itself
as a very fast all-in-one linter that is able to apply fixes, combined
with a formatter compatible with Black. Ruff implements all the rules
that DTS currently uses and expands on them, leaving space to easily
enable more checks in the future.
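
As a minimal sketch of the kind of rewrite involved (a hypothetical
module, not taken from DTS), the isort-style import sorting and the
Black-compatible formatting that Ruff provides would turn

    import sys
    import os
    def summary_of(packets):
        return ", ".join(p.summary( )  for p in packets)

into

    import os
    import sys


    def summary_of(packets):
        return ", ".join(p.summary() for p in packets)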

Bugzilla ID: 1358

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/poetry.lock    | 29 ++++++++++++++++++++++++++++-
 dts/pyproject.toml | 20 ++++++++++++++++++++
 2 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index ee564676b4..aa821f0101 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1073,6 +1073,33 @@ urllib3 = ">=1.21.1,<3"
 socks = ["PySocks (>=1.5.6,!=1.5.7)"]
 use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
 
+[[package]]
+name = "ruff"
+version = "0.8.1"
+description = "An extremely fast Python linter and code formatter, written in Rust."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "ruff-0.8.1-py3-none-linux_armv6l.whl", hash = "sha256:fae0805bd514066f20309f6742f6ee7904a773eb9e6c17c45d6b1600ca65c9b5"},
+    {file = "ruff-0.8.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8a4f7385c2285c30f34b200ca5511fcc865f17578383db154e098150ce0a087"},
+    {file = "ruff-0.8.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:cd054486da0c53e41e0086e1730eb77d1f698154f910e0cd9e0d64274979a209"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2029b8c22da147c50ae577e621a5bfbc5d1fed75d86af53643d7a7aee1d23871"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2666520828dee7dfc7e47ee4ea0d928f40de72056d929a7c5292d95071d881d1"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:333c57013ef8c97a53892aa56042831c372e0bb1785ab7026187b7abd0135ad5"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:288326162804f34088ac007139488dcb43de590a5ccfec3166396530b58fb89d"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b12c39b9448632284561cbf4191aa1b005882acbc81900ffa9f9f471c8ff7e26"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:364e6674450cbac8e998f7b30639040c99d81dfb5bbc6dfad69bc7a8f916b3d1"},
+    {file = "ruff-0.8.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b22346f845fec132aa39cd29acb94451d030c10874408dbf776af3aaeb53284c"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:b2f2f7a7e7648a2bfe6ead4e0a16745db956da0e3a231ad443d2a66a105c04fa"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:adf314fc458374c25c5c4a4a9270c3e8a6a807b1bec018cfa2813d6546215540"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a885d68342a231b5ba4d30b8c6e1b1ee3a65cf37e3d29b3c74069cdf1ee1e3c9"},
+    {file = "ruff-0.8.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:d2c16e3508c8cc73e96aa5127d0df8913d2290098f776416a4b157657bee44c5"},
+    {file = "ruff-0.8.1-py3-none-win32.whl", hash = "sha256:93335cd7c0eaedb44882d75a7acb7df4b77cd7cd0d2255c93b28791716e81790"},
+    {file = "ruff-0.8.1-py3-none-win_amd64.whl", hash = "sha256:2954cdbe8dfd8ab359d4a30cd971b589d335a44d444b6ca2cb3d1da21b75e4b6"},
+    {file = "ruff-0.8.1-py3-none-win_arm64.whl", hash = "sha256:55873cc1a473e5ac129d15eccb3c008c096b94809d693fc7053f588b67822737"},
+    {file = "ruff-0.8.1.tar.gz", hash = "sha256:3583db9a6450364ed5ca3f3b4225958b24f78178908d5c4bc0f46251ccca898f"},
+]
+
 [[package]]
 name = "scapy"
 version = "2.5.0"
@@ -1361,4 +1388,4 @@ zstd = ["zstandard (>=0.18.0)"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "fe9a9fdf7b43e8dce2fb5ee600921d4047fef2f4037a78bbd150f71df202493e"
+content-hash = "5f9b61492d95b09c717325396e981bb526fac9b0c16869f1aebc3a57b7b80e49"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index f69c70877a..3436d82116 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -36,6 +36,7 @@ isort = "^5.10.1"
 pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
+ruff = "^0.8.1"
 
 [tool.poetry.group.docs]
 optional = true
@@ -50,6 +51,25 @@ autodoc-pydantic = "^2.2.0"
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
+[tool.ruff]
+target-version = "py310"
+line-length = 100
+
+[tool.ruff.format]
+docstring-code-format = true
+
+[tool.ruff.lint]
+select = [
+    "F",      # pyflakes
+    "E", "W", # pycodestyle
+    "D",      # pydocstyle
+    "C90",    # mccabe
+    "I",      # isort
+]
+
+[tool.ruff.lint.pydocstyle]
+convention = "google"
+
 [tool.pylama]
 linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 2/7] dts: enable Ruff preview pydoclint rules
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 1/7] dts: add Ruff as linter and formatter Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 3/7] dts: resolve docstring linter errors Luca Vizzarro
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

DTS requires a linter for docstrings, but the current selection is
limited. The most promising docstring linter is pydoclint. Ruff,
meanwhile, is in the process of implementing pydoclint's rules, which
would spare the project from supporting yet another linter without any
loss of benefit.

This commit enables a selection of pydoclint rules in Ruff which,
while still in preview, are already capable of aiding the process.

DOC201 was omitted because it currently does not support one-line
docstrings and tries to enforce full ones even when they are not
needed.
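
A minimal sketch of the false positive (a hypothetical method, not
taken from DTS):

    def is_alive(self) -> bool:
        """Check whether the session is still responding."""
        return self._transport.is_active()

DOC201 would insist on a multi-line docstring with a Returns: section
here, even though the one-liner is complete on its own.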

DOC502 was omitted because it complains about exceptions that are
documented but not raised in the body of the function. While treating
documented-but-seemingly-unraised exceptions as an error is a sound
idea, the rule currently doesn't work well with inherited class
methods whose parent implementations do raise those exceptions.
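
A minimal sketch of both rules (hypothetical functions, not taken
from DTS):

    def parse_count(value: str) -> int:
        """Parse `value` into a packet count.

        Args:
            value: The string to parse.
        """
        # DOC501: ValueError is raised here but is missing from a
        # Raises: section in the docstring.
        raise ValueError(f"cannot parse {value}")

    def close(self) -> None:
        """Close the session.

        Raises:
            OSError: If the underlying transport fails to shut down.
        """
        # DOC502 flags the documented OSError because nothing in this
        # body raises it, even when an inherited implementation does.
        self._finalize()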

Bugzilla ID: 1455

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/pyproject.toml | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 3436d82116..2658a3d22c 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -65,7 +65,11 @@ select = [
     "D",      # pydocstyle
     "C90",    # mccabe
     "I",      # isort
+    # pydoclint
+    "DOC202", "DOC402", "DOC403", "DOC501"
 ]
+preview = true # enable to get early access to pydoclint rules
+explicit-preview-rules = true # enable ONLY the explicitly selected preview rules
 
 [tool.ruff.lint.pydocstyle]
 convention = "google"
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 3/7] dts: resolve docstring linter errors
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 1/7] dts: add Ruff as linter and formatter Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 2/7] dts: enable Ruff preview pydoclint rules Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 4/7] dts: apply Ruff formatting Luca Vizzarro
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

The addition of the Ruff pydocstyle and pydoclint rules has surfaced
new problems in the docstrings which need to be fixed.
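
A representative fix, condensed from the first hunk below (the
function is a hypothetical stand-in), documents the previously
missing catch-all parameter:

    def start_shell(node, name=None, **kwargs):
        """Start an interactive shell on `node`.

        Args:
            node: The node to run the shell on.
            name: Name for the interactive shell to use for logging.
            **kwargs: Any additional arguments if any.
        """
        ...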

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 .../single_active_interactive_shell.py        | 11 +++-
 dts/framework/runner.py                       | 48 ++++++++++----
 dts/framework/settings.py                     |  6 +-
 dts/framework/test_suite.py                   | 43 ++++++++++---
 dts/framework/testbed_model/capability.py     | 33 ++++++++--
 dts/framework/testbed_model/cpu.py            | 33 ++++++++--
 dts/framework/testbed_model/linux_session.py  | 42 +++++++++---
 dts/framework/testbed_model/node.py           |  7 +-
 dts/framework/testbed_model/os_session.py     | 14 ++--
 dts/framework/testbed_model/posix_session.py  | 64 ++++++++++++++-----
 dts/framework/testbed_model/sut_node.py       | 49 ++++++++++----
 dts/framework/testbed_model/topology.py       | 10 ++-
 .../traffic_generator/__init__.py             | 11 +++-
 .../testbed_model/traffic_generator/scapy.py  | 10 ++-
 .../traffic_generator/traffic_generator.py    |  3 +-
 dts/framework/utils.py                        | 38 ++++++++---
 dts/tests/TestSuite_vlan.py                   | 30 ++++++---
 17 files changed, 347 insertions(+), 105 deletions(-)

diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index e3f6424e97..a53e8fc6e1 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -110,6 +110,7 @@ def __init__(
             app_params: The command line parameters to be passed to the application on startup.
             name: Name for the interactive shell to use for logging. This name will be appended to
                 the name of the underlying node which it is running on.
+            **kwargs: Any additional arguments if any.
         """
         self._node = node
         if name is None:
@@ -120,10 +121,12 @@ def __init__(
         self._timeout = timeout
         # Ensure path is properly formatted for the host
         self._update_real_path(self.path)
-        super().__init__(node, **kwargs)
+        super().__init__()
 
     def _setup_ssh_channel(self):
-        self._ssh_channel = self._node.main_session.interactive_session.session.invoke_shell()
+        self._ssh_channel = (
+            self._node.main_session.interactive_session.session.invoke_shell()
+        )
         self._stdin = self._ssh_channel.makefile_stdin("w")
         self._stdout = self._ssh_channel.makefile("r")
         self._ssh_channel.settimeout(self._timeout)
@@ -133,7 +136,9 @@ def _make_start_command(self) -> str:
         """Makes the command that starts the interactive shell."""
         start_command = f"{self._real_path} {self._app_params or ''}"
         if self._privileged:
-            start_command = self._node.main_session._get_privileged_command(start_command)
+            start_command = self._node.main_session._get_privileged_command(
+                start_command
+            )
         return start_command
 
     def _start_application(self) -> None:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index f91c462ce5..d228ed1b18 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -136,17 +136,25 @@ def run(self) -> None:
 
             # for all test run sections
             for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
-                test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
+                test_run_config, sut_node_config, tg_node_config = (
+                    test_run_with_nodes_config
+                )
                 self._logger.set_stage(DtsStage.test_run_setup)
-                self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
+                self._logger.info(
+                    f"Running test run with SUT '{sut_node_config.name}'."
+                )
                 self._init_random_seed(test_run_config)
                 test_run_result = self._result.add_test_run(test_run_config)
                 # we don't want to modify the original config, so create a copy
                 test_run_test_suites = list(
-                    SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites
+                    SETTINGS.test_suites
+                    if SETTINGS.test_suites
+                    else test_run_config.test_suites
                 )
                 if not test_run_config.skip_smoke_tests:
-                    test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
+                    test_run_test_suites[:0] = [
+                        TestSuiteConfig(test_suite="smoke_tests")
+                    ]
                 try:
                     test_suites_with_cases = self._get_test_suites_with_cases(
                         test_run_test_suites, test_run_config.func, test_run_config.perf
@@ -154,7 +162,8 @@ def run(self) -> None:
                     test_run_result.test_suites_with_cases = test_suites_with_cases
                 except Exception as e:
                     self._logger.exception(
-                        f"Invalid test suite configuration found: " f"{test_run_test_suites}."
+                        f"Invalid test suite configuration found: "
+                        f"{test_run_test_suites}."
                     )
                     test_run_result.update_setup(Result.FAIL, e)
 
@@ -236,7 +245,9 @@ def _get_test_suites_with_cases(
                 test_cases.extend(perf_test_cases)
 
             test_suites_with_cases.append(
-                TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
+                TestSuiteWithCases(
+                    test_suite_class=test_suite_class, test_cases=test_cases
+                )
             )
         return test_suites_with_cases
 
@@ -285,7 +296,11 @@ def _connect_nodes_and_run_test_run(
 
         else:
             self._run_test_run(
-                sut_node, tg_node, test_run_config, test_run_result, test_suites_with_cases
+                sut_node,
+                tg_node,
+                test_run_config,
+                test_run_result,
+                test_suites_with_cases,
             )
 
     def _run_test_run(
@@ -324,7 +339,8 @@ def _run_test_run(
                 )
             if dir := SETTINGS.precompiled_build_dir:
                 dpdk_build_config = DPDKPrecompiledBuildConfiguration(
-                    dpdk_location=dpdk_build_config.dpdk_location, precompiled_build_dir=dir
+                    dpdk_location=dpdk_build_config.dpdk_location,
+                    precompiled_build_dir=dir,
                 )
             sut_node.set_up_test_run(test_run_config, dpdk_build_config)
             test_run_result.dpdk_build_info = sut_node.get_dpdk_build_info()
@@ -335,7 +351,9 @@ def _run_test_run(
             test_run_result.update_setup(Result.FAIL, e)
 
         else:
-            self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
+            self._run_test_suites(
+                sut_node, tg_node, test_run_result, test_suites_with_cases
+            )
 
         finally:
             try:
@@ -360,7 +378,9 @@ def _get_supported_capabilities(
 
         self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
 
-        return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
+        return get_supported_capabilities(
+            sut_node, topology_config, capabilities_to_check
+        )
 
     def _run_test_suites(
         self,
@@ -443,6 +463,7 @@ def _run_test_suite(
         Args:
             sut_node: The test run's SUT node.
             tg_node: The test run's TG node.
+            topology: The port topology of the nodes.
             test_suite_result: The test suite level result object associated
                 with the current test suite.
             test_suite_with_cases: The test suite with test cases to run.
@@ -585,6 +606,9 @@ def _execute_test_case(
             test_case: The test case function.
             test_case_result: The test case level result object associated
                 with the current test case.
+
+        Raises:
+            KeyboardInterrupt: If DTS has been interrupted by the user.
         """
         test_case_name = test_case.__name__
         try:
@@ -601,7 +625,9 @@ def _execute_test_case(
             self._logger.exception(f"Test case execution ERROR: {test_case_name}")
             test_case_result.update(Result.ERROR, e)
         except KeyboardInterrupt:
-            self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
+            self._logger.error(
+                f"Test case execution INTERRUPTED by user: {test_case_name}"
+            )
             test_case_result.update(Result.SKIP)
             raise KeyboardInterrupt("Stop DTS")
 
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 5a8e6e5aee..91f317105a 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -257,7 +257,9 @@ def _get_help_string(self, action):
         return help
 
 
-def _required_with_one_of(parser: _DTSArgumentParser, action: Action, *required_dests: str) -> None:
+def _required_with_one_of(
+    parser: _DTSArgumentParser, action: Action, *required_dests: str
+) -> None:
     """Verify that `action` is listed together with at least one of `required_dests`.
 
     Verify that when `action` is among the command-line arguments or
@@ -461,6 +463,7 @@ def _process_dpdk_location(
     any valid :class:`DPDKLocation` with the provided parameters if validation is successful.
 
     Args:
+        parser: The instance of the arguments parser.
         dpdk_tree: The path to the DPDK source tree directory.
         tarball: The path to the DPDK tarball.
         remote: If :data:`True`, `dpdk_tree` or `tarball` is located on the SUT node, instead of the
@@ -512,6 +515,7 @@ def _process_test_suites(
     """Process the given argument to a list of :class:`TestSuiteConfig` to execute.
 
     Args:
+        parser: The instance of the arguments parser.
         args: The arguments to process. The args is a string from an environment variable
               or a list of from the user input containing tests suites with tests cases,
               each of which is a list of [test_suite, test_case, test_case, ...].
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index adee01f031..fd6706289e 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -300,7 +300,9 @@ def get_expected_packet(self, packet: Packet) -> Packet:
         """
         return self.get_expected_packets([packet])[0]
 
-    def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> list[Packet]:
+    def _adjust_addresses(
+        self, packets: list[Packet], expected: bool = False
+    ) -> list[Packet]:
         """L2 and L3 address additions in both directions.
 
         Copies of `packets` will be made, modified and returned in this method.
@@ -378,15 +380,21 @@ def verify(self, condition: bool, failure_description: str) -> None:
             self._fail_test_case_verify(failure_description)
 
     def _fail_test_case_verify(self, failure_description: str) -> None:
-        self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
+        self._logger.debug(
+            "A test case failed, showing the last 10 commands executed on SUT:"
+        )
         for command_res in self.sut_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
-        self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
+        self._logger.debug(
+            "A test case failed, showing the last 10 commands executed on TG:"
+        )
         for command_res in self.tg_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
         raise TestCaseVerifyError(failure_description)
 
-    def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
+    def verify_packets(
+        self, expected_packet: Packet, received_packets: list[Packet]
+    ) -> None:
         """Verify that `expected_packet` has been received.
 
         Go through `received_packets` and check that `expected_packet` is among them.
@@ -408,7 +416,9 @@ def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]
                 f"The expected packet {get_packet_summaries(expected_packet)} "
                 f"not found among received {get_packet_summaries(received_packets)}"
             )
-            self._fail_test_case_verify("An expected packet not found among received packets.")
+            self._fail_test_case_verify(
+                "An expected packet not found among received packets."
+            )
 
     def match_all_packets(
         self, expected_packets: list[Packet], received_packets: list[Packet]
@@ -444,7 +454,9 @@ def match_all_packets(
                 f"but {missing_packets_count} were missing."
             )
 
-    def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
+    def _compare_packets(
+        self, expected_packet: Packet, received_packet: Packet
+    ) -> bool:
         self._logger.debug(
             f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
         )
@@ -473,10 +485,14 @@ def _compare_packets(self, expected_packet: Packet, received_packet: Packet) ->
             expected_payload = expected_payload.payload
 
         if expected_payload:
-            self._logger.debug(f"The expected packet did not contain {expected_payload}.")
+            self._logger.debug(
+                f"The expected packet did not contain {expected_payload}."
+            )
             return False
         if received_payload and received_payload.__class__ != Padding:
-            self._logger.debug("The received payload had extra layers which were not padding.")
+            self._logger.debug(
+                "The received payload had extra layers which were not padding."
+            )
             return False
         return True
 
@@ -503,7 +519,10 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
 
     def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         self._logger.debug("Looking at the IP layer.")
-        if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
+        if (
+            received_packet.src != expected_packet.src
+            or received_packet.dst != expected_packet.dst
+        ):
             return False
         return True
 
@@ -615,7 +634,11 @@ def class_name(self) -> str:
 
     @cached_property
     def class_obj(self) -> type[TestSuite]:
-        """A reference to the test suite's class."""
+        """A reference to the test suite's class.
+
+        Raises:
+            InternalError: If the test suite class is missing from the module.
+        """
 
         def is_test_suite(obj) -> bool:
             """Check whether `obj` is a :class:`TestSuite`.
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 0d5f0e0b32..6e06c75c3d 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -130,7 +130,9 @@ def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
 
     @classmethod
     @abstractmethod
-    def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
+    def get_supported_capabilities(
+        cls, sut_node: SutNode, topology: "Topology"
+    ) -> set[Self]:
         """Get the support status of each registered capability.
 
         Each subclass must implement this method and return the subset of supported capabilities
@@ -224,7 +226,10 @@ def get_supported_capabilities(
             with TestPmdShell(
                 sut_node, privileged=True, disable_device_start=True
             ) as testpmd_shell:
-                for conditional_capability_fn, capabilities in capabilities_to_check_map.items():
+                for (
+                    conditional_capability_fn,
+                    capabilities,
+                ) in capabilities_to_check_map.items():
                     supported_capabilities: set[NicCapability] = set()
                     unsupported_capabilities: set[NicCapability] = set()
                     capability_fn = cls._reduce_capabilities(
@@ -237,14 +242,18 @@ def get_supported_capabilities(
                         if capability.nic_capability in supported_capabilities:
                             supported_conditional_capabilities.add(capability)
 
-        logger.debug(f"Found supported capabilities {supported_conditional_capabilities}.")
+        logger.debug(
+            f"Found supported capabilities {supported_conditional_capabilities}."
+        )
         return supported_conditional_capabilities
 
     @classmethod
     def _get_decorated_capabilities_map(
         cls,
     ) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
-        capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
+        capabilities_map: dict[
+            TestPmdShellDecorator | None, set["DecoratedNicCapability"]
+        ] = {}
         for capability in cls.capabilities_to_check:
             if capability.capability_decorator not in capabilities_map:
                 capabilities_map[capability.capability_decorator] = set()
@@ -307,7 +316,9 @@ class TopologyCapability(Capability):
     _unique_capabilities: ClassVar[dict[str, Self]] = {}
 
     def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
-        test_case_or_suite.required_capabilities.discard(test_case_or_suite.topology_type)
+        test_case_or_suite.required_capabilities.discard(
+            test_case_or_suite.topology_type
+        )
         test_case_or_suite.topology_type = self
 
     @classmethod
@@ -353,6 +364,10 @@ def set_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
         At that point, the test case topologies have been set by the :func:`requires` decorator.
         The test suite topology only affects the test case topologies
         if not :attr:`~.topology.TopologyType.default`.
+
+        Raises:
+            ConfigurationError: If the topology type requested by the test case is more complex than
+                the test suite's.
         """
         if inspect.isclass(test_case_or_suite):
             if self.topology_type is not TopologyType.default:
@@ -443,7 +458,9 @@ class TestProtocol(Protocol):
     #: The reason for skipping the test case or suite.
     skip_reason: ClassVar[str] = ""
     #: The topology type of the test case or suite.
-    topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
+    topology_type: ClassVar[TopologyCapability] = TopologyCapability(
+        TopologyType.default
+    )
     #: The capabilities the test case or suite requires in order to be executed.
     required_capabilities: ClassVar[set[Capability]] = set()
 
@@ -471,7 +488,9 @@ def requires(
         The decorated test case or test suite.
     """
 
-    def add_required_capability(test_case_or_suite: type[TestProtocol]) -> type[TestProtocol]:
+    def add_required_capability(
+        test_case_or_suite: type[TestProtocol],
+    ) -> type[TestProtocol]:
         for nic_capability in nic_capabilities:
             decorated_nic_capability = DecoratedNicCapability.get_unique(nic_capability)
             decorated_nic_capability.add_to_required(test_case_or_suite)
diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index a50cf44c19..0746878770 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -87,7 +87,9 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         # the input lcores may not be sorted
         self._lcore_list.sort()
-        self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+        self._lcore_str = (
+            f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
+        )
 
     @property
     def lcore_list(self) -> list[int]:
@@ -102,11 +104,15 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
                 segment.append(lcore_id)
             else:
                 formatted_core_list.append(
-                    f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
+                    f"{segment[0]}-{segment[-1]}"
+                    if len(segment) > 1
+                    else f"{segment[0]}"
                 )
                 current_core_index = lcore_ids_list.index(lcore_id)
                 formatted_core_list.extend(
-                    self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
+                    self._get_consecutive_lcores_range(
+                        lcore_ids_list[current_core_index:]
+                    )
                 )
                 segment.clear()
                 break
@@ -166,7 +172,9 @@ def __init__(
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
-        self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
+        self._lcores_to_filter = sorted(
+            lcore_list, key=lambda x: x.core, reverse=not ascending
+        )
         self.filter()
 
     @abstractmethod
@@ -231,6 +239,9 @@ def _filter_sockets(
 
         Returns:
             A list of lists of logical CPU cores. Each list contains cores from one socket.
+
+        Raises:
+            ValueError: If the number of the requested sockets by the filter can't be satisfied.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
@@ -272,6 +283,10 @@ def _filter_cores_from_socket(
 
         Returns:
             The filtered logical CPU cores.
+
+        Raises:
+            ValueError: If the number of the requested cores per socket by the filter
+                can't be satisfied.
         """
         # no need to use ordered dict, from Python3.7 the dict
         # insertion order is preserved (LIFO).
@@ -287,7 +302,9 @@ def _filter_cores_from_socket(
                 else:
                     # we have enough lcores per this core
                     continue
-            elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
+            elif self._filter_specifier.cores_per_socket > len(
+                lcore_count_per_core_map
+            ):
                 # only add cores if we need more
                 lcore_count_per_core_map[lcore.core] = 1
                 filtered_lcores.append(lcore)
@@ -327,6 +344,9 @@ def filter(self) -> list[LogicalCore]:
 
         Return:
             The filtered logical CPU cores.
+
+        Raises:
+            ValueError: If the specified lcore filter specifier is invalid.
         """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
@@ -360,6 +380,9 @@ def lcore_filter(
 
     Returns:
         The filter that corresponds to `filter_specifier`.
+
+    Raises:
+        ValueError: If the supplied `filter_specifier` is invalid.
     """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index f87efb8f18..b316f23b4e 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -83,8 +83,14 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
-        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
+    def setup_hugepages(
+        self, number_of: int, hugepage_size: int, force_first_numa: bool
+    ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.
+
+        Raises:
+            ConfigurationError: If the given `hugepage_size` is not supported by the OS.
+        """
         self._logger.info("Getting Hugepage information.")
         hugepages_total = self._get_hugepages_total(hugepage_size)
         if (
@@ -127,7 +133,9 @@ def _mount_huge_pages(self) -> None:
         if result.stdout == "":
             remote_mount_path = "/mnt/huge"
             self.send_command(f"mkdir -p {remote_mount_path}", privileged=True)
-            self.send_command(f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True)
+            self.send_command(
+                f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True
+            )
 
     def _supports_numa(self) -> bool:
         # the system supports numa if self._numa_nodes is non-empty and there are more
@@ -135,9 +143,13 @@ def _supports_numa(self) -> bool:
         # there's no reason to do any numa specific configuration)
         return len(self._numa_nodes) > 1
 
-    def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: bool) -> None:
+    def _configure_huge_pages(
+        self, number_of: int, size: int, force_first_numa: bool
+    ) -> None:
         self._logger.info("Configuring Hugepages.")
-        hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+        hugepage_config_path = (
+            f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
+        )
         if force_first_numa and self._supports_numa():
             # clear non-numa hugepages
             self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -146,19 +158,25 @@ def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: boo
                 f"/hugepages-{size}kB/nr_hugepages"
             )
 
-        self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
+        self.send_command(
+            f"echo {number_of} | tee {hugepage_config_path}", privileged=True
+        )
 
     def update_ports(self, ports: list[Port]) -> None:
         """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
-            assert port.node == self.name, "Attempted to gather port info on the wrong node"
+            assert (
+                port.node == self.name
+            ), "Attempted to gather port info on the wrong node"
 
         port_info_list = self._get_lshw_info()
         for port in ports:
             for port_info in port_info_list:
                 if f"pci@{port.pci}" == port_info.get("businfo"):
-                    self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
+                    self._update_port_attr(
+                        port, port_info.get("logicalname"), "logical_name"
+                    )
                     self._update_port_attr(port, port_info.get("serial"), "mac_address")
                     port_info_list.remove(port_info)
                     break
@@ -169,10 +187,14 @@ def _get_lshw_info(self) -> list[LshwOutput]:
         output = self.send_command("lshw -quiet -json -C network", verify=True)
         return json.loads(output.stdout)
 
-    def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
+    def _update_port_attr(
+        self, port: Port, attr_value: str | None, attr_name: str
+    ) -> None:
         if attr_value:
             setattr(port, attr_name, attr_value)
-            self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
+            self._logger.debug(
+                f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
+            )
         else:
             self._logger.warning(
                 f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index c1844ecd5d..e8021a4afe 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -198,13 +198,18 @@ def close(self) -> None:
             session.close()
 
 
-def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession:
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLogger
+) -> OSSession:
     """Factory for OS-aware sessions.
 
     Args:
         node_config: The test run configuration of the node to connect to.
         name: The name of the session.
         logger: The logger instance this session will use.
+
+    Raises:
+        ConfigurationError: If the node's OS is unsupported.
     """
     match node_config.os:
         case OS.linux:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 62add7a4df..1b2885be5d 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -195,7 +195,9 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         """
 
     @abstractmethod
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
+    def copy_from(
+        self, source_file: str | PurePath, destination_dir: str | Path
+    ) -> None:
         """Copy a file from the remote node to the local filesystem.
 
         Copy `source_file` from the remote node associated with this remote
@@ -301,7 +303,9 @@ def copy_dir_to(
         """
 
     @abstractmethod
-    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
+    def remove_remote_file(
+        self, remote_file_path: str | PurePath, force: bool = True
+    ) -> None:
         """Remove remote file, by default remove forcefully.
 
         Args:
@@ -366,7 +370,7 @@ def is_remote_dir(self, remote_path: PurePath) -> bool:
         """Check if the `remote_path` is a directory.
 
         Args:
-            remote_tarball_path: The path to the remote tarball.
+            remote_path: The remote path to check.
 
         Returns:
             If :data:`True` the `remote_path` is a directory, otherwise :data:`False`.
@@ -475,7 +479,9 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
 
     @abstractmethod
-    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
+    def setup_hugepages(
+        self, number_of: int, hugepage_size: int, force_first_numa: bool
+    ) -> None:
         """Configure hugepages on the node.
 
         Get the node's Hugepage Size, configure the specified count of hugepages
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index c0cca2ac50..f707b6e17b 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -96,7 +96,9 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         result = self.send_command(f"test -e {remote_path}")
         return not result.return_code
 
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
+    def copy_from(
+        self, source_file: str | PurePath, destination_dir: str | Path
+    ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_dir)
 
@@ -113,12 +115,16 @@ def copy_dir_from(
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_dir_from`."""
         source_dir = PurePath(source_dir)
-        remote_tarball_path = self.create_remote_tarball(source_dir, compress_format, exclude)
+        remote_tarball_path = self.create_remote_tarball(
+            source_dir, compress_format, exclude
+        )
 
         self.copy_from(remote_tarball_path, destination_dir)
         self.remove_remote_file(remote_tarball_path)
 
-        tarball_path = Path(destination_dir, f"{source_dir.name}.{compress_format.extension}")
+        tarball_path = Path(
+            destination_dir, f"{source_dir.name}.{compress_format.extension}"
+        )
         extract_tarball(tarball_path)
         tarball_path.unlink()
 
@@ -141,7 +147,9 @@ def copy_dir_to(
         self.extract_remote_tarball(remote_tar_path)
         self.remove_remote_file(remote_tar_path)
 
-    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
+    def remove_remote_file(
+        self, remote_file_path: str | PurePath, force: bool = True
+    ) -> None:
         """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(f=force)
         self.send_command(f"rm{opts} {remote_file_path}")
@@ -176,11 +184,15 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
             """
             if exclude_patterns:
                 exclude_patterns = convert_to_list_of_string(exclude_patterns)
-                return "".join([f" --exclude={pattern}" for pattern in exclude_patterns])
+                return "".join(
+                    [f" --exclude={pattern}" for pattern in exclude_patterns]
+                )
             return ""
 
         posix_remote_dir_path = PurePosixPath(remote_dir_path)
-        target_tarball_path = PurePosixPath(f"{remote_dir_path}.{compress_format.extension}")
+        target_tarball_path = PurePosixPath(
+            f"{remote_dir_path}.{compress_format.extension}"
+        )
 
         self.send_command(
             f"tar caf {target_tarball_path}{generate_tar_exclude_args(exclude)} "
@@ -191,7 +203,9 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
         return target_tarball_path
 
     def extract_remote_tarball(
-        self, remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None
+        self,
+        remote_tarball_path: str | PurePath,
+        expected_dir: str | PurePath | None = None,
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
@@ -236,7 +250,11 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`.
+
+        Raises:
+            DPDKBuildError: If the DPDK build failed.
+        """
         try:
             if rebuild:
                 # reconfigure, then build
@@ -267,7 +285,9 @@ def build_dpdk(
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
-        out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
+        out = self.send_command(
+            f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
+        )
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -282,7 +302,9 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._check_dpdk_hugepages(dpdk_runtime_dirs)
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
-    def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
+    def _get_dpdk_runtime_dirs(
+        self, dpdk_prefix_list: Iterable[str]
+    ) -> list[PurePosixPath]:
         """Find runtime directories DPDK apps are currently using.
 
         Args:
@@ -310,7 +332,9 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         Returns:
             The contents of remote_path. If remote_path doesn't exist, return None.
         """
-        out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
+        out = self.send_command(
+            f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
+        ).stdout
         if "No such file or directory" in out:
             return None
         else:
@@ -341,7 +365,9 @@ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[in
                             pids.append(int(match.group(1)))
         return pids
 
-    def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+    def _check_dpdk_hugepages(
+        self, dpdk_runtime_dirs: Iterable[str | PurePath]
+    ) -> None:
         """Check there aren't any leftover hugepages.
 
         If any hugepages are found, emit a warning. The hugepages are investigated in the
@@ -360,7 +386,9 @@ def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) ->
                     self._logger.warning(out)
                     self._logger.warning("*******************************************")
 
-    def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+    def _remove_dpdk_runtime_dirs(
+        self, dpdk_runtime_dirs: Iterable[str | PurePath]
+    ) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
@@ -369,7 +397,11 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
-        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.
+
+        Raises:
+            ValueError: If the given `compiler_name` is invalid.
+        """
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -393,4 +425,6 @@ def get_node_info(self) -> OSSessionInfo:
             SETTINGS.timeout,
         ).stdout.split("\n")
         kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
-        return OSSessionInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
+        return OSSessionInfo(
+            os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
+        )
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 14d77c50a6..6adcff01c2 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -139,7 +139,9 @@ def remote_dpdk_build_dir(self) -> str | PurePath:
     def dpdk_version(self) -> str | None:
         """Last built DPDK version."""
         if self._dpdk_version is None:
-            self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
+            self._dpdk_version = self.main_session.get_dpdk_version(
+                self._remote_dpdk_tree_path
+            )
         return self._dpdk_version
 
     @property
@@ -153,7 +155,9 @@ def node_info(self) -> OSSessionInfo:
     def compiler_version(self) -> str | None:
         """The node's compiler version."""
         if self._compiler_version is None:
-            self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
+            self._logger.warning(
+                "The `compiler_version` is None because a pre-built DPDK is used."
+            )
 
         return self._compiler_version
 
@@ -181,7 +185,9 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
         Returns:
             The DPDK build information.
         """
-        return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
+        return DPDKBuildInfo(
+            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
+        )
 
     def set_up_test_run(
         self,
@@ -264,13 +270,16 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
         Raises:
             RemoteFileNotFoundError: If the DPDK source tree is expected to be on the SUT node but
                 is not found.
+            ConfigurationError: If the remote DPDK source tree specified is not a valid directory.
         """
         if not self.main_session.remote_path_exists(dpdk_tree):
             raise RemoteFileNotFoundError(
                 f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
             )
         if not self.main_session.is_remote_dir(dpdk_tree):
-            raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
+            raise ConfigurationError(
+                f"Remote DPDK source tree '{dpdk_tree}' must be a directory."
+            )
 
         self.__remote_dpdk_tree_path = dpdk_tree
 
@@ -306,9 +315,13 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
             ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
         """
         if not self.main_session.remote_path_exists(dpdk_tarball):
-            raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
+            raise RemoteFileNotFoundError(
+                f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT."
+            )
         if not self.main_session.is_remote_tarfile(dpdk_tarball):
-            raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
+            raise ConfigurationError(
+                f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive."
+            )
 
     def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
         """Copy the local DPDK tarball to the SUT node.
@@ -323,7 +336,9 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
             f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
         )
         self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
-        return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
+        return self.main_session.join_remote_path(
+            self._remote_tmp_dir, dpdk_tarball.name
+        )
 
     def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
         """Prepare the remote DPDK tree path and extract the tarball.
@@ -347,7 +362,9 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
             if len(remote_tarball_path.suffixes) > 1:
                 if remote_tarball_path.suffixes[-2] == ".tar":
                     suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
-                    return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
+                    return PurePath(
+                        str(remote_tarball_path).replace(suffixes_to_remove, "")
+                    )
             return remote_tarball_path.with_suffix("")
 
         tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
@@ -390,7 +407,9 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
 
         self._remote_dpdk_build_dir = PurePath(remote_dpdk_build_dir)
 
-    def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration) -> None:
+    def _configure_dpdk_build(
+        self, dpdk_build_config: DPDKBuildOptionsConfiguration
+    ) -> None:
         """Populate common environment variables and set the DPDK build related properties.
 
         This method sets `compiler_version` for additional information and `remote_dpdk_build_dir`
@@ -400,9 +419,13 @@ def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration
             dpdk_build_config: A DPDK build configuration to test.
         """
         self._env_vars = {}
-        self._env_vars.update(self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch))
+        self._env_vars.update(
+            self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch)
+        )
         if compiler_wrapper := dpdk_build_config.compiler_wrapper:
-            self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
+            self._env_vars["CC"] = (
+                f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
+            )
         else:
             self._env_vars["CC"] = dpdk_build_config.compiler.name
 
@@ -453,7 +476,9 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
         if app_name == "all":
-            return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
+            return self.main_session.join_remote_path(
+                self.remote_dpdk_build_dir, "examples"
+            )
         return self.main_session.join_remote_path(
             self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
         )
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 3824804310..2c10aff4ef 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -43,6 +43,12 @@ def get_from_value(cls, value: int) -> "TopologyType":
         :class:`TopologyType` is a regular :class:`~enum.Enum`.
         When getting an instance from value, we're not interested in the default,
         since we already know the value, allowing us to remove the ambiguity.
+
+        Args:
+            value: The value of the requested enum.
+
+        Raises:
+            ConfigurationError: If an unsupported link topology is supplied.
         """
         match value:
             case 0:
@@ -52,7 +58,9 @@ def get_from_value(cls, value: int) -> "TopologyType":
             case 2:
                 return TopologyType.two_links
             case _:
-                raise ConfigurationError("More than two links in a topology are not supported.")
+                raise ConfigurationError(
+                    "More than two links in a topology are not supported."
+                )
 
 
 class Topology:
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 945f6bbbbb..e7fd511a00 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -33,9 +33,16 @@ def create_traffic_generator(
 
     Returns:
         A traffic generator capable of capturing received packets.
+
+    Raises:
+        ConfigurationError: If an unknown traffic generator has been set up.
     """
     match traffic_generator_config:
         case ScapyTrafficGeneratorConfig():
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
+            return ScapyTrafficGenerator(
+                tg_node, traffic_generator_config, privileged=True
+            )
         case _:
-            raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
+            raise ConfigurationError(
+                f"Unknown traffic generator: {traffic_generator_config.type}"
+            )
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 1251ca65a0..16cc361cab 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -173,7 +173,11 @@ def _create_packet_filter(self, filter_config: PacketFilteringConfig) -> str:
         return " && ".join(bpf_filter)
 
     def _shell_create_sniffer(
-        self, packets_to_send: list[Packet], send_port: Port, recv_port: Port, filter_config: str
+        self,
+        packets_to_send: list[Packet],
+        send_port: Port,
+        recv_port: Port,
+        filter_config: str,
     ) -> None:
         """Create an asynchronous sniffer in the shell.
 
@@ -227,7 +231,9 @@ def _shell_start_and_stop_sniffing(self, duration: float) -> list[Packet]:
         self.send_command(f"{self._sniffer_name}.start()")
         # Insert a one second delay to prevent timeout errors from occurring
         time.sleep(duration + 1)
-        self.send_command(f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)")
+        self.send_command(
+            f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)"
+        )
         # An extra newline is required here due to the nature of interactive Python shells
         packet_strs = self.send_command(
             f"for pakt in {sniffed_packets_name}: print(bytes_base64(pakt.build()))\n"
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 5ac61cd4e1..42b6735646 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -42,11 +42,12 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig, **kwargs):
         Args:
             tg_node: The traffic generator node where the created traffic generator will be running.
             config: The traffic generator's test run configuration.
+            **kwargs: Any additional keyword arguments.
         """
         self._config = config
         self._tg_node = tg_node
         self._logger = get_dts_logger(f"{self._tg_node.name} {self._config.type}")
-        super().__init__(tg_node, **kwargs)
+        super().__init__()
 
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send `packet` and block until it is fully sent.
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index bc3f8d6d0f..6ff9a485ba 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -31,7 +31,9 @@
 REGEX_FOR_PCI_ADDRESS: str = r"[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}"
 _REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
 _REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
-REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
+REGEX_FOR_MAC_ADDRESS: str = (
+    rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
+)
 REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
 
 
@@ -56,7 +58,9 @@ def expand_range(range_str: str) -> list[int]:
         range_boundaries = range_str.split("-")
         # will throw an exception when items in range_boundaries can't be converted,
         # serving as type check
-        expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
+        expanded_range.extend(
+            range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
+        )
 
     return expanded_range
 
@@ -73,7 +77,9 @@ def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
-        packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
+        packet_summaries = json.dumps(
+            list(map(lambda pkt: pkt.summary(), packets)), indent=4
+        )
     return f"Packet contents: \n{packet_summaries}"
 
 
@@ -81,7 +87,9 @@ class StrEnum(Enum):
     """Enum with members stored as strings."""
 
     @staticmethod
-    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
         return name
 
     def __str__(self) -> str:
@@ -108,7 +116,9 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
 
                 meson_args = MesonArgs(enable_kmods=True).
         """
-        self._default_library = f"--default-library={default_library}" if default_library else ""
+        self._default_library = (
+            f"--default-library={default_library}" if default_library else ""
+        )
         self._dpdk_args = " ".join(
             (
                 f"-D{dpdk_arg_name}={dpdk_arg_value}"
@@ -149,7 +159,9 @@ def extension(self):
         For other compression formats, the extension will be in the format
         'tar.{compression format}'.
         """
-        return f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
+        return (
+            f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
+        )
 
 
 def convert_to_list_of_string(value: Any | list[Any]) -> list[str]:
@@ -177,7 +189,9 @@ def create_tarball(
         The path to the created tarball.
     """
 
-    def create_filter_function(exclude_patterns: str | list[str] | None) -> Callable | None:
+    def create_filter_function(
+        exclude_patterns: str | list[str] | None,
+    ) -> Callable | None:
         """Create a filter function based on the provided exclude patterns.
 
         Args:
@@ -192,7 +206,9 @@ def create_filter_function(exclude_patterns: str | list[str] | None) -> Callable
 
             def filter_func(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo | None:
                 file_name = os.path.basename(tarinfo.name)
-                if any(fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns):
+                if any(
+                    fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns
+                ):
                     return None
                 return tarinfo
 
@@ -285,7 +301,9 @@ def _make_packet() -> Packet:
             packet /= random.choice(l4_factories)(sport=src_port, dport=dst_port)
 
         max_payload_size = mtu - len(packet)
-        usable_payload_size = payload_size if payload_size < max_payload_size else max_payload_size
+        usable_payload_size = (
+            payload_size if payload_size < max_payload_size else max_payload_size
+        )
         return packet / random.randbytes(usable_payload_size)
 
     return [_make_packet() for _ in range(number_of)]
@@ -300,7 +318,7 @@ class MultiInheritanceBaseClass:
     :meth:`super.__init__` without repercussion.
     """
 
-    def __init__(self, *args, **kwargs) -> None:
+    def __init__(self) -> None:
         """Call the init method of :class:`object`."""
         super().__init__()
 
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index 7cfbd7ea00..524854ea89 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -38,7 +38,9 @@ class TestVlan(TestSuite):
     tag when insertion is enabled.
     """
 
-    def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
+    def send_vlan_packet_and_verify(
+        self, should_receive: bool, strip: bool, vlan_id: int
+    ) -> None:
         """Generate a VLAN packet, send and verify packet with same payload is received on the dut.
 
         Args:
@@ -57,12 +59,14 @@ def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id
                 break
         if should_receive:
             self.verify(
-                test_packet is not None, "Packet was dropped when it should have been received"
+                test_packet is not None,
+                "Packet was dropped when it should have been received",
             )
             if test_packet is not None:
                 if strip:
                     self.verify(
-                        not test_packet.haslayer(Dot1Q), "VLAN tag was not stripped successfully"
+                        not test_packet.haslayer(Dot1Q),
+                        "VLAN tag was not stripped successfully",
                     )
                 else:
                     self.verify(
@@ -88,11 +92,18 @@ def send_packet_and_verify_insertion(self, expected_id: int) -> None:
             if hasattr(packet, "load") and b"xxxxx" in packet.load:
                 test_packet = packet
                 break
-        self.verify(test_packet is not None, "Packet was dropped when it should have been received")
+        self.verify(
+            test_packet is not None,
+            "Packet was dropped when it should have been received",
+        )
         if test_packet is not None:
-            self.verify(test_packet.haslayer(Dot1Q), "The received packet did not have a VLAN tag")
             self.verify(
-                test_packet.vlan == expected_id, "The received tag did not match the expected tag"
+                test_packet.haslayer(Dot1Q),
+                "The received packet did not have a VLAN tag",
+            )
+            self.verify(
+                test_packet.vlan == expected_id,
+                "The received tag did not match the expected tag",
             )
 
     def vlan_setup(self, testpmd: TestPmdShell, port_id: int, filtered_id: int) -> None:
@@ -102,9 +113,6 @@ def vlan_setup(self, testpmd: TestPmdShell, port_id: int, filtered_id: int) -> N
             testpmd: Testpmd shell session to send commands to.
             port_id: Number of port to use for setup.
             filtered_id: ID to be added to the VLAN filter list.
-
-        Returns:
-            TestPmdShell: Testpmd session being configured.
         """
         testpmd.set_forward_mode(SimpleForwardingModes.mac)
         testpmd.set_promisc(port_id, False)
@@ -147,7 +155,9 @@ def test_vlan_no_receipt(self) -> None:
         with TestPmdShell(node=self.sut_node) as testpmd:
             self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
             testpmd.start()
-            self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
+            self.send_vlan_packet_and_verify(
+                should_receive=False, strip=False, vlan_id=2
+            )
 
     @func_test
     def test_vlan_header_insertion(self) -> None:
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 4/7] dts: apply Ruff formatting
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
                     ` (2 preceding siblings ...)
  2024-12-12 14:00   ` [PATCH v2 3/7] dts: resolve docstring linter errors Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 5/7] dts: update dts-check-format to use Ruff Luca Vizzarro
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

While Ruff's formatting is Black-compatible and near-identical, it
still formats a small set of elements differently, so these need to
be reformatted.
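
For example, Ruff collapses a call that fits within the configured
line length back onto a single line (taken from one of the hunks
below):

    # before: wrapped under the previous Black-based formatting
    self.send_command(
        f"echo {number_of} | tee {hugepage_config_path}", privileged=True
    )

    # after: Ruff formatting
    self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)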

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/framework/params/eal.py                   |  5 +-
 dts/framework/remote_session/dpdk_shell.py    |  1 -
 dts/framework/remote_session/python_shell.py  |  1 +
 .../single_active_interactive_shell.py        |  8 +--
 dts/framework/runner.py                       | 36 ++++----------
 dts/framework/settings.py                     |  5 +-
 dts/framework/test_suite.py                   | 37 ++++----------
 dts/framework/testbed_model/capability.py     | 20 ++------
 dts/framework/testbed_model/cpu.py            | 28 ++++-------
 dts/framework/testbed_model/linux_session.py  | 36 ++++----------
 dts/framework/testbed_model/node.py           |  4 +-
 dts/framework/testbed_model/os_session.py     | 13 ++---
 dts/framework/testbed_model/port.py           |  1 -
 dts/framework/testbed_model/posix_session.py  | 48 +++++-------------
 dts/framework/testbed_model/sut_node.py       | 49 +++++--------------
 dts/framework/testbed_model/topology.py       |  4 +-
 .../traffic_generator/__init__.py             |  8 +--
 .../testbed_model/traffic_generator/scapy.py  |  5 +-
 dts/framework/utils.py                        | 32 +++---------
 dts/tests/TestSuite_vlan.py                   |  8 +--
 20 files changed, 90 insertions(+), 259 deletions(-)

diff --git a/dts/framework/params/eal.py b/dts/framework/params/eal.py
index 71bc781eab..b90ff33dcf 100644
--- a/dts/framework/params/eal.py
+++ b/dts/framework/params/eal.py
@@ -27,10 +27,7 @@ class EalParams(Params):
         no_pci: Switch to disable PCI bus, e.g.: ``no_pci=True``.
         vdevs: Virtual devices, e.g.::
 
-            vdevs=[
-                VirtualDevice('net_ring0'),
-                VirtualDevice('net_ring1')
-            ]
+            vdevs = [VirtualDevice("net_ring0"), VirtualDevice("net_ring1")]
 
         ports: The list of ports to allow.
         other_eal_param: user defined DPDK EAL parameters, e.g.::
diff --git a/dts/framework/remote_session/dpdk_shell.py b/dts/framework/remote_session/dpdk_shell.py
index 82fa4755f0..c11d9ab81c 100644
--- a/dts/framework/remote_session/dpdk_shell.py
+++ b/dts/framework/remote_session/dpdk_shell.py
@@ -6,7 +6,6 @@
 Provides a base class to create interactive shells based on DPDK.
 """
 
-
 from abc import ABC
 from pathlib import PurePath
 
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index 953ed100df..9d4abab12c 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -6,6 +6,7 @@
 Typical usage example in a TestSuite::
 
     from framework.remote_session import PythonShell
+
     python_shell = PythonShell(self.tg_node, timeout=5, privileged=True)
     python_shell.send_command("print('Hello World')")
     python_shell.close()
diff --git a/dts/framework/remote_session/single_active_interactive_shell.py b/dts/framework/remote_session/single_active_interactive_shell.py
index a53e8fc6e1..3539f634f9 100644
--- a/dts/framework/remote_session/single_active_interactive_shell.py
+++ b/dts/framework/remote_session/single_active_interactive_shell.py
@@ -124,9 +124,7 @@ def __init__(
         super().__init__()
 
     def _setup_ssh_channel(self):
-        self._ssh_channel = (
-            self._node.main_session.interactive_session.session.invoke_shell()
-        )
+        self._ssh_channel = self._node.main_session.interactive_session.session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
         self._stdout = self._ssh_channel.makefile("r")
         self._ssh_channel.settimeout(self._timeout)
@@ -136,9 +134,7 @@ def _make_start_command(self) -> str:
         """Makes the command that starts the interactive shell."""
         start_command = f"{self._real_path} {self._app_params or ''}"
         if self._privileged:
-            start_command = self._node.main_session._get_privileged_command(
-                start_command
-            )
+            start_command = self._node.main_session._get_privileged_command(start_command)
         return start_command
 
     def _start_application(self) -> None:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index d228ed1b18..510be1a870 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -136,25 +136,17 @@ def run(self) -> None:
 
             # for all test run sections
             for test_run_with_nodes_config in self._configuration.test_runs_with_nodes:
-                test_run_config, sut_node_config, tg_node_config = (
-                    test_run_with_nodes_config
-                )
+                test_run_config, sut_node_config, tg_node_config = test_run_with_nodes_config
                 self._logger.set_stage(DtsStage.test_run_setup)
-                self._logger.info(
-                    f"Running test run with SUT '{sut_node_config.name}'."
-                )
+                self._logger.info(f"Running test run with SUT '{sut_node_config.name}'.")
                 self._init_random_seed(test_run_config)
                 test_run_result = self._result.add_test_run(test_run_config)
                 # we don't want to modify the original config, so create a copy
                 test_run_test_suites = list(
-                    SETTINGS.test_suites
-                    if SETTINGS.test_suites
-                    else test_run_config.test_suites
+                    SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites
                 )
                 if not test_run_config.skip_smoke_tests:
-                    test_run_test_suites[:0] = [
-                        TestSuiteConfig(test_suite="smoke_tests")
-                    ]
+                    test_run_test_suites[:0] = [TestSuiteConfig(test_suite="smoke_tests")]
                 try:
                     test_suites_with_cases = self._get_test_suites_with_cases(
                         test_run_test_suites, test_run_config.func, test_run_config.perf
@@ -162,8 +154,7 @@ def run(self) -> None:
                     test_run_result.test_suites_with_cases = test_suites_with_cases
                 except Exception as e:
                     self._logger.exception(
-                        f"Invalid test suite configuration found: "
-                        f"{test_run_test_suites}."
+                        f"Invalid test suite configuration found: " f"{test_run_test_suites}."
                     )
                     test_run_result.update_setup(Result.FAIL, e)
 
@@ -245,9 +236,7 @@ def _get_test_suites_with_cases(
                 test_cases.extend(perf_test_cases)
 
             test_suites_with_cases.append(
-                TestSuiteWithCases(
-                    test_suite_class=test_suite_class, test_cases=test_cases
-                )
+                TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases)
             )
         return test_suites_with_cases
 
@@ -351,9 +340,7 @@ def _run_test_run(
             test_run_result.update_setup(Result.FAIL, e)
 
         else:
-            self._run_test_suites(
-                sut_node, tg_node, test_run_result, test_suites_with_cases
-            )
+            self._run_test_suites(sut_node, tg_node, test_run_result, test_suites_with_cases)
 
         finally:
             try:
@@ -371,16 +358,13 @@ def _get_supported_capabilities(
         topology_config: Topology,
         test_suites_with_cases: Iterable[TestSuiteWithCases],
     ) -> set[Capability]:
-
         capabilities_to_check = set()
         for test_suite_with_cases in test_suites_with_cases:
             capabilities_to_check.update(test_suite_with_cases.required_capabilities)
 
         self._logger.debug(f"Found capabilities to check: {capabilities_to_check}")
 
-        return get_supported_capabilities(
-            sut_node, topology_config, capabilities_to_check
-        )
+        return get_supported_capabilities(sut_node, topology_config, capabilities_to_check)
 
     def _run_test_suites(
         self,
@@ -625,9 +609,7 @@ def _execute_test_case(
             self._logger.exception(f"Test case execution ERROR: {test_case_name}")
             test_case_result.update(Result.ERROR, e)
         except KeyboardInterrupt:
-            self._logger.error(
-                f"Test case execution INTERRUPTED by user: {test_case_name}"
-            )
+            self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}")
             test_case_result.update(Result.SKIP)
             raise KeyboardInterrupt("Stop DTS")
 
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 91f317105a..873d400bec 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -88,6 +88,7 @@
 Typical usage example::
 
   from framework.settings import SETTINGS
+
   foo = SETTINGS.foo
 """
 
@@ -257,9 +258,7 @@ def _get_help_string(self, action):
         return help
 
 
-def _required_with_one_of(
-    parser: _DTSArgumentParser, action: Action, *required_dests: str
-) -> None:
+def _required_with_one_of(parser: _DTSArgumentParser, action: Action, *required_dests: str) -> None:
     """Verify that `action` is listed together with at least one of `required_dests`.
 
     Verify that when `action` is among the command-line arguments or
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index fd6706289e..161bb10066 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -300,9 +300,7 @@ def get_expected_packet(self, packet: Packet) -> Packet:
         """
         return self.get_expected_packets([packet])[0]
 
-    def _adjust_addresses(
-        self, packets: list[Packet], expected: bool = False
-    ) -> list[Packet]:
+    def _adjust_addresses(self, packets: list[Packet], expected: bool = False) -> list[Packet]:
         """L2 and L3 address additions in both directions.
 
         Copies of `packets` will be made, modified and returned in this method.
@@ -380,21 +378,15 @@ def verify(self, condition: bool, failure_description: str) -> None:
             self._fail_test_case_verify(failure_description)
 
     def _fail_test_case_verify(self, failure_description: str) -> None:
-        self._logger.debug(
-            "A test case failed, showing the last 10 commands executed on SUT:"
-        )
+        self._logger.debug("A test case failed, showing the last 10 commands executed on SUT:")
         for command_res in self.sut_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
-        self._logger.debug(
-            "A test case failed, showing the last 10 commands executed on TG:"
-        )
+        self._logger.debug("A test case failed, showing the last 10 commands executed on TG:")
         for command_res in self.tg_node.main_session.remote_session.history[-10:]:
             self._logger.debug(command_res.command)
         raise TestCaseVerifyError(failure_description)
 
-    def verify_packets(
-        self, expected_packet: Packet, received_packets: list[Packet]
-    ) -> None:
+    def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
         """Verify that `expected_packet` has been received.
 
         Go through `received_packets` and check that `expected_packet` is among them.
@@ -416,9 +408,7 @@ def verify_packets(
                 f"The expected packet {get_packet_summaries(expected_packet)} "
                 f"not found among received {get_packet_summaries(received_packets)}"
             )
-            self._fail_test_case_verify(
-                "An expected packet not found among received packets."
-            )
+            self._fail_test_case_verify("An expected packet not found among received packets.")
 
     def match_all_packets(
         self, expected_packets: list[Packet], received_packets: list[Packet]
@@ -454,9 +444,7 @@ def match_all_packets(
                 f"but {missing_packets_count} were missing."
             )
 
-    def _compare_packets(
-        self, expected_packet: Packet, received_packet: Packet
-    ) -> bool:
+    def _compare_packets(self, expected_packet: Packet, received_packet: Packet) -> bool:
         self._logger.debug(
             f"Comparing packets: \n{expected_packet.summary()}\n{received_packet.summary()}"
         )
@@ -485,14 +473,10 @@ def _compare_packets(
             expected_payload = expected_payload.payload
 
         if expected_payload:
-            self._logger.debug(
-                f"The expected packet did not contain {expected_payload}."
-            )
+            self._logger.debug(f"The expected packet did not contain {expected_payload}.")
             return False
         if received_payload and received_payload.__class__ != Padding:
-            self._logger.debug(
-                "The received payload had extra layers which were not padding."
-            )
+            self._logger.debug("The received payload had extra layers which were not padding.")
             return False
         return True
 
@@ -519,10 +503,7 @@ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
 
     def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         self._logger.debug("Looking at the IP layer.")
-        if (
-            received_packet.src != expected_packet.src
-            or received_packet.dst != expected_packet.dst
-        ):
+        if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst:
             return False
         return True
 
diff --git a/dts/framework/testbed_model/capability.py b/dts/framework/testbed_model/capability.py
index 6e06c75c3d..63f99c4479 100644
--- a/dts/framework/testbed_model/capability.py
+++ b/dts/framework/testbed_model/capability.py
@@ -130,9 +130,7 @@ def _get_and_reset(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
 
     @classmethod
     @abstractmethod
-    def get_supported_capabilities(
-        cls, sut_node: SutNode, topology: "Topology"
-    ) -> set[Self]:
+    def get_supported_capabilities(cls, sut_node: SutNode, topology: "Topology") -> set[Self]:
         """Get the support status of each registered capability.
 
         Each subclass must implement this method and return the subset of supported capabilities
@@ -242,18 +240,14 @@ def get_supported_capabilities(
                         if capability.nic_capability in supported_capabilities:
                             supported_conditional_capabilities.add(capability)
 
-        logger.debug(
-            f"Found supported capabilities {supported_conditional_capabilities}."
-        )
+        logger.debug(f"Found supported capabilities {supported_conditional_capabilities}.")
         return supported_conditional_capabilities
 
     @classmethod
     def _get_decorated_capabilities_map(
         cls,
     ) -> dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]]:
-        capabilities_map: dict[
-            TestPmdShellDecorator | None, set["DecoratedNicCapability"]
-        ] = {}
+        capabilities_map: dict[TestPmdShellDecorator | None, set["DecoratedNicCapability"]] = {}
         for capability in cls.capabilities_to_check:
             if capability.capability_decorator not in capabilities_map:
                 capabilities_map[capability.capability_decorator] = set()
@@ -316,9 +310,7 @@ class TopologyCapability(Capability):
     _unique_capabilities: ClassVar[dict[str, Self]] = {}
 
     def _preprocess_required(self, test_case_or_suite: type["TestProtocol"]) -> None:
-        test_case_or_suite.required_capabilities.discard(
-            test_case_or_suite.topology_type
-        )
+        test_case_or_suite.required_capabilities.discard(test_case_or_suite.topology_type)
         test_case_or_suite.topology_type = self
 
     @classmethod
@@ -458,9 +450,7 @@ class TestProtocol(Protocol):
     #: The reason for skipping the test case or suite.
     skip_reason: ClassVar[str] = ""
     #: The topology type of the test case or suite.
-    topology_type: ClassVar[TopologyCapability] = TopologyCapability(
-        TopologyType.default
-    )
+    topology_type: ClassVar[TopologyCapability] = TopologyCapability(TopologyType.default)
     #: The capabilities the test case or suite requires in order to be executed.
     required_capabilities: ClassVar[set[Capability]] = set()
 
diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 0746878770..46bf13960d 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -67,10 +67,10 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         There are four supported logical core list formats::
 
-            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
-            lcore_list=[0,1,2,3]        # a list of int indices
-            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
-            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+            lcore_list = [LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list = [0, 1, 2, 3]  # a list of int indices
+            lcore_list = ["0", "1", "2-3"]  # a list of str indices; ranges are supported
+            lcore_list = "0,1,2-3"  # a comma delimited str of indices; ranges are supported
 
         Args:
             lcore_list: Various ways to represent multiple logical cores.
@@ -87,9 +87,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
         # the input lcores may not be sorted
         self._lcore_list.sort()
-        self._lcore_str = (
-            f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
-        )
+        self._lcore_str = f'{",".join(self._get_consecutive_lcores_range(self._lcore_list))}'
 
     @property
     def lcore_list(self) -> list[int]:
@@ -104,15 +102,11 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
                 segment.append(lcore_id)
             else:
                 formatted_core_list.append(
-                    f"{segment[0]}-{segment[-1]}"
-                    if len(segment) > 1
-                    else f"{segment[0]}"
+                    f"{segment[0]}-{segment[-1]}" if len(segment) > 1 else f"{segment[0]}"
                 )
                 current_core_index = lcore_ids_list.index(lcore_id)
                 formatted_core_list.extend(
-                    self._get_consecutive_lcores_range(
-                        lcore_ids_list[current_core_index:]
-                    )
+                    self._get_consecutive_lcores_range(lcore_ids_list[current_core_index:])
                 )
                 segment.clear()
                 break
@@ -172,9 +166,7 @@ def __init__(
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
-        self._lcores_to_filter = sorted(
-            lcore_list, key=lambda x: x.core, reverse=not ascending
-        )
+        self._lcores_to_filter = sorted(lcore_list, key=lambda x: x.core, reverse=not ascending)
         self.filter()
 
     @abstractmethod
@@ -302,9 +294,7 @@ def _filter_cores_from_socket(
                 else:
                     # we have enough lcores per this core
                     continue
-            elif self._filter_specifier.cores_per_socket > len(
-                lcore_count_per_core_map
-            ):
+            elif self._filter_specifier.cores_per_socket > len(lcore_count_per_core_map):
                 # only add cores if we need more
                 lcore_count_per_core_map[lcore.core] = 1
                 filtered_lcores.append(lcore)
diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index b316f23b4e..bda2d448f7 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -83,9 +83,7 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(
-        self, number_of: int, hugepage_size: int, force_first_numa: bool
-    ) -> None:
+    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
         """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.
 
         Raises:
@@ -133,9 +131,7 @@ def _mount_huge_pages(self) -> None:
         if result.stdout == "":
             remote_mount_path = "/mnt/huge"
             self.send_command(f"mkdir -p {remote_mount_path}", privileged=True)
-            self.send_command(
-                f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True
-            )
+            self.send_command(f"mount -t hugetlbfs nodev {remote_mount_path}", privileged=True)
 
     def _supports_numa(self) -> bool:
         # the system supports numa if self._numa_nodes is non-empty and there are more
@@ -143,13 +139,9 @@ def _supports_numa(self) -> bool:
         # there's no reason to do any numa specific configuration)
         return len(self._numa_nodes) > 1
 
-    def _configure_huge_pages(
-        self, number_of: int, size: int, force_first_numa: bool
-    ) -> None:
+    def _configure_huge_pages(self, number_of: int, size: int, force_first_numa: bool) -> None:
         self._logger.info("Configuring Hugepages.")
-        hugepage_config_path = (
-            f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
-        )
+        hugepage_config_path = f"/sys/kernel/mm/hugepages/hugepages-{size}kB/nr_hugepages"
         if force_first_numa and self._supports_numa():
             # clear non-numa hugepages
             self.send_command(f"echo 0 | tee {hugepage_config_path}", privileged=True)
@@ -158,25 +150,19 @@ def _configure_huge_pages(
                 f"/hugepages-{size}kB/nr_hugepages"
             )
 
-        self.send_command(
-            f"echo {number_of} | tee {hugepage_config_path}", privileged=True
-        )
+        self.send_command(f"echo {number_of} | tee {hugepage_config_path}", privileged=True)
 
     def update_ports(self, ports: list[Port]) -> None:
         """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
-            assert (
-                port.node == self.name
-            ), "Attempted to gather port info on the wrong node"
+            assert port.node == self.name, "Attempted to gather port info on the wrong node"
 
         port_info_list = self._get_lshw_info()
         for port in ports:
             for port_info in port_info_list:
                 if f"pci@{port.pci}" == port_info.get("businfo"):
-                    self._update_port_attr(
-                        port, port_info.get("logicalname"), "logical_name"
-                    )
+                    self._update_port_attr(port, port_info.get("logicalname"), "logical_name")
                     self._update_port_attr(port, port_info.get("serial"), "mac_address")
                     port_info_list.remove(port_info)
                     break
@@ -187,14 +173,10 @@ def _get_lshw_info(self) -> list[LshwOutput]:
         output = self.send_command("lshw -quiet -json -C network", verify=True)
         return json.loads(output.stdout)
 
-    def _update_port_attr(
-        self, port: Port, attr_value: str | None, attr_name: str
-    ) -> None:
+    def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) -> None:
         if attr_value:
             setattr(port, attr_name, attr_value)
-            self._logger.debug(
-                f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
-            )
+            self._logger.debug(f"Found '{attr_name}' of port {port.pci}: '{attr_value}'.")
         else:
             self._logger.warning(
                 f"Attempted to get '{attr_name}' of port {port.pci}, but it doesn't exist."
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e8021a4afe..c6f12319ca 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -198,9 +198,7 @@ def close(self) -> None:
             session.close()
 
 
-def create_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLogger
-) -> OSSession:
+def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession:
     """Factory for OS-aware sessions.
 
     Args:
diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 1b2885be5d..28eccc05ed 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -22,6 +22,7 @@
     the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux
     and other commands for other OSs. It also translates the path to match the underlying OS.
 """
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from dataclasses import dataclass
@@ -195,9 +196,7 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         """
 
     @abstractmethod
-    def copy_from(
-        self, source_file: str | PurePath, destination_dir: str | Path
-    ) -> None:
+    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
         """Copy a file from the remote node to the local filesystem.
 
         Copy `source_file` from the remote node associated with this remote
@@ -303,9 +302,7 @@ def copy_dir_to(
         """
 
     @abstractmethod
-    def remove_remote_file(
-        self, remote_file_path: str | PurePath, force: bool = True
-    ) -> None:
+    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
         """Remove remote file, by default remove forcefully.
 
         Args:
@@ -479,9 +476,7 @@ def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
 
     @abstractmethod
-    def setup_hugepages(
-        self, number_of: int, hugepage_size: int, force_first_numa: bool
-    ) -> None:
+    def setup_hugepages(self, number_of: int, hugepage_size: int, force_first_numa: bool) -> None:
         """Configure hugepages on the node.
 
         Get the node's Hugepage Size, configure the specified count of hugepages
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 817405bea4..566f4c5b46 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -8,7 +8,6 @@
 drivers and address.
 """
 
-
 from dataclasses import dataclass
 
 from framework.config import PortConfig
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index f707b6e17b..29e314db6e 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -96,9 +96,7 @@ def remote_path_exists(self, remote_path: str | PurePath) -> bool:
         result = self.send_command(f"test -e {remote_path}")
         return not result.return_code
 
-    def copy_from(
-        self, source_file: str | PurePath, destination_dir: str | Path
-    ) -> None:
+    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_dir)
 
@@ -115,16 +113,12 @@ def copy_dir_from(
     ) -> None:
         """Overrides :meth:`~.os_session.OSSession.copy_dir_from`."""
         source_dir = PurePath(source_dir)
-        remote_tarball_path = self.create_remote_tarball(
-            source_dir, compress_format, exclude
-        )
+        remote_tarball_path = self.create_remote_tarball(source_dir, compress_format, exclude)
 
         self.copy_from(remote_tarball_path, destination_dir)
         self.remove_remote_file(remote_tarball_path)
 
-        tarball_path = Path(
-            destination_dir, f"{source_dir.name}.{compress_format.extension}"
-        )
+        tarball_path = Path(destination_dir, f"{source_dir.name}.{compress_format.extension}")
         extract_tarball(tarball_path)
         tarball_path.unlink()
 
@@ -147,9 +141,7 @@ def copy_dir_to(
         self.extract_remote_tarball(remote_tar_path)
         self.remove_remote_file(remote_tar_path)
 
-    def remove_remote_file(
-        self, remote_file_path: str | PurePath, force: bool = True
-    ) -> None:
+    def remove_remote_file(self, remote_file_path: str | PurePath, force: bool = True) -> None:
         """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(f=force)
         self.send_command(f"rm{opts} {remote_file_path}")
@@ -184,15 +176,11 @@ def generate_tar_exclude_args(exclude_patterns) -> str:
             """
             if exclude_patterns:
                 exclude_patterns = convert_to_list_of_string(exclude_patterns)
-                return "".join(
-                    [f" --exclude={pattern}" for pattern in exclude_patterns]
-                )
+                return "".join([f" --exclude={pattern}" for pattern in exclude_patterns])
             return ""
 
         posix_remote_dir_path = PurePosixPath(remote_dir_path)
-        target_tarball_path = PurePosixPath(
-            f"{remote_dir_path}.{compress_format.extension}"
-        )
+        target_tarball_path = PurePosixPath(f"{remote_dir_path}.{compress_format.extension}")
 
         self.send_command(
             f"tar caf {target_tarball_path}{generate_tar_exclude_args(exclude)} "
@@ -285,9 +273,7 @@ def build_dpdk(
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
         """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
-        out = self.send_command(
-            f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
-        )
+        out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
@@ -302,9 +288,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._check_dpdk_hugepages(dpdk_runtime_dirs)
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
-    def _get_dpdk_runtime_dirs(
-        self, dpdk_prefix_list: Iterable[str]
-    ) -> list[PurePosixPath]:
+    def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
         """Find runtime directories DPDK apps are currently using.
 
         Args:
@@ -332,9 +316,7 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         Returns:
             The contents of remote_path. If remote_path doesn't exist, return None.
         """
-        out = self.send_command(
-            f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
-        ).stdout
+        out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
         if "No such file or directory" in out:
             return None
         else:
@@ -365,9 +347,7 @@ def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[in
                             pids.append(int(match.group(1)))
         return pids
 
-    def _check_dpdk_hugepages(
-        self, dpdk_runtime_dirs: Iterable[str | PurePath]
-    ) -> None:
+    def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
         """Check there aren't any leftover hugepages.
 
         If any hugepages are found, emit a warning. The hugepages are investigated in the
@@ -386,9 +366,7 @@ def _check_dpdk_hugepages(
                     self._logger.warning(out)
                     self._logger.warning("*******************************************")
 
-    def _remove_dpdk_runtime_dirs(
-        self, dpdk_runtime_dirs: Iterable[str | PurePath]
-    ) -> None:
+    def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
@@ -425,6 +403,4 @@ def get_node_info(self) -> OSSessionInfo:
             SETTINGS.timeout,
         ).stdout.split("\n")
         kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout
-        return OSSessionInfo(
-            os_release_info[0].strip(), os_release_info[1].strip(), kernel_version
-        )
+        return OSSessionInfo(os_release_info[0].strip(), os_release_info[1].strip(), kernel_version)
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 6adcff01c2..a9dc0a474a 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -11,7 +11,6 @@
 An SUT node is where this SUT runs.
 """
 
-
 import os
 import time
 from dataclasses import dataclass
@@ -139,9 +138,7 @@ def remote_dpdk_build_dir(self) -> str | PurePath:
     def dpdk_version(self) -> str | None:
         """Last built DPDK version."""
         if self._dpdk_version is None:
-            self._dpdk_version = self.main_session.get_dpdk_version(
-                self._remote_dpdk_tree_path
-            )
+            self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_tree_path)
         return self._dpdk_version
 
     @property
@@ -155,9 +152,7 @@ def node_info(self) -> OSSessionInfo:
     def compiler_version(self) -> str | None:
         """The node's compiler version."""
         if self._compiler_version is None:
-            self._logger.warning(
-                "The `compiler_version` is None because a pre-built DPDK is used."
-            )
+            self._logger.warning("The `compiler_version` is None because a pre-built DPDK is used.")
 
         return self._compiler_version
 
@@ -185,9 +180,7 @@ def get_dpdk_build_info(self) -> DPDKBuildInfo:
         Returns:
             The DPDK build information,
         """
-        return DPDKBuildInfo(
-            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
-        )
+        return DPDKBuildInfo(dpdk_version=self.dpdk_version, compiler_version=self.compiler_version)
 
     def set_up_test_run(
         self,
@@ -277,9 +270,7 @@ def _set_remote_dpdk_tree_path(self, dpdk_tree: PurePath):
                 f"Remote DPDK source tree '{dpdk_tree}' not found in SUT node."
             )
         if not self.main_session.is_remote_dir(dpdk_tree):
-            raise ConfigurationError(
-                f"Remote DPDK source tree '{dpdk_tree}' must be a directory."
-            )
+            raise ConfigurationError(f"Remote DPDK source tree '{dpdk_tree}' must be a directory.")
 
         self.__remote_dpdk_tree_path = dpdk_tree
 
@@ -315,13 +306,9 @@ def _validate_remote_dpdk_tarball(self, dpdk_tarball: PurePath) -> None:
             ConfigurationError: If the `dpdk_tarball` is a valid path but not a valid tar archive.
         """
         if not self.main_session.remote_path_exists(dpdk_tarball):
-            raise RemoteFileNotFoundError(
-                f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT."
-            )
+            raise RemoteFileNotFoundError(f"Remote DPDK tarball '{dpdk_tarball}' not found in SUT.")
         if not self.main_session.is_remote_tarfile(dpdk_tarball):
-            raise ConfigurationError(
-                f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive."
-            )
+            raise ConfigurationError(f"Remote DPDK tarball '{dpdk_tarball}' must be a tar archive.")
 
     def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
         """Copy the local DPDK tarball to the SUT node.
@@ -336,9 +323,7 @@ def _copy_dpdk_tarball_to_remote(self, dpdk_tarball: Path) -> PurePath:
             f"Copying DPDK tarball to SUT: '{dpdk_tarball}' into '{self._remote_tmp_dir}'."
         )
         self.main_session.copy_to(dpdk_tarball, self._remote_tmp_dir)
-        return self.main_session.join_remote_path(
-            self._remote_tmp_dir, dpdk_tarball.name
-        )
+        return self.main_session.join_remote_path(self._remote_tmp_dir, dpdk_tarball.name)
 
     def _prepare_and_extract_dpdk_tarball(self, remote_tarball_path: PurePath) -> None:
         """Prepare the remote DPDK tree path and extract the tarball.
@@ -362,9 +347,7 @@ def remove_tarball_suffix(remote_tarball_path: PurePath) -> PurePath:
             if len(remote_tarball_path.suffixes) > 1:
                 if remote_tarball_path.suffixes[-2] == ".tar":
                     suffixes_to_remove = "".join(remote_tarball_path.suffixes[-2:])
-                    return PurePath(
-                        str(remote_tarball_path).replace(suffixes_to_remove, "")
-                    )
+                    return PurePath(str(remote_tarball_path).replace(suffixes_to_remove, ""))
             return remote_tarball_path.with_suffix("")
 
         tarball_top_dir = self.main_session.get_tarball_top_dir(remote_tarball_path)
@@ -407,9 +390,7 @@ def _set_remote_dpdk_build_dir(self, build_dir: str):
 
         self._remote_dpdk_build_dir = PurePath(remote_dpdk_build_dir)
 
-    def _configure_dpdk_build(
-        self, dpdk_build_config: DPDKBuildOptionsConfiguration
-    ) -> None:
+    def _configure_dpdk_build(self, dpdk_build_config: DPDKBuildOptionsConfiguration) -> None:
         """Populate common environment variables and set the DPDK build related properties.
 
         This method sets `compiler_version` for additional information and `remote_dpdk_build_dir`
@@ -419,13 +400,9 @@ def _configure_dpdk_build(
             dpdk_build_config: A DPDK build configuration to test.
         """
         self._env_vars = {}
-        self._env_vars.update(
-            self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch)
-        )
+        self._env_vars.update(self.main_session.get_dpdk_build_env_vars(dpdk_build_config.arch))
         if compiler_wrapper := dpdk_build_config.compiler_wrapper:
-            self._env_vars["CC"] = (
-                f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
-            )
+            self._env_vars["CC"] = f"'{compiler_wrapper} {dpdk_build_config.compiler.name}'"
         else:
             self._env_vars["CC"] = dpdk_build_config.compiler.name
 
@@ -476,9 +453,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
         if app_name == "all":
-            return self.main_session.join_remote_path(
-                self.remote_dpdk_build_dir, "examples"
-            )
+            return self.main_session.join_remote_path(self.remote_dpdk_build_dir, "examples")
         return self.main_session.join_remote_path(
             self.remote_dpdk_build_dir, "examples", f"dpdk-{app_name}"
         )
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 2c10aff4ef..0bad59d2a4 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -58,9 +58,7 @@ def get_from_value(cls, value: int) -> "TopologyType":
             case 2:
                 return TopologyType.two_links
             case _:
-                raise ConfigurationError(
-                    "More than two links in a topology are not supported."
-                )
+                raise ConfigurationError("More than two links in a topology are not supported.")
 
 
 class Topology:
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index e7fd511a00..e501f6d5ee 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -39,10 +39,6 @@ def create_traffic_generator(
     """
     match traffic_generator_config:
         case ScapyTrafficGeneratorConfig():
-            return ScapyTrafficGenerator(
-                tg_node, traffic_generator_config, privileged=True
-            )
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config, privileged=True)
         case _:
-            raise ConfigurationError(
-                f"Unknown traffic generator: {traffic_generator_config.type}"
-            )
+            raise ConfigurationError(f"Unknown traffic generator: {traffic_generator_config.type}")
diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 16cc361cab..07e1242548 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -12,7 +12,6 @@
 implement the methods for handling packets by sending commands into the interactive shell.
 """
 
-
 import re
 import time
 from typing import ClassVar
@@ -231,9 +230,7 @@ def _shell_start_and_stop_sniffing(self, duration: float) -> list[Packet]:
         self.send_command(f"{self._sniffer_name}.start()")
         # Insert a one second delay to prevent timeout errors from occurring
         time.sleep(duration + 1)
-        self.send_command(
-            f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)"
-        )
+        self.send_command(f"{sniffed_packets_name} = {self._sniffer_name}.stop(join=True)")
         # An extra newline is required here due to the nature of interactive Python shells
         packet_strs = self.send_command(
             f"for pakt in {sniffed_packets_name}: print(bytes_base64(pakt.build()))\n"
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 6ff9a485ba..6839bcf243 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -31,9 +31,7 @@
 REGEX_FOR_PCI_ADDRESS: str = r"[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}"
 _REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC: str = r"(?:[\da-fA-F]{2}[:-]){5}[\da-fA-F]{2}"
 _REGEX_FOR_DOT_SEP_MAC: str = r"(?:[\da-fA-F]{4}.){2}[\da-fA-F]{4}"
-REGEX_FOR_MAC_ADDRESS: str = (
-    rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
-)
+REGEX_FOR_MAC_ADDRESS: str = rf"{_REGEX_FOR_COLON_OR_HYPHEN_SEP_MAC}|{_REGEX_FOR_DOT_SEP_MAC}"
 REGEX_FOR_BASE64_ENCODING: str = "[-a-zA-Z0-9+\\/]*={0,3}"
 
 
@@ -58,9 +56,7 @@ def expand_range(range_str: str) -> list[int]:
         range_boundaries = range_str.split("-")
         # will throw an exception when items in range_boundaries can't be converted,
         # serving as type check
-        expanded_range.extend(
-            range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1)
-        )
+        expanded_range.extend(range(int(range_boundaries[0]), int(range_boundaries[-1]) + 1))
 
     return expanded_range
 
@@ -77,9 +73,7 @@ def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
-        packet_summaries = json.dumps(
-            list(map(lambda pkt: pkt.summary(), packets)), indent=4
-        )
+        packet_summaries = json.dumps(list(map(lambda pkt: pkt.summary(), packets)), indent=4)
     return f"Packet contents: \n{packet_summaries}"
 
 
@@ -87,9 +81,7 @@ class StrEnum(Enum):
     """Enum with members stored as strings."""
 
     @staticmethod
-    def _generate_next_value_(
-        name: str, start: int, count: int, last_values: object
-    ) -> str:
+    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
         return name
 
     def __str__(self) -> str:
@@ -116,9 +108,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
 
                 meson_args = MesonArgs(enable_kmods=True).
         """
-        self._default_library = (
-            f"--default-library={default_library}" if default_library else ""
-        )
+        self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
             (
                 f"-D{dpdk_arg_name}={dpdk_arg_value}"
@@ -159,9 +149,7 @@ def extension(self):
         For other compression formats, the extension will be in the format
         'tar.{compression format}'.
         """
-        return (
-            f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
-        )
+        return f"{self.value}" if self == self.none else f"{self.none.value}.{self.value}"
 
 
 def convert_to_list_of_string(value: Any | list[Any]) -> list[str]:
@@ -206,9 +194,7 @@ def create_filter_function(
 
             def filter_func(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo | None:
                 file_name = os.path.basename(tarinfo.name)
-                if any(
-                    fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns
-                ):
+                if any(fnmatch.fnmatch(file_name, pattern) for pattern in exclude_patterns):
                     return None
                 return tarinfo
 
@@ -301,9 +287,7 @@ def _make_packet() -> Packet:
             packet /= random.choice(l4_factories)(sport=src_port, dport=dst_port)
 
         max_payload_size = mtu - len(packet)
-        usable_payload_size = (
-            payload_size if payload_size < max_payload_size else max_payload_size
-        )
+        usable_payload_size = payload_size if payload_size < max_payload_size else max_payload_size
         return packet / random.randbytes(usable_payload_size)
 
     return [_make_packet() for _ in range(number_of)]
diff --git a/dts/tests/TestSuite_vlan.py b/dts/tests/TestSuite_vlan.py
index 524854ea89..35221fe362 100644
--- a/dts/tests/TestSuite_vlan.py
+++ b/dts/tests/TestSuite_vlan.py
@@ -38,9 +38,7 @@ class TestVlan(TestSuite):
     tag when insertion is enabled.
     """
 
-    def send_vlan_packet_and_verify(
-        self, should_receive: bool, strip: bool, vlan_id: int
-    ) -> None:
+    def send_vlan_packet_and_verify(self, should_receive: bool, strip: bool, vlan_id: int) -> None:
         """Generate a VLAN packet, send and verify packet with same payload is received on the dut.
 
         Args:
@@ -155,9 +153,7 @@ def test_vlan_no_receipt(self) -> None:
         with TestPmdShell(node=self.sut_node) as testpmd:
             self.vlan_setup(testpmd=testpmd, port_id=0, filtered_id=1)
             testpmd.start()
-            self.send_vlan_packet_and_verify(
-                should_receive=False, strip=False, vlan_id=2
-            )
+            self.send_vlan_packet_and_verify(should_receive=False, strip=False, vlan_id=2)
 
     @func_test
     def test_vlan_header_insertion(self) -> None:
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread
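
A note on the formatting hunks above: they all follow one mechanical
pattern, in which a call or signature wrapped under an earlier, narrower
limit is collapsed back onto a single line once it fits within the
configured 100-character line length. A minimal sketch of that pattern,
assuming Ruff's Black-compatible formatter with line-length = 100 and
reusing the copy_from() signature from os_session.py:

    from pathlib import Path, PurePath


    class OSSession:
        # Wrapped form, as previously committed:
        #
        #     def copy_from(
        #         self, source_file: str | PurePath, destination_dir: str | Path
        #     ) -> None: ...
        #
        # After `ruff format` with line-length = 100 the whole signature fits
        # within the limit, so it is collapsed onto one line:
        def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
            """Copy a file from the remote node to the local filesystem."""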

* [PATCH v2 5/7] dts: update dts-check-format to use Ruff
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
                     ` (3 preceding siblings ...)
  2024-12-12 14:00   ` [PATCH v2 4/7] dts: apply Ruff formatting Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 6/7] dts: remove old linters and formatters Luca Vizzarro
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

Replace the current linters and formatter with Ruff in the
dts-check-format tool.

Bugzilla ID: 1358
Bugzilla ID: 1455

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 devtools/dts-check-format.sh | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/devtools/dts-check-format.sh b/devtools/dts-check-format.sh
index 3f43e17e88..44501f6d3b 100755
--- a/devtools/dts-check-format.sh
+++ b/devtools/dts-check-format.sh
@@ -52,18 +52,11 @@ if $format; then
 	if command -v git > /dev/null; then
 		if git rev-parse --is-inside-work-tree >&-; then
 			heading "Formatting in $directory/"
-			if command -v black > /dev/null; then
-				echo "Formatting code with black:"
-				black .
+			if command -v ruff > /dev/null; then
+				echo "Formatting code with ruff:"
+				ruff format
 			else
-				echo "black is not installed, not formatting"
-				errors=$((errors + 1))
-			fi
-			if command -v isort > /dev/null; then
-				echo "Sorting imports with isort:"
-				isort .
-			else
-				echo "isort is not installed, not sorting imports"
+				echo "ruff is not installed, not formatting"
 				errors=$((errors + 1))
 			fi
 
@@ -89,11 +82,18 @@ if $lint; then
 		echo
 	fi
 	heading "Linting in $directory/"
-	if command -v pylama > /dev/null; then
-		pylama .
-		errors=$((errors + $?))
+	if command -v ruff > /dev/null; then
+		ruff check --fix
+
+		git update-index --refresh
+		retval=$?
+		if [ $retval -ne 0 ]; then
+			echo 'The "needs update" files have been fixed by the linter.'
+			echo 'Please update your commit.'
+		fi
+		errors=$((errors + retval))
 	else
-		echo "pylama not found, unable to run linter"
+		echo "ruff not found, unable to run linter"
 		errors=$((errors + 1))
 	fi
 fi
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread
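
The lint branch above leans on git update-index --refresh, which exits
non-zero when tracked files no longer match the index, so any fixes
applied by "ruff check --fix" are detected and reported back to the
developer. A rough Python sketch of that detection step, for
illustration only (the real tool is the shell script patched above;
this assumes ruff and git are available on PATH):

    import subprocess


    def lint_with_ruff() -> int:
        """Run Ruff's linter with autofix and report whether files changed."""
        subprocess.run(["ruff", "check", "--fix"], check=False)
        # `git update-index --refresh` exits non-zero if tracked files differ
        # from the index, i.e. the linter applied fixes that still need to be
        # committed.
        refresh = subprocess.run(["git", "update-index", "--refresh"], check=False)
        if refresh.returncode != 0:
            print('The "needs update" files have been fixed by the linter.')
            print("Please update your commit.")
        return refresh.returncode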

* [PATCH v2 6/7] dts: remove old linters and formatters
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
                     ` (4 preceding siblings ...)
  2024-12-12 14:00   ` [PATCH v2 5/7] dts: update dts-check-format to use Ruff Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-12 14:00   ` [PATCH v2 7/7] dts: update linters in doc page Luca Vizzarro
  2024-12-20 23:14   ` [PATCH v2 0/7] dts: add Ruff and docstring linting Patrick Robb
  7 siblings, 0 replies; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

Since the addition of Ruff, the previously used linters and formatters
are no longer needed, so remove them.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/poetry.lock    | 170 +--------------------------------------------
 dts/pyproject.toml |  24 -------
 2 files changed, 1 insertion(+), 193 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index aa821f0101..a53bbe03b8 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -108,40 +108,6 @@ files = [
 tests = ["pytest (>=3.2.1,!=3.3.0)"]
 typecheck = ["mypy"]
 
-[[package]]
-name = "black"
-version = "22.12.0"
-description = "The uncompromising code formatter."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
-    {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
-    {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
-    {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
-    {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
-    {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
-    {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
-    {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
-    {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
-    {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
-    {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
-    {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
-]
-
-[package.dependencies]
-click = ">=8.0.0"
-mypy-extensions = ">=0.4.3"
-pathspec = ">=0.9.0"
-platformdirs = ">=2"
-tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
-
-[package.extras]
-colorama = ["colorama (>=0.4.3)"]
-d = ["aiohttp (>=3.7.4)"]
-jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
-uvloop = ["uvloop (>=0.15.2)"]
-
 [[package]]
 name = "certifi"
 version = "2023.7.22"
@@ -328,20 +294,6 @@ files = [
     {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
 ]
 
-[[package]]
-name = "click"
-version = "8.1.6"
-description = "Composable command line interface toolkit"
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "click-8.1.6-py3-none-any.whl", hash = "sha256:fa244bb30b3b5ee2cae3da8f55c9e5e0c0e86093306301fb418eb9dc40fbded5"},
-    {file = "click-8.1.6.tar.gz", hash = "sha256:48ee849951919527a045bfe3bf7baa8a959c423134e1a5b98c05c20ba75a1cbd"},
-]
-
-[package.dependencies]
-colorama = {version = "*", markers = "platform_system == \"Windows\""}
-
 [[package]]
 name = "colorama"
 version = "0.4.6"
@@ -462,23 +414,6 @@ files = [
     {file = "invoke-1.7.3.tar.gz", hash = "sha256:41b428342d466a82135d5ab37119685a989713742be46e42a3a399d685579314"},
 ]
 
-[[package]]
-name = "isort"
-version = "5.12.0"
-description = "A Python utility / library to sort Python imports."
-optional = false
-python-versions = ">=3.8.0"
-files = [
-    {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"},
-    {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"},
-]
-
-[package.extras]
-colors = ["colorama (>=0.4.3)"]
-pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"]
-plugins = ["setuptools"]
-requirements-deprecated-finder = ["pip-api", "pipreqs"]
-
 [[package]]
 name = "jinja2"
 version = "3.1.2"
@@ -565,17 +500,6 @@ files = [
     {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
 ]
 
-[[package]]
-name = "mccabe"
-version = "0.7.0"
-description = "McCabe checker, plugin for flake8"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
-    {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
-]
-
 [[package]]
 name = "mypy"
 version = "1.10.0"
@@ -680,43 +604,6 @@ files = [
 [package.dependencies]
 six = "*"
 
-[[package]]
-name = "pathspec"
-version = "0.11.1"
-description = "Utility library for gitignore style pattern matching of file paths."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "pathspec-0.11.1-py3-none-any.whl", hash = "sha256:d8af70af76652554bd134c22b3e8a1cc46ed7d91edcdd721ef1a0c51a84a5293"},
-    {file = "pathspec-0.11.1.tar.gz", hash = "sha256:2798de800fa92780e33acca925945e9a19a133b715067cf165b8866c15a31687"},
-]
-
-[[package]]
-name = "platformdirs"
-version = "3.9.1"
-description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "platformdirs-3.9.1-py3-none-any.whl", hash = "sha256:ad8291ae0ae5072f66c16945166cb11c63394c7a3ad1b1bc9828ca3162da8c2f"},
-    {file = "platformdirs-3.9.1.tar.gz", hash = "sha256:1b42b450ad933e981d56e59f1b97495428c9bd60698baab9f3eb3d00d5822421"},
-]
-
-[package.extras]
-docs = ["furo (>=2023.5.20)", "proselint (>=0.13)", "sphinx (>=7.0.1)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"]
-test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.3.1)", "pytest-cov (>=4.1)", "pytest-mock (>=3.10)"]
-
-[[package]]
-name = "pycodestyle"
-version = "2.10.0"
-description = "Python style guide checker"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "pycodestyle-2.10.0-py2.py3-none-any.whl", hash = "sha256:8a4eaf0d0495c7395bdab3589ac2db602797d76207242c17d470186815706610"},
-    {file = "pycodestyle-2.10.0.tar.gz", hash = "sha256:347187bdb476329d98f695c213d7295a846d1152ff4fe9bacb8a9590b8ee7053"},
-]
-
 [[package]]
 name = "pycparser"
 version = "2.21"
@@ -872,23 +759,6 @@ azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0
 toml = ["tomli (>=2.0.1)"]
 yaml = ["pyyaml (>=6.0.1)"]
 
-[[package]]
-name = "pydocstyle"
-version = "6.1.1"
-description = "Python docstring style checker"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
-    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
-]
-
-[package.dependencies]
-snowballstemmer = "*"
-
-[package.extras]
-toml = ["toml"]
-
 [[package]]
 name = "pyelftools"
 version = "0.31"
@@ -900,17 +770,6 @@ files = [
     {file = "pyelftools-0.31.tar.gz", hash = "sha256:c774416b10310156879443b81187d182d8d9ee499660380e645918b50bc88f99"},
 ]
 
-[[package]]
-name = "pyflakes"
-version = "2.5.0"
-description = "passive checker of Python programs"
-optional = false
-python-versions = ">=3.6"
-files = [
-    {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"},
-    {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
-]
-
 [[package]]
 name = "pygments"
 version = "2.16.1"
@@ -925,33 +784,6 @@ files = [
 [package.extras]
 plugins = ["importlib-metadata"]
 
-[[package]]
-name = "pylama"
-version = "8.4.1"
-description = "Code audit tool for python"
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "pylama-8.4.1-py3-none-any.whl", hash = "sha256:5bbdbf5b620aba7206d688ed9fc917ecd3d73e15ec1a89647037a09fa3a86e60"},
-    {file = "pylama-8.4.1.tar.gz", hash = "sha256:2d4f7aecfb5b7466216d48610c7d6bad1c3990c29cdd392ad08259b161e486f6"},
-]
-
-[package.dependencies]
-mccabe = ">=0.7.0"
-pycodestyle = ">=2.9.1"
-pydocstyle = ">=6.1.1"
-pyflakes = ">=2.5.0"
-
-[package.extras]
-all = ["eradicate", "mypy", "pylint", "radon", "vulture"]
-eradicate = ["eradicate"]
-mypy = ["mypy"]
-pylint = ["pylint"]
-radon = ["radon"]
-tests = ["eradicate (>=2.0.0)", "mypy", "pylama-quotes", "pylint (>=2.11.1)", "pytest (>=7.1.2)", "pytest-mypy", "radon (>=5.1.0)", "toml", "types-setuptools", "types-toml", "vulture"]
-toml = ["toml (>=0.10.2)"]
-vulture = ["vulture"]
-
 [[package]]
 name = "pynacl"
 version = "1.5.0"
@@ -1388,4 +1220,4 @@ zstd = ["zstandard (>=0.18.0)"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "5f9b61492d95b09c717325396e981bb526fac9b0c16869f1aebc3a57b7b80e49"
+content-hash = "eb9976250d5022a9036bf2b7630ce71e95152d0440a9cb1f127b3b691429777b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 2658a3d22c..8ba76bd3a7 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -24,17 +24,12 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
-pydocstyle = "6.1.1"
 typing-extensions = "^4.11.0"
 aenum = "^3.1.15"
 pydantic = "^2.9.2"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^1.10.0"
-black = "^22.6.0"
-isort = "^5.10.1"
-pylama = "^8.4.1"
-pyflakes = "^2.5.0"
 toml = "^0.10.2"
 ruff = "^0.8.1"
 
@@ -74,27 +69,8 @@ explicit-preview-rules = true # enable ONLY the explicitly selected preview rule
 [tool.ruff.lint.pydocstyle]
 convention = "google"
 
-[tool.pylama]
-linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
-format = "pylint"
-max_line_length = 100
-
-[tool.pylama.linter.pycodestyle]
-ignore = "E203,W503"
-
-[tool.pylama.linter.pydocstyle]
-convention = "google"
-
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
 show_error_codes = true
 warn_unused_ignores = true
-
-[tool.isort]
-profile = "black"
-
-[tool.black]
-target-version = ['py310']
-include = '\.pyi?$'
-line-length = 100
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread
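
Worth noting: the convention = "google" setting kept in pyproject.toml
above is what tells Ruff's pydocstyle rules, and the preview pydoclint
rules enabled earlier in the series, which docstring sections to expect.
A minimal sketch of a compliant Google-style docstring, reusing
expand_range() from utils.py as the example:

    def expand_range(range_str: str) -> list[int]:
        """Expand a range string into a list of integers.

        Args:
            range_str: An inclusive range, either a single number or
                "<start>-<end>", e.g. "0-3".

        Returns:
            Every integer in the range, in ascending order.

        Raises:
            ValueError: If a boundary cannot be converted to an integer.
        """
        boundaries = range_str.split("-")
        return list(range(int(boundaries[0]), int(boundaries[-1]) + 1))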

* [PATCH v2 7/7] dts: update linters in doc page
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
                     ` (5 preceding siblings ...)
  2024-12-12 14:00   ` [PATCH v2 6/7] dts: remove old linters and formatters Luca Vizzarro
@ 2024-12-12 14:00   ` Luca Vizzarro
  2024-12-17 10:15     ` Xu, HailinX
  2024-12-20 23:14   ` [PATCH v2 0/7] dts: add Ruff and docstring linting Patrick Robb
  7 siblings, 1 reply; 17+ messages in thread
From: Luca Vizzarro @ 2024-12-12 14:00 UTC (permalink / raw)
  To: dev; +Cc: Patrick Robb, Luca Vizzarro, Paul Szczepanek

Ruff has now superseded all the previous linters and formatters. Update
the DTS doc page accordingly.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 doc/guides/tools/dts.rst | 26 +++++++-------------------
 1 file changed, 7 insertions(+), 19 deletions(-)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 8c972c31c4..abc389b42a 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -416,32 +416,20 @@ There are four types of methods that comprise a test suite:
 DTS Developer Tools
 -------------------
 
-There are three tools used in DTS to help with code checking, style and formatting:
+There are two tools used in DTS to help with code checking, style and formatting:
 
-* `isort <https://pycqa.github.io/isort/>`_
+* `ruff <https://astral.sh/ruff/>`_
 
-  Alphabetically sorts python imports within blocks.
-
-* `black <https://github.com/psf/black>`_
-
-  Does most of the actual formatting (whitespaces, comments, line length etc.)
-  and works similarly to clang-format.
-
-* `pylama <https://github.com/klen/pylama>`_
-
-  Runs a collection of python linters and aggregates output.
-  It will run these tools over the repository:
-
-  .. literalinclude:: ../../../dts/pyproject.toml
-     :language: cfg
-     :start-after: [tool.pylama]
-     :end-at: linters
+  An extremely fast all-in-one linting and formatting solution, which covers
+  the rules of most if not all the major tools, such as pylama, flake8 and
+  pylint. Its built-in formatter is also Black-compatible and can sort
+  imports automatically, as isort would.
 
 * `mypy <https://github.com/python/mypy>`_
 
   Enables static typing for Python, exploiting the type hints in the source code.
 
-These three tools are all used in ``devtools/dts-check-format.sh``,
+These two tools are both used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread
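
To make the import sorting mentioned above concrete: Ruff's rule I001
reorders import blocks the way isort would, standard library first and
local packages after, each group alphabetized and separated by a blank
line. A small sketch (framework.config is a DTS-local package, so the
import only resolves inside the DTS tree):

    # Before "ruff check --fix", rule I001 flags this unsorted block:
    #
    #     from framework.config import PortConfig
    #     import json
    #     import re
    #
    # After the fix: standard library first, then the local package, with
    # the groups separated by a blank line.
    import json
    import re

    from framework.config import PortConfig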

* RE: [PATCH v2 7/7] dts: update linters in doc page
  2024-12-12 14:00   ` [PATCH v2 7/7] dts: update linters in doc page Luca Vizzarro
@ 2024-12-17 10:15     ` Xu, HailinX
  0 siblings, 0 replies; 17+ messages in thread
From: Xu, HailinX @ 2024-12-17 10:15 UTC (permalink / raw)
  To: Luca Vizzarro, dev
  Cc: Patrick Robb, Paul Szczepanek, Cui, KaixinX, Puttaswamy, Rajesh T

Hi Luca,

There is no error with this series.
The error is caused by the CI not applying any changes under doc/*.
Historically, the community decided to exclude doc/*, as files there change frequently (especially the release notes) and cause a lot of conflicts with the main tree.
We manually merged this patch and compiled it successfully.


Regards,
Xu, Hailin

> -----Original Message-----
> From: Luca Vizzarro <luca.vizzarro@arm.com>
> Sent: Thursday, December 12, 2024 10:00 PM
> To: dev@dpdk.org
> Cc: Patrick Robb <probb@iol.unh.edu>; Luca Vizzarro
> <luca.vizzarro@arm.com>; Paul Szczepanek <paul.szczepanek@arm.com>
> Subject: [PATCH v2 7/7] dts: update linters in doc page
> 
> Ruff has now superseded all the previous linters and formatters. Update the
> DTS doc page accordingly.
> 
> Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
> Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
> ---
>  doc/guides/tools/dts.rst | 26 +++++++-------------------
>  1 file changed, 7 insertions(+), 19 deletions(-)
> 
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index
> 8c972c31c4..abc389b42a 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -416,32 +416,20 @@ There are four types of methods that comprise a
> test suite:
>  DTS Developer Tools
>  -------------------
> 
> -There are three tools used in DTS to help with code checking, style and
> formatting:
> +There are two tools used in DTS to help with code checking, style and
> formatting:
> 
> -* `isort <https://pycqa.github.io/isort/>`_
> +* `ruff <https://astral.sh/ruff/>`_
> 
> -  Alphabetically sorts python imports within blocks.
> -
> -* `black <https://github.com/psf/black>`_
> -
> -  Does most of the actual formatting (whitespaces, comments, line length
> etc.)
> -  and works similarly to clang-format.
> -
> -* `pylama <https://github.com/klen/pylama>`_
> -
> -  Runs a collection of python linters and aggregates output.
> -  It will run these tools over the repository:
> -
> -  .. literalinclude:: ../../../dts/pyproject.toml
> -     :language: cfg
> -     :start-after: [tool.pylama]
> -     :end-at: linters
> +  An extremely fast all-in-one linting and formatting solution, which
> + covers the rules of most if not all the major tools, such as pylama,
> + flake8 and pylint. Its built-in formatter is also Black-compatible
> + and can sort imports automatically, as isort would.
> 
>  * `mypy <https://github.com/python/mypy>`_
> 
>    Enables static typing for Python, exploiting the type hints in the source code.
> 
> -These three tools are all used in ``devtools/dts-check-format.sh``,
> +These two tools are both used in ``devtools/dts-check-format.sh``,
>  the DTS code check and format script.
>  Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
> 
> --
> 2.43.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 0/7] dts: add Ruff and docstring linting
  2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
                     ` (6 preceding siblings ...)
  2024-12-12 14:00   ` [PATCH v2 7/7] dts: update linters in doc page Luca Vizzarro
@ 2024-12-20 23:14   ` Patrick Robb
  7 siblings, 0 replies; 17+ messages in thread
From: Patrick Robb @ 2024-12-20 23:14 UTC (permalink / raw)
  To: Luca Vizzarro; +Cc: dev, Paul Szczepanek

Series-reviewed-by: Patrick Robb <probb@iol.unh.edu>
Tested-by: Patrick Robb <probb@iol.unh.edu>

Paul, I will merge to next-dts now instead of waiting until after the
winter holidays, if that is okay with you.

On Thu, Dec 12, 2024 at 9:02 AM Luca Vizzarro <luca.vizzarro@arm.com> wrote:

> v2:
> - updated the doc page
>
> Luca Vizzarro (7):
>   dts: add Ruff as linter and formatter
>   dts: enable Ruff preview pydoclint rules
>   dts: resolve docstring linter errors
>   dts: apply Ruff formatting
>   dts: update dts-check-format to use Ruff
>   dts: remove old linters and formatters
>   dts: update linters in doc page
>
>  devtools/dts-check-format.sh                  |  30 +--
>  doc/guides/tools/dts.rst                      |  26 +--
>  dts/framework/params/eal.py                   |   5 +-
>  dts/framework/remote_session/dpdk_shell.py    |   1 -
>  dts/framework/remote_session/python_shell.py  |   1 +
>  .../single_active_interactive_shell.py        |   3 +-
>  dts/framework/runner.py                       |  14 +-
>  dts/framework/settings.py                     |   3 +
>  dts/framework/test_suite.py                   |   6 +-
>  dts/framework/testbed_model/capability.py     |  13 +-
>  dts/framework/testbed_model/cpu.py            |  21 +-
>  dts/framework/testbed_model/linux_session.py  |   6 +-
>  dts/framework/testbed_model/node.py           |   3 +
>  dts/framework/testbed_model/os_session.py     |   3 +-
>  dts/framework/testbed_model/port.py           |   1 -
>  dts/framework/testbed_model/posix_session.py  |  16 +-
>  dts/framework/testbed_model/sut_node.py       |   2 +-
>  dts/framework/testbed_model/topology.py       |   6 +
>  .../traffic_generator/__init__.py             |   3 +
>  .../testbed_model/traffic_generator/scapy.py  |   7 +-
>  .../traffic_generator/traffic_generator.py    |   3 +-
>  dts/framework/utils.py                        |   6 +-
>  dts/poetry.lock                               | 197 +++---------------
>  dts/pyproject.toml                            |  40 ++--
>  dts/tests/TestSuite_vlan.py                   |  22 +-
>  25 files changed, 179 insertions(+), 259 deletions(-)
>
> --
> 2.43.0
>
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

Thread overview: 17+ messages
2024-12-10 10:32 [PATCH 0/6] dts: add Ruff and docstring linting Luca Vizzarro
2024-12-10 10:32 ` [PATCH 1/6] dts: add Ruff as linter and formatter Luca Vizzarro
2024-12-10 10:32 ` [PATCH 2/6] dts: enable Ruff preview pydoclint rules Luca Vizzarro
2024-12-10 10:32 ` [PATCH 3/6] dts: fix docstring linter errors Luca Vizzarro
2024-12-10 10:32 ` [PATCH 4/6] dts: apply Ruff formatting Luca Vizzarro
2024-12-10 10:32 ` [PATCH 5/6] dts: update dts-check-format to use Ruff Luca Vizzarro
2024-12-10 10:32 ` [PATCH 6/6] dts: remove old linters and formatters Luca Vizzarro
2024-12-12 14:00 ` [PATCH v2 0/7] dts: add Ruff and docstring linting Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 1/7] dts: add Ruff as linter and formatter Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 2/7] dts: enable Ruff preview pydoclint rules Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 3/7] dts: resolve docstring linter errors Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 4/7] dts: apply Ruff formatting Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 5/7] dts: update dts-check-format to use Ruff Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 6/7] dts: remove old linters and formatters Luca Vizzarro
2024-12-12 14:00   ` [PATCH v2 7/7] dts: update linters in doc page Luca Vizzarro
2024-12-17 10:15     ` Xu, HailinX
2024-12-20 23:14   ` [PATCH v2 0/7] dts: add Ruff and docstring linting Patrick Robb
